Since 1988, DOD has relied on the BRAC process as an important means of reducing excess infrastructure to meet changing force structure needs. DOD has undergone five BRAC rounds: 1988, 1991, 1993, 1995, and 2005. Under the five BRAC rounds, DOD has closed a total of 120 major installations and implemented a number of major and minor realignment actions. Table 1 shows the number of major installation closures, major realignments, and minor closures and realignments for each of the five BRAC rounds. In accordance with the BRAC statute, DOD must complete closure and realignment actions no later than 6 years following the date that the President transmits his report on the BRAC recommendations to Congress. For BRAC 2005, the round’s completion date was September 15, 2011. The statute allows environmental cleanup and property transfer actions associated with BRAC sites to exceed the 6-year time limit, and does not set a deadline for completion. In our prior work, we have reported that the cleanup of contaminated properties has been a key factor related to delays in transferring unneeded property through the BRAC process. In conducting assessments of potential contamination and determining the extent of cleanup required on installations closed because of BRAC decisions, DOD must comply with cleanup standards and processes under all applicable environmental laws, regulations, and executive orders. The Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA), as amended, authorizes cleanup actions at federal facilities where there is a release of hazardous substances or the threat of such a release that can present a threat to public health and the environment. 
The Superfund Amendments and Reauthorization Act of 1986 added provisions to CERCLA specifically governing the cleanup of federal facilities, including active military installations and those closed under BRAC, and required the Secretary of Defense to carry out the Defense Environmental Restoration Program. Under the Defense Environmental Restoration Program, DOD conducts environmental restoration activities at active installations, Formerly Used Defense Site properties, and BRAC locations in the United States to address DOD contamination from hazardous substances, pollutants, or contaminants; unexploded ordnance, discarded military munitions, or munitions constituents; or building demolition and debris removal. Types of environmental contaminants found at military installations include solvents and corrosives; fuels; paint strippers and thinners; metals, such as lead, cadmium, and chromium; and unique military substances, such as nerve agents and unexploded ordnance. The program includes the investigation, identification, and cleanup of contamination from hazardous substances, pollutants, and contaminants; the correction of other environmental damage (such as detection and disposal of unexploded ordnance) that creates a threat to the public health or environment; and the demolition and removal of unsafe buildings and structures. The U.S. Environmental Protection Agency and state regulatory agencies are responsible for overseeing cleanup decisions to ensure that applicable requirements are met. In general, the services, as the owners of the property, put final cleanup remedies in place before the property is transferred. However, under some circumstances the services may conduct an early transfer before cleanup has been completed. 
When remedies are in place for addressing the contamination of a former installation or the services and the communities have agreed to an early transfer, the property can be transferred to a local redevelopment authority responsible for implementing a plan for civilian reuse. Figure 1 shows the major milestones in the cleanup and transfer process required under CERCLA. Since 2005, we have issued more than 30 reports and testimonies on BRAC planning, implementation, costs, and savings that highlight information DOD can use to improve the BRAC recommendation development and implementation process. For example, in 2015 we reported that a variety of homeless assistance was provided as a result of BRAC 2005 but that DOD and the Department of Housing and Urban Development do not require tracking of data about the transfer of property for homeless assistance. We recommended that DOD and the Department of Housing and Urban Development track property transfer status. DOD partially concurred with this recommendation and stated that action on this recommendation is pending the authorization of a future BRAC round. See the Related GAO Products page at the end of this report for a list of reports related to BRAC. 

DOD has captured and reported more-comprehensive cost information in its environmental cost reporting for installations closed under BRAC; however, DOD has not reported to Congress that the cleanup of emerging contaminants could significantly increase the total cost of environmental cleanup at installations closed by the BRAC process. DOD has improved its reporting of environmental cleanup costs to Congress by capturing more-comprehensive information that we identified as missing from its 2007 annual report. 
For example, we reported in 2007 that the costs for environmental cleanup for installations closed under the 2005 BRAC round were not complete partly because DOD excluded the cleanup costs for contaminants released on DOD property after 1986 and munitions released on DOD property after 2002. We recommended in our 2007 report that DOD provide all costs—past and future—required to complete environmental cleanup at each BRAC installation and to fully explain the scope and limitations of all the environmental cleanup costs that DOD reports to Congress. In response, in December 2008 DOD revised the activities eligible for the Defense Environmental Restoration Program by including the cleanup for all identified munitions and hazardous contaminants. In its Fiscal Year 2009 Defense Environmental Programs Annual Report to Congress, DOD began reporting cleanup costs for all identified munitions and contaminants and continued to report these costs in subsequent reports to Congress, including the Defense Environmental Programs Annual Report to Congress for FY 2015. In addition, we reported in 2007 that DOD had not reported complete environmental cleanup costs on installations closed under the 2005 BRAC round partly because it omitted additional information such as program-management costs—indirect, overhead, and management costs related to the environmental cleanup that cannot be attributed to a specific installation. In response, DOD issued an update to its Defense Environmental Restoration Program in March 2012 clarifying the requirement for the services to include program-management costs in their cleanup cost estimates and including these costs in the 2015 annual report to Congress. In its annual report to Congress, DOD reported that it spent $609.6 million to clean up properties closed under BRAC in fiscal year 2015. 
DOD estimated as of September 30, 2015, that it will need about $3.4 billion to complete environmental cleanup for installations closed under all BRAC rounds, including $475 million in program-management costs. Of this $3.4 billion, DOD will need about $435 million to complete the cleanup for installations under the 2005 BRAC round, including $427 million for major installations, and $2.5 billion to complete the cleanup for installations under the legacy rounds (installations closed under the 1988, 1991, 1993, and 1995 BRAC rounds). Through fiscal year 2015, DOD has spent approximately $11.5 billion, including $813 million in program-management costs, for environmental cleanup of installations closed under all BRAC rounds. Table 2 shows the total environmental cleanup costs including the past and remaining costs to complete the environmental cleanup for the 23 major installations closed under BRAC 2005. Table 3 shows the 10 installations from the legacy BRAC rounds (i.e., 1988, 1991, 1993, and 1995) with the highest remaining environmental cleanup costs based on cost information from fiscal year 2015. The cost estimate of $1.2 billion to complete the environmental cleanup at these 10 BRAC installations accounts for 41 percent of the total remaining environmental cleanup costs ($2.9 billion) for installations under all BRAC rounds. DOD did not notify Congress in its most recent report in fiscal year 2015 on environmental programs that the costs for environmental cleanup at BRAC installations will significantly increase due to the high cost of remediating emerging contaminants, primarily perfluorooctane sulfonate (PFOS) and perfluorooctanoic acid (PFOA), two types of perfluorinated compounds, at many of the installations closed under BRAC. According to DOD officials, the services are still investigating the scope of the problem and the costs are not fully defined. 
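The component figures above can be cross-checked with simple arithmetic. The sketch below is illustrative only, using the rounded values cited in this report (in millions of dollars):

```python
# Illustrative cross-check of the remaining BRAC cleanup cost figures
# cited above (rounded values from the report, in millions of dollars).
remaining_2005 = 435      # 2005 BRAC round
remaining_legacy = 2_500  # legacy rounds (1988, 1991, 1993, and 1995)
program_mgmt = 475        # program-management costs

# Direct cleanup costs plus program management approximate the
# $3.4 billion total remaining estimate (differences reflect rounding).
total_remaining = remaining_2005 + remaining_legacy + program_mgmt
print(total_remaining)  # 3410, i.e., about $3.4 billion

# The 10 costliest legacy installations ($1.2 billion) measured against
# the roughly $2.9 billion in remaining direct cleanup costs.
top10_share = 1_200 / (remaining_2005 + remaining_legacy)
print(round(top10_share, 2))  # 0.41, the 41 percent cited above
```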
Although DOD officials have not determined the total costs for cleaning up emerging contaminants at installations closed under BRAC, service BRAC officials stated that the cost of cleaning up perfluorinated compounds—specifically PFOS and PFOA—from installations closed under BRAC will likely be significant. For example, Air Force BRAC officials told us that the agency has programmed $100 million over the next 5 years for the investigation and remediation of emerging contaminants. Further, they stated that this amount does not include the additional cost to complete the cleanup of emerging contaminants such as PFOS and PFOA. Navy BRAC officials stated that the cleanup of PFOS and PFOA at both active and closed installations could greatly increase the environmental cleanup costs, into the billions of dollars. Also, Army BRAC officials stated that the cleanup of PFOS and PFOA is a serious issue that will greatly affect environmental cleanup costs. Service officials stated they are continuing to identify and investigate sites on installations, such as water systems, fire training activities, crash sites, and aircraft hangars, that may have been contaminated with PFOS or PFOA. For example, Air Force BRAC officials stated that as of August 2016 they had identified 30 installations requiring preliminary assessment investigations for PFOS and PFOA, but as of October 2016, the Air Force anticipated that only 25 of these installations would require further investigation. Navy BRAC officials stated that as of September 2016 they had identified 35 BRAC installations with sites that are contaminated or may be potentially contaminated with PFOS and PFOA. Further, Navy BRAC officials stated that this number is likely to increase as they conduct additional investigations. An Army BRAC official stated that Army personnel have begun sampling sites at installations where PFOS and PFOA may have been used, but have not yet determined which installations may be potentially contaminated. 
According to service BRAC officials, it could take many years to identify new sites, investigate soil and water systems, and conduct cleanup actions to remove perfluorinated compounds. However, the presence of perfluorinated compounds has already affected DOD’s ability to transfer property. For example, the transfer of property at a former Navy installation was delayed because, in May 2016, levels of PFOS and PFOA exceeding the Environmental Protection Agency’s health advisory levels were found in the local drinking water. Although the agency’s health advisory is not binding, DOD officials stated that they want to ensure that the levels of PFOS and PFOA in drinking water supplied to residents are below the advisory levels. 

According to DOD’s Financial Management Regulation, transparency and complete accountability in financial reporting and budgetary backup documents are essential elements for providing Congress with a more-comprehensive picture of total costs so it can make appropriate budgetary trade-off decisions to ensure the expeditious cleanup and transfer of properties and ultimately realize savings for the U.S. government. Also, DOD’s manual on the management of the Defense Environmental Restoration Program states that DOD shall improve its financial management and reporting of environmental cleanup costs by providing accurate, complete, reliable, timely, and auditable financial information. However, DOD’s annual report to Congress makes no mention of the significant increase in costs that the department is likely to incur identifying and cleaning up these contaminants. As a result, DOD has not provided Congress with all of the available information on total cleanup costs because its annual report did not reflect that cleanup costs will significantly increase due to PFOS and PFOA on many DOD installations, including those closed under BRAC. 
Although DOD has been investigating the presence of PFOS and PFOA at DOD installations for several years, according to DOD officials, at the time of the last report to Congress, the extent of the issue was not yet known. DOD officials acknowledge they have not provided Congress with a complete picture of total cleanup costs because DOD is working to identify locations where DOD may have a known or suspected release of PFOS and PFOA on DOD installations, including those closed under BRAC. DOD officials told us they have briefed certain committees of Congress on this issue, including the likelihood of increased costs for environmental cleanup. However, DOD has not notified the full Congress through its annual report that DOD’s costs to clean up these contaminants are expected to increase significantly. While the department does not know the exact amount of the costs, DOD has an opportunity to notify Congress of the expected increase and provide its best estimate of the costs based on known information at the time of its annual report. Without DOD notifying Congress about this expected cost increase and providing the best estimate of costs for the cleanup of perfluorinated compounds and other emerging contaminants to the extent known, Congress will lack total visibility over this potentially significant BRAC environmental cleanup effort and will not have necessary information to make more-informed funding decisions. 

Since our 2007 report, DOD has used a variety of methods to continue making progress in transferring unneeded BRAC property. At the same time, installation officials stated that challenges remain, including unique situations that will require more time to overcome, and others that are more widespread, such as navigating multiple regulatory agencies or dealing with emerging contaminants. 
Installation officials indicated that lessons learned from installations that have successfully navigated these more-widespread issues are not easily obtained and that, if such lessons were available, they could help current and future officials expedite the cleanup and transfer of BRAC properties. DOD has reported transferring about 85 percent (490,678 out of 575,758 acres) of unneeded property identified in all BRAC rounds through a variety of methods, which is an increase from the 78 percent of property transferred before and during 2007, the year of our previous review. For the 2005 BRAC round alone, DOD reported that it had transferred about 66 percent (60,224 out of 90,914 acres) of its unneeded property as of September 30, 2015. Figure 2 shows the disposition of the unneeded acreage from all prior BRAC rounds. The bulk of the transfers, 71 percent (406,668 acres), were to nonfederal entities. While DOD’s goal is to transfer property to other entities for reuse, as we noted in our 2007 report, leasing property can also afford the user and DOD some benefits. As we also reported in 2007, communities, for example, can choose leasing as an interim measure while awaiting final environmental cleanup and thereby promote property reuse and job creation. DOD also benefits, in some cases, as the communities assume responsibility for costs of protecting and maintaining these leased properties. By adding leased acres to the number of transferred acres, the amount of unneeded BRAC property being reused rises to 86 percent, according to our analysis. Congress has provided DOD with a wide range of property transfer methods and tools to expedite the cleanup and transfer of unneeded BRAC property, including public benefit conveyances and negotiated property sales. The closure and realignment of individual installations create opportunities for those unneeded properties to be made available to others for reuse. 
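The transfer percentages cited above follow directly from the reported acreage figures; a brief illustrative check:

```python
# Illustrative check of the transfer percentages cited above, using the
# acreage figures as reported (as of September 30, 2015).
total_unneeded = 575_758   # unneeded acres, all BRAC rounds
transferred_all = 490_678  # acres transferred, all rounds
unneeded_2005 = 90_914     # unneeded acres, 2005 round
transferred_2005 = 60_224  # acres transferred, 2005 round
nonfederal = 406_668       # acres transferred to nonfederal entities

print(round(100 * transferred_all / total_unneeded))  # 85 percent, all rounds
print(round(100 * transferred_2005 / unneeded_2005))  # 66 percent, 2005 round
print(round(100 * nonfederal / total_unneeded))       # 71 percent to nonfederal entities
```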
When an installation is closed under BRAC, the unneeded property is reported as excess. Federal property disposal laws require DOD to first screen excess property for possible reuse by defense and other federal agencies. If no federal agency needs the property, it is declared surplus and is made available to nonfederal parties, including state and local agencies, local redevelopment authorities, and the public, using the various transfer tools as shown in table 4. In 2007, we recommended that the services report periodically to OSD on the status and proposed strategies for transferring excess BRAC properties. In response, OSD began requiring the services to report on the status of all excess property including the available acreage and the authority used to transfer the property. Our analysis shows that, in transferring its unneeded property from the previous BRAC rounds, DOD has relied primarily on reversions, economic development conveyances, and public benefit conveyances. Figure 3 shows the breakdown of the methods used by DOD for transferring property from all of the BRAC rounds. Improvements in funding for BRAC-related expenses and cleanup technology have expedited the cleanup process and transfer of property. For example, in 2013, Congress established a single BRAC account to fund all BRAC-related expenses for all BRAC rounds, including costs for environmental cleanup. Previously, there were two BRAC accounts—one to fund activities related to legacy BRAC rounds and one to fund activities related to the 2005 BRAC round. According to DOD and Army officials, the single account has been beneficial in funding cleanup expenses. According to Army officials, the Army was able to sell properties closed under BRAC 2005, and the funds from these sales were available to fund environmental cleanup. 
In addition, as we reported in 2007 and according to Army officials we spoke with during this review, installations closed in the BRAC 2005 round typically needed less environmental cleanup because the services had already implemented cleanup programs while the installations were active. As a result, Army officials told us there was a large surplus in the BRAC 2005 account. With the merging of the two accounts, the Army was able to use the surplus money from the 2005 BRAC account to clean up properties closed in the legacy BRAC rounds. In addition, improvements in cleanup technology have expedited environmental restoration in some locations. In our 2007 report, we reported that the technology to detect and clean up unexploded ordnance was limited and not fully effective. However, since our last report there have been advances in this technology that make the cleanup of unexploded ordnance faster and more effective. For example, officials at Fort Ord, California, informed us that DOD has developed a new tool, the advanced geophysical classification technology, commonly referred to as advanced classification. This technology enables personnel to address the potential explosives safety hazards at munitions response sites by identifying buried metal objects and determining whether they are military munitions or harmless debris. According to officials, the use of the advanced geophysical classification technology will allow DOD to reduce cleanup time and costs. Installation officials also told us about using innovative techniques for cleaning up contaminants in groundwater, including injecting the groundwater with oil or organisms to facilitate the natural attenuation of contaminants; these techniques should help expedite cleanup at BRAC locations. 
Installation officials and local redevelopment authority representatives involved in the cleanup, transfer, and reuse of the property identified challenges that continue to impede the environmental cleanup and transfer process. Officials told us that some of their challenges are due to unique situations that have to be addressed individually, but other challenges are more common and widespread. Installation officials told us that not having information available on previous successful mitigation strategies that may have been identified for common and widespread challenges hampers their ability to expedite the cleanup and transfer of some properties. The unique challenges installation officials told us about during our site visits result in additional time needed to conduct the environmental cleanup and ultimately to transfer the property. For example, at Fort Ord, California, the Army needs to burn the brush on the former training range area to clean up unexploded ordnance. However, to perform this controlled burn, weather conditions must be favorable because of concerns about the fire spreading in dry conditions. In 2015, the Army was unable to execute this burn due to unfavorable weather conditions. As of September 2016, the Army still had not been able to conduct burns at Fort Ord because of wildfires in the area that required the use of the fire resources that would be necessary to control the burns and because of dry weather conditions. Another unique challenge that results in additional time to execute cleanup is when the properties are at remote locations. For example, the Air Force can ship supplies and perform cleanup only during certain months of the year at the former Galena Forward Operating Location, Alaska, because of the severely cold weather that freezes the river used for transportation. This limitation reduces the ability to ship supplies and equipment to the installation for cleanup and to remove waste from the site for disposal. 
The Navy faces similar challenges with the cleanup at the former Naval Air Facility Adak, Alaska. Officials told us about two other challenges that are more common and widespread. First, officials at five of the seven installations we visited told us that a key challenge across installations is coordinating with the large number of regulatory agencies involved in environmental cleanup issues. For example, installation and local redevelopment authority officials from California stated that there are multiple environmental agencies in California, each with its own area of concern, and that the agencies are not always easy to coordinate with. In addition, some states, including California and New Jersey, have stricter standards than the Environmental Protection Agency has on certain contaminants, and the services must clean up the contaminated property to the stricter state level. Further, certain states, including Alaska and California, do not allow disposal or storage of radiological contaminants in the state, and these contaminants must be shipped to another state. For example, Navy BRAC officials stated that they dispose of and transport radiological waste from cleanup operations at Treasure Island Naval Station Hunters Point Annex, California—closed under the 1991 BRAC round—to a Utah landfill designated to receive this kind of waste. According to Navy BRAC officials, the additional environmental cleanup costs for disposing of and transporting radiological waste to the Utah landfill versus keeping the waste in California is over $1 million per year. Second, officials told us that the newly discovered presence of emerging contaminants is a challenge that has delayed the transfer of property and extended the timeline for cleanup in some locations, especially former airfields. 
The most common emerging contaminants on DOD installations are perfluorinated compounds, specifically PFOS and PFOA, found in fire-fighting foam used throughout the nation by the military and by commercial airports. In 2009, the Environmental Protection Agency issued a provisional health advisory establishing levels at 400 parts per trillion for PFOA and 200 parts per trillion for PFOS in drinking water. In May 2016, the Environmental Protection Agency replaced the provisional advisories with a lifetime health advisory for PFOS and PFOA of 70 parts per trillion in drinking water. This level applies to each chemical individually and to their combined concentration. Although Environmental Protection Agency health advisories are nonenforceable, DOD issued an instruction in 2009 stating that DOD will perform sampling, conduct site-specific risk assessments, and take actions for emerging contaminants released from DOD facilities. According to DOD officials, DOD is in the beginning stages of determining the extent of this problem, and has discovered levels of PFOS and PFOA that exceed the Environmental Protection Agency’s health advisory level at numerous installations, including Naval Air Station Joint Reserve Base Willow Grove, Pennsylvania, and Pease Air Force Base, New Hampshire. Installation officials we spoke with stated that they periodically reach out to officials at other installations informally, even across services, for help in learning how to expedite or resolve these challenges as well as others, but there is no formal mechanism, such as a web-based database, within DOD to capture this type of information. Officials stated that a system to capture lessons learned would assist them in addressing challenges at their installations and would be important for any future BRAC rounds, as current personnel may no longer be employed within the department. 
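The 2016 advisory levels described above can be expressed as a simple screening check. The sketch below is illustrative only; the function is ours and does not represent an actual DOD or EPA tool, and real screening involves site-specific risk assessment:

```python
# EPA's May 2016 lifetime health advisory for PFOS and PFOA in drinking
# water: 70 parts per trillion, applied to each compound individually
# and to their combined concentration.
ADVISORY_PPT = 70.0

def exceeds_advisory(pfos_ppt: float, pfoa_ppt: float) -> bool:
    """Return True if either compound, or the two combined, exceeds 70 ppt.

    Illustrative helper (hypothetical name); the advisory itself is
    nonenforceable guidance, not a regulatory limit.
    """
    return (pfos_ppt > ADVISORY_PPT
            or pfoa_ppt > ADVISORY_PPT
            or pfos_ppt + pfoa_ppt > ADVISORY_PPT)

print(exceeds_advisory(30.0, 30.0))  # False: 60 ppt combined is below the level
print(exceeds_advisory(40.0, 40.0))  # True: 80 ppt combined exceeds the level
```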
One example of an innovative approach to dealing with the disposal of radioactive material is at the former McClellan Air Force Base, California. In 2007, we reported that traces of plutonium were found during a routine cleanup in September 2000, causing a cost increase of $21 million and extending the completion schedule beyond 2030. However, since 2007, officials told us that they identified a less costly way to dispose of radioactive material: using a consolidation landfill on the installation. According to installation officials, the Air Force will save over $200 million by disposing of this material in this manner rather than shipping the material to a disposal facility in another state. Figure 4 shows the consolidation landfill at the former McClellan Air Force Base. DOD has created a joint lessons-learned program, and the guidance for this program states that recording, analyzing, and developing improved processes, procedures, and methods based on lessons learned are primary tools in developing improvements in overall performance. Further, the Office of Management and Budget’s guidance on the preparation, submission, and execution of the budget states that agencies should consider lessons learned from past efforts to continuously improve service delivery and resolve management challenges. Although this guidance is not specific to environmental cleanup, it provides good examples of how lessons-learned programs can be structured. Further, Standards for Internal Control in the Federal Government state that communication throughout an entity is an important part of achieving the entity’s objectives and includes the communication of quality information to all levels of the entity. The Office of the Deputy Assistant Secretary of Defense for Environment, Safety, and Occupational Health currently oversees a Cleanup Committee composed of senior environmental officials from each of the services. 
The purpose of this committee is to communicate, discuss, and resolve cleanup issues. According to DOD officials, this group meets regularly to share lessons learned and improve DOD policy. However, installation officials do not attend these meetings and do not have a mechanism to routinely record and share information on lessons learned, successful strategies, helpful contacts, or insight they have gained from previous DOD cleanups. Installation officials we spoke with indicated that a repository or method to record and share this type of information would be beneficial to others who face similar challenges in the future. Without a repository or method to record and share lessons learned, installation personnel charged with implementing cleanup efforts are missing opportunities to share lessons learned about how various locations have successfully addressed cleanup challenges and may therefore be at risk of duplicating errors made in the past by others who faced the same kind of cleanup issues. The cleanup of environmental contaminants on installations closed under BRAC has historically been a key impediment to the transfer and ultimate reuse of the property by the community. Environmental cleanup is costly, and DOD estimates that it will need about $3.4 billion in addition to the approximately $11.5 billion it has already spent to manage and complete environmental cleanup of installations closed under all BRAC rounds. Since 2007, DOD has improved its reporting of these cleanup costs to Congress by including in its annual reports information on how these additional costs have contributed to the overall cost increases DOD has experienced in implementing the BRAC recommendations. Despite these improvements, DOD has not reported to Congress how the cleanup of emerging contaminants, especially certain perfluorinated compounds, at installations closed under BRAC will significantly increase the estimated cleanup costs. 
Without including such information in its annual report, DOD has not provided Congress full visibility over the expected increase in costs and the necessary information to make more-informed funding decisions. DOD has also made progress in transferring property closed under BRAC; however, officials identified several challenges in the cleanup and transfer process, and some of these challenges could be eased by sharing information from others who have successfully developed mitigation strategies or navigated the complex regulatory environment. Without a repository or method to record and share lessons learned, installation personnel charged with implementing cleanup efforts may face unnecessary delays or obstacles that could have been alleviated by the insight gained from prior DOD experience with these issues. To provide Congress with better visibility over the costs for the environmental cleanup of properties from all BRAC rounds to inform future funding decisions, we recommend that the Secretary of Defense direct the Secretaries of the military departments to include in future annual reports to Congress that environmental cleanup costs will increase due to the cleanup of perfluorinated compounds and other emerging contaminants, and to include best estimates of these costs as additional information becomes available. To help the services more effectively share information and address environmental cleanups and transfers, we recommend that the Secretary of Defense direct the Secretaries of the military departments to create a repository or method to record and share lessons learned about how various locations have successfully addressed cleanup challenges. We provided a draft of this report to DOD for review and comment. In its written comments, DOD concurred with our recommendations. DOD also provided technical comments, which we incorporated as appropriate. DOD’s comments are printed in their entirety in appendix II. 
We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense; the Secretaries of the Army, the Navy, and the Air Force; and the Assistant Secretary of Defense for Energy, Installations, and Environment. In addition, the report is available at no charge on our website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4523 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. To determine the extent to which the Department of Defense (DOD) has made progress in capturing and reporting environmental cleanup costs at installations closed under all prior Base Realignment and Closure (BRAC) rounds, we collected cost data from the Office of the Deputy Assistant Secretary of Defense for Environment, Safety, and Occupational Health’s Knowledge-Based Corporate Reporting System as of September 30, 2015. We specifically analyzed information from the database on environmental cleanup costs from fiscal year 2015—both past and future—for all installations closed under the 2005 BRAC round as well as the costs for installations closed under the legacy rounds (installations closed in BRAC rounds 1988, 1991, 1993, and 1995). We interviewed DOD officials with knowledge about these data to gather information to assess the reliability of the data. We determined these data to be sufficiently reliable for the purpose of presenting costs DOD has identified. We reviewed and examined DOD regulations and guidance on requirements for estimating and reporting costs, such as DOD’s Financial Management Regulation and DOD Manual 4715.20, Defense Environmental Restoration Program (DERP) Management. 
In addition, we interviewed officials with the Office of the Deputy Assistant Secretary of Defense for Environment, Safety, and Occupational Health and the services to gain an understanding of how the estimates were derived by the services and reported to Congress. We also examined the Defense Environmental Programs Annual Report to Congress for Fiscal Year 2015—the most recent report—to determine what environmental cleanup costs and information were last reported to Congress. In conducting our analysis of the data, we compared DOD’s efforts to capture and report environmental cleanup costs in its annual report to Congress against criteria, such as costs being transparent and complete, in DOD’s financial-management regulations and DOD’s manual on the management of the Defense Environmental Restoration Program. Furthermore, we interviewed key officials with knowledge of BRAC cost reporting and estimating from DOD and the services to determine the extent to which DOD has made improvements in reporting these data and to identify any omissions in the total costs reported or estimated for the future.

To determine the extent to which DOD has made progress in transferring excess property, we collected and analyzed data from the Office of Economic Adjustment as of September 30, 2015, and interviewed officials with knowledge of these data to assess their reliability. We found these data to be sufficiently reliable for our purposes. To identify any processes or procedures that DOD and the services have implemented to expedite the cleanup and transfer process, as well as challenges that continue to hamper progress, we interviewed officials from the Office of the Deputy Assistant Secretary of Defense for Environment, Safety, and Occupational Health and the three military departments.
In addition, we selected seven sites at which to interview installation officials responsible for the caretaking and cleanup of the installations as well as representatives from the local redevelopment authorities. Our site selection was based on three criteria: (1) a mix of installations that were closed under legacy BRAC rounds and installations closed under BRAC 2005, (2) installations from among those with the highest environmental cleanup costs, and (3) installations that represented all three of the military departments. We visited four legacy BRAC round closures: McClellan Air Force Base, California; Mather Air Force Base, California; Fort Ord, California; and Treasure Island Naval Station Hunters Point Annex, California; and three 2005 BRAC round closures: Fort Monmouth, New Jersey; Naval Air Station Joint Reserve Base Willow Grove, Pennsylvania; and Brooks City-Base, Texas. We also observed sites at the BRAC installations we visited to determine the status of DOD’s cleanup efforts and the transfer of properties. To provide further insight into cleanup and transfer issues, we conducted telephone interviews with representatives from additional local redevelopment authorities, also selected based on environmental cleanup costs and service and geographical representation. In these site visits and telephone interviews, we also discussed the extent to which installation officials shared lessons learned with other installation officials. We reviewed this information in light of DOD’s and the Office of Management and Budget’s guidance on lessons learned as well as our Standards for Internal Control in the Federal Government.
During our review, we interviewed the following offices involved with the management and implementation of the environmental cleanup and transfer of property for BRAC installations:

Assistant Secretary of Defense (Energy, Installations, and Environment)
Office of the Deputy Assistant Secretary of Defense for Environment, Safety, and Occupational Health, Arlington, Virginia
Office of Economic Adjustment, Arlington, Virginia

Army
Office of the Assistant Chief of Staff of Installation Management, BRAC Division, Arlington, Virginia
Former Fort Ord, California
Former Fort Monmouth, New Jersey

Navy
BRAC Program Management Office, Arlington, Virginia
Former Treasure Island Naval Station Hunters Point, California
Former Naval Air Station Joint Reserve Base Willow Grove, Pennsylvania

Air Force
Office of the Assistant Secretary of the Air Force (Installations, Environment & Energy), Air Force BRAC Management Office, Arlington, Virginia
Air Force Civil Engineering Center, Lackland Air Force Base, Texas
Former McClellan Air Force Base, California
Former Mather Air Force Base, California
Former Brooks City-Base, Texas

Local redevelopment authorities
Alameda, California (Former Alameda Naval Air Station, California)
Brooks Development Authority, San Antonio, Texas (Former Brooks City-Base, Texas)
Concord, California (Former Concord Naval Weapons Station, California)
Fort Ord Reuse Authority, Monterey, California (Former Fort Ord, California)
Fort McClellan Authority, Anniston, Alabama (Former Fort McClellan, Alabama)
Fort Monmouth Economic Revitalization Authority, Oceanport, New Jersey (Former Fort Monmouth, New Jersey)
Fort Monroe Authority, Hampton, Virginia (Former Fort Monroe, Virginia)
Horsham Land Redevelopment Authority, Philadelphia, Pennsylvania (Former Naval Air Station Joint Reserve Base Willow Grove, Pennsylvania)
Office of Community Investment and Infrastructure, San Francisco, California (Former Treasure Island Hunters Point, California)
Sacramento County, California (Former McClellan Air Force Base and former Mather Air Force Base)
Seneca County Industrial Development Agency, Waterloo, New York (Former Seneca Army Depot, New York)
Vallejo, California (Former Mare Island Naval Shipyard, California)

We conducted this performance audit from January 2016 to January 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, Laura Durland (Assistant Director), Tracy Barnes, Leslie Bharadwaja, Tracy Burney, Michele Fejfar, Amie Lesser, and Richard Powelson made key contributions to this report.

Military Base Realignments and Closures: More Guidance and Information Needed to Take Advantage of Opportunities to Consolidate Training. GAO-16-45. Washington, D.C.: February 18, 2016.
Military Base Realignments and Closures: Process for Reusing Property for Homeless Assistance Needs Improvements. GAO-15-274. Washington, D.C.: March 16, 2015.
DOD Joint Bases: Implementation Challenges Demonstrate Need to Reevaluate the Program. GAO-14-577. Washington, D.C.: September 19, 2014.
Defense Health Care Reform: Actions Needed to Help Realize Potential Cost Savings from Medical Education and Training. GAO-14-630. Washington, D.C.: July 31, 2014.
Defense Infrastructure: Communities Need Additional Guidance and Information to Improve Their Ability to Adjust to DOD Installation Closure or Growth. GAO-13-436. Washington, D.C.: May 14, 2013.
Military Bases: Opportunities Exist to Improve Future Base Realignment and Closure Rounds. GAO-13-149. Washington, D.C.: March 7, 2013.
DOD Joint Bases: Management Improvements Needed to Achieve Greater Efficiencies. GAO-13-134. Washington, D.C.: November 15, 2012.
Military Base Realignments and Closures: The National Geospatial-Intelligence Agency’s Technology Center Construction Project. GAO-12-770R. Washington, D.C.: June 29, 2012.
Military Base Realignments and Closures: Updated Costs and Savings Estimates from BRAC 2005. GAO-12-709R. Washington, D.C.: June 29, 2012.
Defense Health Care: Applying Key Management Practices Should Help Achieve Efficiencies within the Military Health System. GAO-12-224. Washington, D.C.: April 12, 2012.
Military Base Realignments and Closures: Key Factors Contributing to BRAC 2005 Results. GAO-12-513T. Washington, D.C.: March 8, 2012.
Excess Facilities: DOD Needs More Complete Information and a Strategy to Guide Its Future Disposal Efforts. GAO-11-814. Washington, D.C.: September 19, 2011.
Military Base Realignments and Closures: Review of the Iowa and Milan Army Ammunition Plants. GAO-11-488R. Washington, D.C.: April 1, 2011.
GAO’s 2011 High-Risk Series: An Update. GAO-11-394T. Washington, D.C.: February 17, 2011.
Defense Infrastructure: High-Level Federal Interagency Coordination Is Warranted to Address Transportation Needs beyond the Scope of the Defense Access Roads Program. GAO-11-165. Washington, D.C.: January 26, 2011.
Military Base Realignments and Closures: DOD Is Taking Steps to Mitigate Challenges but Is Not Fully Reporting Some Additional Costs. GAO-10-725R. Washington, D.C.: July 21, 2010.
Defense Infrastructure: Army Needs to Improve Its Facility Planning Systems to Better Support Installations Experiencing Significant Growth. GAO-10-602. Washington, D.C.: June 24, 2010.
Military Base Realignments and Closures: Estimated Costs Have Increased While Savings Estimates Have Decreased Since Fiscal Year 2009. GAO-10-98R. Washington, D.C.: November 13, 2009.
Military Base Realignments and Closures: Transportation Impact of Personnel Increases Will Be Significant, but Long-Term Costs Are Uncertain and Direct Federal Support Is Limited. GAO-09-750. Washington, D.C.: September 9, 2009.
Military Base Realignments and Closures: DOD Needs to Update Savings Estimates and Continue to Address Challenges in Consolidating Supply-Related Functions at Depot Maintenance Locations. GAO-09-703. Washington, D.C.: July 9, 2009.
Defense Infrastructure: DOD Needs to Periodically Review Support Standards and Costs at Joint Bases and Better Inform Congress of Facility Sustainment Funding Uses. GAO-09-336. Washington, D.C.: March 30, 2009.
Military Base Realignments and Closures: DOD Faces Challenges in Implementing Recommendations on Time and Is Not Consistently Updating Savings Estimates. GAO-09-217. Washington, D.C.: January 30, 2009.
Military Base Realignments and Closures: Army Is Developing Plans to Transfer Functions from Fort Monmouth, New Jersey, to Aberdeen Proving Ground, Maryland, but Challenges Remain. GAO-08-1010R. Washington, D.C.: August 13, 2008.
Defense Infrastructure: High-Level Leadership Needed to Help Communities Address Challenges Caused by DOD-Related Growth. GAO-08-665. Washington, D.C.: June 17, 2008.
Defense Infrastructure: DOD Funding for Infrastructure and Road Improvements Surrounding Growth Installations. GAO-08-602R. Washington, D.C.: April 1, 2008.
Military Base Realignments and Closures: Higher Costs and Lower Savings Projected for Implementing Two Key Supply-Related BRAC Recommendations. GAO-08-315. Washington, D.C.: March 5, 2008.
Defense Infrastructure: Realignment of Air Force Special Operations Command Units to Cannon Air Force Base, New Mexico. GAO-08-244R. Washington, D.C.: January 18, 2008.
Military Base Realignments and Closures: Estimated Costs Have Increased and Estimated Savings Have Decreased. GAO-08-341T. Washington, D.C.: December 12, 2007.
Military Base Realignments and Closures: Cost Estimates Have Increased and Are Likely to Continue to Evolve. GAO-08-159. Washington, D.C.: December 11, 2007.
Military Base Realignments and Closures: Impact of Terminating, Relocating, or Outsourcing the Services of the Armed Forces Institute of Pathology. GAO-08-20. Washington, D.C.: November 9, 2007.
Military Base Realignments and Closures: Transfer of Supply, Storage, and Distribution Functions from Military Services to Defense Logistics Agency. GAO-08-121R. Washington, D.C.: October 26, 2007.
Defense Infrastructure: Challenges Increase Risks for Providing Timely Infrastructure Support for Army Installations Expecting Substantial Personnel Growth. GAO-07-1007. Washington, D.C.: September 13, 2007.
Military Base Realignments and Closures: Plan Needed to Monitor Challenges for Completing More Than 100 Armed Forces Reserve Centers. GAO-07-1040. Washington, D.C.: September 13, 2007.
Military Base Realignments and Closures: Observations Related to the 2005 Round. GAO-07-1203R. Washington, D.C.: September 6, 2007.
Military Base Closures: Projected Savings from Fleet Readiness Centers Likely Overstated and Actions Needed to Track Actual Savings and Overcome Certain Challenges. GAO-07-304. Washington, D.C.: June 29, 2007.
Military Base Closures: Management Strategy Needed to Mitigate Challenges and Improve Communication to Help Ensure Timely Implementation of Air National Guard Recommendations. GAO-07-641. Washington, D.C.: May 16, 2007.
Military Base Closures: Opportunities Exist to Improve Environmental Cleanup Cost Reporting and to Expedite Transfer of Unneeded Property. GAO-07-166. Washington, D.C.: January 30, 2007.
Military Bases: Observations on DOD’s 2005 Base Realignment and Closure Selection Process and Recommendations. GAO-05-905. Washington, D.C.: July 18, 2005.
Military Bases: Analysis of DOD’s 2005 Selection Process and Recommendations for Base Closures and Realignments. GAO-05-785. Washington, D.C.: July 1, 2005.
Military Base Closures: Observations on Prior and Current BRAC Rounds. GAO-05-614. Washington, D.C.: May 3, 2005.
Military Base Closures: Assessment of DOD’s 2004 Report on the Need for a Base Realignment and Closure Round. GAO-04-760. Washington, D.C.: May 17, 2004.
The environmental cleanup of bases closed under the BRAC process has historically been an impediment to the expeditious transfer of unneeded property to other federal and nonfederal parties. While DOD is obligated to ensure that former installation property is cleaned up to a level that is protective of human health and the environment, the cleanup process can delay redevelopment in communities affected by the BRAC process. The House Report accompanying the fiscal year 2016 Military Construction, Veterans Affairs, and Related Agencies Appropriations Bill includes a provision for GAO to update its 2007 report on the environmental cleanup and transfer of installations closed under BRAC. This report addresses the extent to which DOD has made progress in (1) capturing and reporting environmental cleanup costs at installations closed under BRAC and (2) transferring excess property and mitigating any challenges. GAO reviewed DOD guidance, cost data, and property transfer data; visited installations selected from among those with the highest cleanup costs, as well as other factors; and interviewed DOD and service officials.

The Department of Defense (DOD) has captured and reported more comprehensive cost information in its environmental cost reporting for installations closed under the Base Realignment and Closure (BRAC) process since GAO last reported on the issue in 2007. For example, GAO reported in 2007 that the costs DOD reported for environmental cleanup of installations closed under the 2005 BRAC round were not complete; since fiscal year 2009, however, DOD’s annual reports to Congress on environmental cleanup have included cleanup costs for all identified munitions and contaminants. As of September 30, 2015, DOD estimated that it will need about $3.4 billion to complete environmental cleanup for installations closed under all BRAC rounds, in addition to the approximately $11.5 billion it has already spent.
Despite this improvement in reporting, DOD has not reported to Congress in its annual report that the cost of cleaning up certain emerging contaminants (i.e., contaminants that have a reasonably possible pathway to enter the environment, present a potential unacceptable human health or environmental risk, and do not have regulatory standards based on peer-reviewed science) will be significant. Without DOD including in its annual report to Congress its best estimate of these increased costs, Congress will not have visibility into the significant costs and efforts associated with the cleanup of emerging contaminants on BRAC installations and therefore will not have the necessary information to make more-informed funding decisions.

DOD has used a variety of methods since GAO’s 2007 report to continue to make progress in transferring unneeded BRAC property. For example, as of September 30, 2015, DOD reported that it had transferred about 85 percent of the unneeded property identified in all BRAC rounds (see figure below). Despite this progress, installation officials stated that they continue to face challenges, such as navigating multiple regulatory agencies or disposing of radiological contamination, that increase the time it takes to clean up and transfer property. Installation officials GAO spoke with stated that they periodically reach out to officials at other installations, and across services, for help in learning how to expedite or resolve challenges, but there is no formal mechanism within DOD to capture and share this type of information. Installation officials further stated that a system to capture lessons learned would assist them in this effort. Without a mechanism to record and share lessons learned, installation personnel charged with implementing cleanup efforts are missing opportunities to share information and could repeat errors made in the past.
GAO recommends that (1) DOD include in future reports to Congress that the cleanup of emerging contaminants will increase cleanup costs, and estimate such costs, and (2) share best practices on mitigating cleanup and property transfer challenges. DOD concurred with GAO's recommendations.
DHS’s efforts to strengthen and integrate its management functions have resulted in progress addressing our criteria for removal from the high-risk list. In particular, in our February 2015 high-risk update report, we found that DHS had met two criteria and partially met the remaining three criteria. DHS subsequently met an additional criterion—establishing a framework for monitoring progress—and therefore, as of March 2016, has met three criteria and partially met the remaining two criteria, as shown in table 1.

Leadership commitment (met). We found in our 2015 report, and have observed over the last year, that the Secretary and Deputy Secretary of Homeland Security, the Under Secretary for Management at DHS, and other senior officials have continued to demonstrate exemplary commitment and top leadership support for addressing the department’s management challenges. Additionally, they have taken actions to institutionalize this commitment to help ensure the long-term success of the department’s efforts. For example, in April 2014, the Secretary of Homeland Security issued a memorandum entitled Strengthening Departmental Unity of Effort, committing to, among other things, improving DHS’s planning, programming, budgeting, and execution processes through strengthened departmental structures and increased capability. Senior DHS officials, including the Deputy Secretary and Under Secretary for Management, have also routinely met with us over the past 7 years to discuss the department’s plans and progress in addressing this high-risk area, most recently in February 2016. Throughout this time, we provided specific feedback on the department’s efforts. We concluded in our 2015 report and continue to believe that it will be important for DHS to maintain its current level of top leadership support and commitment to ensure continued progress in successfully executing its corrective actions through completion.

Action plan (met).
We found that DHS has established a plan for addressing this high-risk area. Specifically, in a September 2010 letter to DHS, we identified, and DHS agreed to achieve, 31 actions and outcomes that are critical to addressing the challenges within the department’s management areas and to integrating those functions across the department. In March 2014, we updated the actions and outcomes in collaboration with DHS to reduce overlap and ensure their continued relevance and appropriateness. These updates resulted in a reduction from 31 to 30 total actions and outcomes. Toward achieving the actions and outcomes, DHS issued its initial Integrated Strategy for High Risk Management in January 2011 and has since provided updates to its strategy in nine later versions, most recently in January 2016. The integrated strategy includes key management initiatives and related corrective action plans for addressing DHS’s management challenges and the actions and outcomes we identified. For example, the January 2016 strategy update includes an initiative focused on financial systems modernization and an initiative focused on IT human capital management. These initiatives support various actions and outcomes, such as modernizing the U.S. Coast Guard’s financial management system and implementing an IT human capital strategic plan, respectively. We concluded in our 2015 report and continue to believe that DHS’s strategy and approach to continuously refining actionable steps toward implementing the outcomes, if implemented effectively and sustained, should provide a path for DHS to be removed from our high-risk list.

Capacity (partially met). In October 2014, DHS identified that it had the resources needed to implement 7 of the 11 initiatives the department had under way to achieve the actions and outcomes, but did not identify sufficient resources for the 4 remaining initiatives.
In addition, our prior work has identified specific capacity gaps that could undermine achievement of management outcomes. For example, in April 2014, we reported that DHS needed to increase its cost-estimating capacity and that the department had not approved baselines for 21 of 46 major acquisition programs. These baselines—which establish cost, schedule, and capability parameters—are necessary to accurately assess program performance. Thus, in our 2015 report, we concluded that DHS needs to continue to identify resources for the remaining initiatives; work to mitigate shortfalls and prioritize initiatives, as needed; and communicate critical resource gaps to senior leadership. In its January 2016 strategy update, DHS stated that the department had addressed capacity shortcomings it previously identified and self-assessed the capacity criterion as fully met. We are analyzing DHS’s capacity to resolve risks as part of our ongoing assessment of the department’s progress in addressing the Strengthening DHS Management Functions high-risk area and will report on our findings in our 2017 high-risk update.

Monitoring (met). In our 2015 report we found that DHS established a framework for monitoring its progress in implementing the integrated strategy it identified for addressing the 30 actions and outcomes. In the June 2012 update to the Integrated Strategy for High Risk Management, DHS included, for the first time, performance measures to track its progress in implementing all of its key management initiatives. DHS continued to include performance measures in its October 2014 update. However, in our 2015 report we also found that DHS could strengthen its financial management monitoring efforts and thus concluded that the department had partially met the criterion for establishing a framework to monitor progress.
In particular, according to DHS officials, as of November 2014, they were establishing a monitoring program that would include assessing whether financial management systems modernization projects for key components, which DHS plans to complete in 2019, are following industry best practices and meeting users’ needs. However, DHS had not yet entered into a contract for independent verification and validation services in connection with its efforts to establish this program. Effective implementation of these modernization projects is important because, until they are complete, the department’s current systems will not effectively support financial management operations. DHS subsequently entered into a contract for independent verification and validation services that should help ensure the financial management systems modernization projects meet key requirements. As a result, we currently assess DHS as having met the criterion for a framework to monitor progress. As we concluded in our 2015 report and continue to believe, moving forward, DHS will need to closely track and independently validate the effectiveness and sustainability of its corrective actions and make midcourse adjustments, as needed.

Demonstrated progress (partially met). We found in our 2015 report that DHS had made important progress in strengthening its management functions but needed to demonstrate sustainable, measurable progress in addressing key challenges that remain within and across these functions. In particular, we found that DHS had implemented a number of actions demonstrating the department’s progress in strengthening its management functions. For example, DHS had strengthened its enterprise architecture program (or blueprint) to guide and constrain IT acquisitions and obtained a clean opinion on its financial statements for 2 consecutive years, fiscal years 2013 and 2014.
DHS has continued to demonstrate important progress over the last year by, for example, obtaining a clean opinion on its financial statements for a third consecutive year. However, we also found in our 2015 report that DHS continued to face significant management challenges that hindered the department’s ability to accomplish its missions. For example, DHS did not have the acquisition management tools in place to consistently demonstrate whether its major acquisition programs were on track to achieve their cost, schedule, and capability goals. In addition, DHS did not have modernized financial management systems, which affected its ability to have ready access to reliable information for informed decision making. Over the last year DHS has continued to take actions to address these challenges but they persist, as discussed in more detail below. As we concluded in our 2015 report, addressing these and other management challenges will be a significant undertaking that will likely require several years, but will be critical for the department to mitigate the risks that management weaknesses pose to mission accomplishment. Key to addressing the department’s management challenges is DHS demonstrating the ability to achieve sustained progress across the 30 actions and outcomes we identified and DHS agreed were needed to address the high-risk area. Achieving sustained progress across the actions and outcomes, in turn, requires leadership commitment, effective corrective action planning, adequate capacity (that is, the people and other resources), and monitoring the effectiveness and sustainability of supporting initiatives. The 30 actions and outcomes address each management function (acquisition, IT, financial, and human capital) as well as management integration. 
For example, one of the outcomes focusing on acquisition management is validating required acquisition documents in accordance with a department-approved, knowledge-based acquisition process, and the outcomes focusing on financial management include sustaining clean audit opinions for at least 2 consecutive years on department-wide financial statements and internal controls. We found in our 2015 report that DHS had made important progress across all of its management functions and significant progress in the area of management integration. In particular, DHS had made important progress in several areas to fully address 9 actions and outcomes, 5 of which it had sustained as fully implemented for at least 2 years. For instance, DHS fully met 1 outcome for the first time by obtaining a clean opinion on its financial statements for 2 consecutive years and sustained full implementation of another outcome by continuing to use performance measures to assess progress made in achieving department-wide management integration. Since our 2015 update, DHS has fully addressed another outcome as a result of actions it has taken to improve the maturity of its system acquisition management practices, leaving additional work to fully address the remaining 20 outcomes. As of March 2016, DHS has also mostly addressed an additional 7 actions and outcomes—3 since our 2015 report—meaning that a small amount of work remains to fully address them. We also found in our 2015 report, and continue to observe, that considerable work remains in several areas for DHS to fully achieve the remaining actions and outcomes and thereby strengthen its management functions. Specifically, as of March 2016, DHS has partially addressed 9 and initiated 4 of the actions and outcomes. As previously mentioned, some of these actions and outcomes, such as modernizing the department’s financial management systems, are significant undertakings that will likely require multiyear efforts.
Table 2 summarizes DHS’s progress in addressing the 30 actions and outcomes as of March 2016, and is followed by selected examples.

Acquisition management. DHS has fully addressed 1 of the 5 acquisition management outcomes, mostly addressed 2 outcomes, partially addressed 1 outcome, and initiated actions to address the remaining outcome. For example, DHS has taken a number of actions to fully address establishing effective component-level acquisition capability. These actions include initiating (1) monthly Component Acquisition Executive staff forums to provide guidance and share best practices and (2) assessments of component policies and processes for managing acquisitions. DHS has also continued to make progress addressing the outcome on validating required acquisition documents in a timely manner. Specifically, the Under Secretary for Management provided direction to the components on submitting the outstanding documentation. We will continue to assess DHS’s progress on this outcome. In addition, DHS has conducted staffing analysis reports that quantify gaps in acquisition personnel by position. Moving forward, DHS must develop and implement a plan for filling key vacant positions identified through its staffing analysis reports. Further, DHS should continue to improve its acquisition program management by effectively using the Joint Requirements Council, which it reinstated in June 2014, to identify and eliminate any unintended redundancies in program requirements, and by demonstrating that major acquisition programs are on track to achieve their cost, schedule, and capability goals.

IT management. DHS has fully addressed 3 of the 6 IT management outcomes, mostly addressed another 2, and partially addressed the remaining 1.
For example, DHS has finalized a directive establishing its tiered governance and portfolio management structure for overseeing and managing its IT investments, and it annually reviews each of its portfolios and the associated investments to determine the most efficient allocation of resources within each portfolio. The department has also made progress in implementing its IT Strategic Human Capital Plan for fiscal years 2010 through 2012. However, in January 2015, DHS shifted its IT paradigm from acquiring assets to acquiring services and acting as a service broker (i.e., an intermediary between the purchaser of a service and the seller of that service). According to DHS officials in May 2015, this paradigm change will require a major transition in the skill sets of DHS’s IT workforce, as well as the hiring, training, and managing of staff with those new skill sets; as such, this effort will need to be closely managed in order to succeed. Additionally, we found that DHS continues to take steps to enhance its information security program. However, while the department made progress in remediating findings from the previous year, in November 2015 the department’s financial statement auditor designated deficiencies in IT systems controls as a material weakness for financial reporting purposes for the 12th consecutive year. This designation was due to flaws in security controls, such as those for access controls, configuration management, segregation of duties, and contingency planning. Thus, DHS needs to remediate this longstanding material weakness. Information security remains a major management challenge for the department.

Financial management. DHS has fully addressed 2 financial management outcomes, partially addressed 3, and initiated 3. Most notably, DHS received a clean audit opinion on its financial statements for 3 consecutive years, fiscal years 2013, 2014, and 2015, fully addressing 2 outcomes.
In addition, in November 2015, DHS’s financial statement auditors reported that one of four material weaknesses in its internal controls over financial reporting had been reduced to a significant deficiency in connection with its remediation efforts. However, the remaining material weaknesses reported by DHS auditors continue to hamper DHS’s ability to establish effective internal control over financial reporting and comply with financial management system requirements. DHS continues to make progress on three multi-year projects to modernize selected components’ financial management systems. According to its January 2016 strategy update, DHS has made the most progress on its U.S. Coast Guard modernization project, whereas additional planning and discovery efforts need to be completed on its projects to modernize Federal Emergency Management Agency and U.S. Immigration and Customs Enforcement financial management systems before DHS will be in a position to implement modernized solutions for these components and their customers. Without sound controls and systems, DHS faces long-term challenges in sustaining a clean audit opinion on its financial statements, obtaining and sustaining a clean opinion on its internal control over financial reporting, and ensuring its financial management systems generate reliable, useful, and timely information for day-to-day decision making as a routine business operation.

Human capital management. DHS has fully addressed 1 human capital management outcome, mostly addressed 3, and partially addressed the remaining 3. For example, the Secretary of Homeland Security signed a human capital strategic plan in 2011 that DHS has since made sustained progress in implementing, fully addressing this outcome. DHS also has actions under way to identify current and future human capital needs, an outcome it mostly addressed since our 2015 report due to progress implementing its Workforce Planning Model.
However, DHS has considerable work ahead to improve employee morale. For example, the Office of Personnel Management’s 2015 Federal Employee Viewpoint Survey data showed that DHS ranked last among 37 large federal agencies in all four dimensions of the survey’s index for human capital accountability and assessment. Further, the survey data showed that DHS’s scores continued to decrease in three index dimensions (job satisfaction, leadership and knowledge management, and results-oriented performance culture) and remained constant in the fourth dimension (talent management). DHS has developed plans for addressing its employee satisfaction problems, but as we previously recommended, DHS needs to continue to improve its root-cause analysis efforts related to these plans. DHS has also developed and implemented mechanisms to assess training programs but could take additional actions. For example, in September 2014, we found that DHS had implemented component-specific and department-wide training programs and that the five DHS components in our review all had documented processes to evaluate their training programs. However, to fully achieve this outcome, DHS also needs to, among other things, issue department-wide policies on training and development and strengthen its learning management capabilities. In February 2016, we reported that DHS had initiated the Human Resources Information Technology (HRIT) investment in 2003 to address issues presented by its human resource environment, which, with respect to learning management, included limitations resulting from nine disparate learning management systems that did not exchange information. DHS established the Performance and Learning Management System (PALMS) program to provide a system that will consolidate DHS’s nine existing learning management systems into one system and enable comprehensive training reporting and analysis across the department, among other things.
However, in our February 2016 report we found that selected PALMS capabilities had been deployed to headquarters and two components, but full implementation at four components was not planned, leaving uncertainty about whether the PALMS system would be used enterprise-wide to accomplish these goals. We further found that DHS did not fully implement effective acquisition management practices and therefore was limited in monitoring and overseeing the implementation of PALMS and ensuring that the department obtains a system that improves its learning management weaknesses, reduces duplication, and delivers within cost and schedule commitments. We made 14 recommendations to DHS to, among other things, address HRIT’s poor progress and ineffective management. DHS concurred with all 14 recommendations and provided estimated completion dates for implementing each of them.

Management integration. DHS has sustained its progress in fully addressing 3 of the 4 outcomes that we identified and DHS agreed are key to the department’s management integration efforts. For example, in January 2011, DHS issued an initial action plan to guide its management integration efforts—the Integrated Strategy for High Risk Management. Since then, DHS has generally made improvements to the strategy with each update based on feedback we provided. DHS has also shown important progress in addressing the last and most significant management integration outcome—to implement actions and outcomes in each management area to develop consistent or consolidated processes and systems within and across its management functional areas—but, as we concluded in our 2015 update, considerable work remains. For example, the Secretary’s April 2014 Strengthening Departmental Unity of Effort memorandum highlighted a number of initiatives designed to allow the department to operate in a more integrated fashion.
Further, in its January 2016 update, DHS reported that in August 2015 the Under Secretary for Management identified four integrated priority areas to bring focus to strengthening integration among the department’s management functions. According to DHS’s January 2016 update, these priorities—which include, for example, strengthening resource allocation and reporting ability and developing and deploying secure technology solutions—each have stretch goals and milestones that are monitored by integrated teams led by senior DHS officials. However, given that these management integration initiatives are in the early stages of implementation, it is too early to assess their impact. We concluded in our 2015 report and continue to believe that in the coming years, DHS needs to continue implementing its Integrated Strategy for High Risk Management and show measurable, sustainable progress in implementing its key management initiatives and corrective actions and achieving outcomes. In doing so, it will be important for DHS to:
- maintain its current level of top leadership support and sustained commitment to ensure continued progress in executing its corrective actions through completion;
- continue to implement its plan for addressing this high-risk area and periodically report its progress to us and Congress;
- identify and work to mitigate any resource gaps and prioritize initiatives as needed to ensure it can implement and sustain its corrective actions;
- closely track and independently validate the effectiveness and sustainability of its corrective actions and make midcourse adjustments as needed; and
- make continued progress in achieving the 20 actions and outcomes it has not fully addressed and demonstrate that systems, personnel, and policies are in place to ensure that results can be sustained over time.

We will continue to monitor DHS’s efforts in this high-risk area to determine if the actions and outcomes are achieved and sustained over the long term.
Each year, DHS invests billions of dollars in its major acquisition programs to help execute its many critical missions. In fiscal year 2015 alone, DHS reported that it planned to spend approximately $7.2 billion on these acquisition programs, and the department expects it will ultimately invest more than $180 billion in them. DHS and its underlying components are acquiring systems to help secure the border, increase marine safety, screen travelers, enhance cyber security, improve disaster response, and execute a wide variety of other operations. Each of DHS’s major acquisition programs generally costs $300 million or more and spans multiple years. To help manage these programs, DHS has established an acquisition management policy that we found in September 2012 to be generally sound, in that it reflects key program management practices. However, we also have found that DHS has lacked discipline in managing its programs according to this policy, and that many acquisition programs have faced a variety of challenges. These challenges can contribute to poor acquisition outcomes, such as cost increases or the risk of end users—such as border patrol agents or first responders in a disaster—receiving technologies that do not work as expected. Reflecting the shortcomings we have identified since 2005, we have made 64 recommendations to DHS on acquisition management issues, 42 of which have been implemented. Our findings also formed the basis of the five acquisition management outcomes identified in the high-risk list, which DHS continues to address. As detailed below, DHS has fully addressed one outcome, mostly addressed two outcomes, partially addressed one outcome, and initiated efforts to address the final outcome.
Outcome 1: Validate required acquisition documents in a timely manner (mostly addressed);
Outcome 2: Establish effective component-level acquisition capability (fully addressed);
Outcome 3: Establish and effectively operate the Joint Requirements Council (partially addressed);
Outcome 4: Assess and address whether appropriate numbers of trained acquisition personnel are in place at the department and component levels (mostly addressed); and
Outcome 5: Demonstrate that major acquisition programs are on track to achieve their cost, schedule, and capability goals (initiated).

In March 2005, we reported that a lack of clear accountability was hampering DHS’s efforts to integrate the acquisition functions of its numerous organizations into an effective whole. We found that DHS remained a collection of disparate organizations, many of which were performing functions with insufficient oversight, giving rise to an environment rife with challenges. For example, the Chief Procurement Officer had been delegated the responsibility to manage, administer, and oversee all acquisition activity across DHS, but in practice enforcement of these activities was spread throughout the department with unclear accountability. We concluded that unless DHS’s top leaders addressed these challenges, the department was at risk of continuing to exist with a fragmented acquisition organization that provided stopgap, ad hoc solutions. In November 2008, we found that billions invested in major acquisition programs continued to lack appropriate oversight. While the acquisition review process DHS had in place at the time of our review called for executive decision making at key points in an acquisition program’s life cycle, the process had not provided the oversight needed to identify and address cost, schedule, and performance problems in its major investments.
Poor implementation of the process was evidenced by the number of programs that did not adhere to the department’s acquisition review policy—of DHS’s 48 major programs requiring milestone and annual reviews, 45 were not assessed in accordance with this policy. Furthermore, at least 14 of these programs had reported cost growth, schedule slips, or performance shortfalls. We found that poor implementation was largely the result of DHS’s failure to ensure that the department’s major acquisition decision-making bodies effectively carried out their oversight responsibilities and had the resources to do so. For example, although a Joint Requirements Council had been responsible for managing acquisition investment portfolios and validating requirements, the council had not met since 2006. We also found in November 2008 that acquisition program budget decisions had been made in the absence of required oversight reviews and, as a result, DHS could not ensure that annual funding decisions for its major investments made the best use of resources and addressed mission needs. We found almost a third of DHS’s major investments received funding without having validated mission needs and requirements—which confirm a need is justified—and two-thirds did not have required life-cycle cost estimates. Without validated requirements, life-cycle cost estimates, and regular portfolio reviews, DHS cannot ensure that its investment decisions are appropriate and will ultimately address capability gaps. In July 2008, 15 of the 57 DHS major investments we reviewed were designated by the Office of Management and Budget as poorly planned and by DHS as poorly performing. Among other things, we recommended that DHS reinstate the Joint Requirements Council or establish another departmental joint requirements oversight board to review and approve acquisition requirements and assess potential duplication of effort. 
In November 2008, DHS issued the initial version of Acquisition Management Directive 102-01—the policy in effect today—in an effort to establish an acquisition management system that effectively provides required capability to operators in support of the department’s missions. DHS Acquisition Management Directive 102-01 and DHS Instruction Manual 102-01-001, which includes 12 appendixes, establish the department’s policies and processes for managing these major acquisition programs. In September 2012 we found that these documents reflect many key program management practices that could help mitigate program risks. Table 3 presents our 2012 assessment of the policy. However, in September 2012 we also found that DHS had not consistently met the policy’s requirements. At the time of our 2012 review, the department had only verified that 4 of 66 programs documented all of the critical knowledge the policy requires to proceed with acquisition activities. See figure 1. In 2012, most major programs lacked reliable cost estimates, realistic schedules, and agreed-upon baseline objectives, limiting DHS leadership’s ability to effectively manage those programs and provide information to Congress. As a result, we were only able to gain insight into the magnitude of the cost growth for 16 of 77 programs we reviewed, but we found that their aggregate cost estimates had increased by 166 percent. Additionally, we found that nearly all of the DHS program managers we surveyed in 2012 reported their programs had experienced significant challenges. Sixty-eight of the 71 respondents reported they experienced funding instability, faced workforce shortfalls, or their planned capabilities changed after initiation, and most survey respondents reported a combination of these challenges. 
DHS has concurred with and presented plans for addressing all of the acquisition management recommendations that we have addressed to the Secretary since September 2012, the implementation of which will enhance acquisition management. For example, in April 2014, we found that the department’s Acquisition Review Board rarely directed programs to make tradeoffs for affordability. We recommended that this board assess program-specific tradeoffs at all of its meetings, particularly in light of the fact that the department had identified a 30-percent gap between the 5-year funding needed for its major acquisitions and the actual resources expected to be available. DHS has since implemented this recommendation. Furthermore, in reviews completed in March, April, and October of 2015, we found that DHS had taken steps to improve oversight of its major acquisition programs and that it had begun to implement its 2008 acquisition policy more consistently. Many of DHS’s recent actions correspond to the high risk outcomes, as shown in table 4. DHS faced a unique challenge when it was formed—merging 22 separate entities into a single department—which included integrating the acquisition functions of its numerous organizations into an effective whole. The department is to be commended for the efforts its leadership has taken to address the acquisition management shortcomings we have identified over the past 10 years. However, in our April 2015 review, we continued to find enduring challenges to acquisition programs, such as:
- Staffing shortfalls—DHS reported that 21 of the 22 programs we reviewed faced program office workforce shortfalls pertaining to such positions as program managers, systems engineers, and logisticians.
- Program funding gaps—we found that 11 of the 22 programs we reviewed faced funding gaps (projected funding vs. estimated costs) of 10 percent or greater, including five programs that faced gaps of 30 percent or greater.
- Inconsistencies in how components were implementing the department’s acquisition management policy—for example, we found that the funding plans for Coast Guard programs continued to be incomplete, in that they did not account for all of the operations and maintenance funding the Coast Guard planned to allocate to its major acquisition programs. These gaps in information reduce the value of the funding plans presented to Congress and obscure the affordability of Coast Guard acquisition programs.

Further, as we noted in our most recent high risk update, the department has additional work to do to fully address the remaining acquisition management outcomes. To address these issues, we have, among other things, recommended that DHS assess program-specific affordability tradeoffs at all acquisition review meetings and ensure that funding plans submitted to Congress are comprehensive and clearly account for all operations and maintenance funding that DHS plans to allocate to its programs. DHS concurred with these recommendations and is taking actions to implement them. For example, DHS established that department leadership would, during all acquisition program reviews, specifically address affordability issues and make explicit tradeoffs as necessary, and DHS officials told us that DHS leadership is discussing affordability at all program reviews to help ensure adequate funding exists. Additionally, DHS has continued to obtain department-level approval for program baselines as needed, and has initiated efforts to improve the quality of acquisition data reported by components. To ensure that recent efforts are sustained, the department must continue to implement its sound acquisition policy consistently and effectively across all components. We continue to track DHS’s progress through ongoing reviews of the Joint Requirements Council, non-major program acquisitions, and our annual assessment of major acquisition programs, in addition to our high risk updates.
Chairman Johnson, Ranking Member Carper, and members of the committee, this completes our prepared statement. We would be happy to respond to any questions you may have at this time.

For further information about this testimony, please contact Rebecca Gambler at (202) 512-8777 or [email protected] or Michele Mackin at (202) 512-4841 or [email protected]. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Larry Crosland, Jennifer Kamara, James Kernen, Emily Kuhn, Elizabeth Luke, Valerie Freeman, David Lysy, Taylor Matheson, Shannin O’Neill, Lindsay Taylor, Nathan Tranquilli, Katherine Trimble, and Shaunyce Wallace. Key contributors for the previous work that this testimony is based on are listed in each product.

Homeland Security: Weak Oversight of Human Resources Information Technology Investment Needs Considerable Improvement. GAO-16-407T. Washington, D.C.: February 25, 2016.
Homeland Security: Oversight of Neglected Human Resources Information Technology Investment Is Needed. GAO-16-253. Washington, D.C.: February 11, 2016.
Immigration Benefits System: Better Informed Decision Making Needed on Transformation Program. GAO-15-415. Washington, D.C.: May 18, 2015.
Homeland Security Acquisitions: Major Program Assessments Reveal Actions Needed to Improve Accountability. GAO-15-171SP. Washington, D.C.: April 22, 2015.
Homeland Security Acquisitions: DHS Should Better Define Oversight Roles and Improve Program Reporting to Congress. GAO-15-292. Washington, D.C.: March 12, 2015.
Department of Homeland Security: Progress Made, but More Work Remains in Strengthening Management Functions. GAO-15-388T. Washington, D.C.: February 26, 2015.
DHS Training: Improved Documentation, Resource Tracking, and Performance Measurement Could Strengthen Efforts. GAO-14-688. Washington, D.C.: September 10, 2014.
Department of Homeland Security: Progress Made; Significant Work Remains in Addressing High-Risk Areas. GAO-14-532T. Washington, D.C.: May 7, 2014.
Homeland Security Acquisitions: DHS Could Better Manage Its Portfolio to Address Funding Gaps and Improve Communications with Congress. GAO-14-332. Washington, D.C.: April 17, 2014.
Department of Homeland Security: DHS’s Efforts to Improve Employee Morale and Fill Senior Leadership Vacancies. GAO-14-228T. Washington, D.C.: December 12, 2013.
DHS Financial Management: Continued Effort Needed to Address Internal Control and System Challenges. GAO-14-106T. Washington, D.C.: November 15, 2013.
Information Technology: Additional OMB and Agency Actions Are Needed to Achieve Portfolio Savings. GAO-14-65. Washington, D.C.: November 6, 2013.
DHS Financial Management: Additional Efforts Needed to Resolve Deficiencies in Internal Controls and Financial Management Systems. GAO-13-561. Washington, D.C.: September 30, 2013.
Information Technology: Additional Executive Review Sessions Needed to Address Troubled Projects. GAO-13-524. Washington, D.C.: June 13, 2013.
Department of Homeland Security: Taking Further Action to Better Determine Causes of Morale Problems Would Assist in Targeting Action Plans. GAO-12-940. Washington, D.C.: September 28, 2012.
Homeland Security: DHS Requires More Disciplined Investment Management to Help Meet Mission Needs. GAO-12-833. Washington, D.C.: September 18, 2012.
Department of Homeland Security: Billions Invested in Major Programs Lack Appropriate Oversight. GAO-09-29. Washington, D.C.: November 18, 2008.
Homeland Security: Successes and Challenges in DHS’s Efforts to Create an Effective Acquisition Organization. GAO-05-179. Washington, D.C.: March 29, 2005.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In 2003, GAO designated implementing and transforming DHS as high risk because the failure to address risks associated with transforming 22 agencies into one department could have serious consequences for U.S. national and economic security. While challenges remain, DHS has made considerable progress. As a result, in 2013 GAO narrowed the scope of the high-risk area to focus on strengthening and integrating DHS management functions (human capital, acquisition, financial, and information technology). This statement discusses DHS's progress and actions remaining in addressing these functions with a focus on acquisition management. In fiscal year 2015 alone, DHS reported that it planned to spend approximately $7.2 billion on its major acquisition programs to help execute its many critical missions. This statement is based on GAO's 2015 high-risk update, GAO products from 2005 through 2016, and selected updates from ongoing work. To conduct past and ongoing work we reviewed key documents such as DHS strategies and interviewed agency officials. The Department of Homeland Security's (DHS) efforts to strengthen and integrate its management functions have resulted in it meeting three and partially meeting two of GAO's criteria for removal from the high-risk list (see table). For example, DHS has established a plan for addressing the high-risk area and a framework for monitoring its progress in implementing the plan. However, DHS needs to show additional results in other areas, including demonstrating the ability to achieve sustained progress across 30 outcomes that GAO identified and DHS agreed were needed to address the high-risk area. As of March 2016, DHS had fully addressed 10 of these outcomes but work remained in 20. GAO has reported on DHS's acquisition management for over 10 years. The department has struggled to effectively manage its major programs, including ensuring that all major acquisitions had approved baselines and that they were affordable. 
GAO has noted significant progress in recent reviews (see table). This progress is largely attributable to sustained senior leadership attention. Source: GAO analysis of DHS documents, interviews, and prior GAO reports. | GAO-16-507T To ensure that recent efforts are sustained, the department must continue to implement its sound acquisition policy consistently and effectively across all components. GAO has made numerous recommendations in this regard, which DHS has concurred with and is taking actions to implement. This testimony contains no new recommendations. GAO has made about 2,400 recommendations to DHS since 2003 to strengthen management efforts, among other things. DHS has implemented more than 70 percent of these recommendations, including those related to acquisition management, and has actions under way to address others.
CMB was established by the Departments of Commerce, Justice, and State, the Judiciary, and Related Agencies Appropriations Act, 1998. CMB’s function is to observe and monitor all aspects of the Bureau of the Census’ preparation and implementation of the 2000 Census. CMB is also to promote the 2000 Census by encouraging the public to provide full and timely responses to census questionnaires. The legislation establishing CMB did not define its status or, with a few exceptions, specify laws that would govern its operations. In our opinion, supported by substantial precedent on similarly established boards and commissions, CMB is an agency in the legislative branch and is subject to laws generally applicable to the legislative branch except to the extent that provisions of law provide otherwise. See appendix III for further discussion. CMB held its first meeting on June 3, 1998, and thereafter began disbursing funds by hiring staff, locating and renovating office space, obtaining equipment and furniture, and creating a new organization from the ground up. The board consists of two members appointed by the Speaker of the House and two members appointed by the Senate Majority Leader (the congressional CMB) and four members appointed by the President (the presidential CMB), with each side having a co-chairman. Board members are not entitled to pay for serving on the board but are entitled to reimbursement for travel expenses, including per diem when on official board business. The congressional and presidential CMB hired an executive director and staff and contracted for legal, consultant, and other services. CMB met with Bureau of the Census officials in Suitland, Maryland, and visited many of the local census offices, which are in every congressional district in the United States. In addition to reviewing bureau plans to conduct the census and efforts to improve enumeration techniques, CMB held joint hearings in Washington and across the country.
CMB obtained testimony from Bureau of the Census officials, state and local officials, and community leaders to record their observations regarding Census 2000 efforts in their areas. In January 1999, the Supreme Court held that existing law did not authorize the use of statistical sampling for apportionment purposes. Subsequently, CMB focused on an outreach program with state and local governments and community groups to identify concerns and obstacles in obtaining an accurate and complete census. A particular concern is the widely acknowledged historical undercounting of rural populations and inner city minorities. CMB findings are presented in periodic reports to the Congress, of which six have been issued, the most recent on April 1, 2000. CMB received appropriations of $4 million for fiscal year 1998, $4 million for fiscal year 1999, and $3.5 million for fiscal year 2000. Each appropriation is available until expended. The CMB board allocated the congressional and presidential CMB $1.5 million each annually, with the balance allocated to a joint account to pay common expenses. As of March 31, 2000, the board employed 17 congressional and 14 presidential CMB staff. In addition, CMB legislation provides that it is considered a committee of the Congress for purposes of having CMB costs relating to printing and binding paid from the Congressional Printing and Binding appropriation. CMB entered into an interagency agreement with the Government Printing Office (GPO) for GPO to pay CMB’s bills and provide payroll, procurement, and travel services. GPO provides a monthly report of CMB disbursements for the congressional, presidential, and joint accounts. GPO charges $1,000 annually to each side for payroll processing and a fee of 6 percent of noninventory purchases.
However, this agreement did not relieve the congressional and presidential CMB of the responsibility to maintain an adequate system of internal controls, approve disbursements in accordance with established policies and procedures, and maintain documentation to support disbursements. CMB legislation did not require (1) annual financial statement or internal control audits or (2) inspector general oversight to conduct audits and investigations as needed or requested. The General Services Administration (GSA), in coordination with the Department of Commerce, was responsible for providing office space for CMB at the Bureau of the Census complex in Suitland, Maryland. CMB’s enabling legislation provides that it shall cease to exist on September 30, 2001.

The following are our findings for the seven specific matters, which are presented as questions with responses.

1. Were presidential CMB funds used to print reports for the 1998 World Exposition?

Our review of presidential CMB disbursements over $200 did not identify any documented evidence that presidential CMB funds were used to pay for printing reports for the 1998 World Exposition in Lisbon, Portugal. The former presidential CMB co-chairman and the presidential CMB executive director had been former U.S. pavilion officials at the 1998 World Exposition immediately before coming to CMB. The presidential CMB executive director provided invoices from a commercial business that printed the 1998 World Exposition reports at a total cost of $4,538. Initially, the presidential CMB executive director paid $1,369 of the invoices by personal check. He was later reimbursed for this amount by personal check of the former presidential CMB co-chairman, who paid the remaining $3,169 by personal check directly to the printer.

2. Did congressional CMB videotapes have a narrow political distribution and did internal controls exist over the use of copyrighted material?
We found no documented evidence that congressional CMB videotapes produced had a narrow political distribution. We identified nine video productions with over 4,300 copies that were widely distributed to nonprofit organizations, community groups, and units of state and local government. This wide distribution included about 1,200 copies of a video on census undercounts entitled “On Every Street” featuring the congressional CMB co-chairman. While some copies of this tape were labeled “For Republican Mayors,” the distribution was far wider and included community groups and others noted above. We found that no CMB controls existed over the use of copyrighted material because CMB had not contemplated such a use. However, the congressional CMB did obtain a written release from a recording company for the use of background music for a videotape production. This release, however, was obtained after the fact.

3. Were congressional and presidential CMB travel funds used in connection with political events?

We found no documented evidence that CMB funds were used for political travel. Through March 31, 2000, we found the following:
- The congressional CMB co-chairman, who lives in Ohio, took 36 trips, including 15 to CMB offices in Suitland, Maryland, for CMB business. The remaining 21 trips to various locations throughout the country were to attend board meetings, conduct field hearings, attend conferences, meet with local government and community leaders, and participate in other CMB events.
- The former presidential CMB co-chairman took two trips, one to South Carolina and another to California, to conduct CMB field hearings. We noted that he did not claim reimbursement for all presidential CMB travel taken, although entitled to reimbursement.
- CMB staff took 2 trips to New Hampshire and 10 trips to Arizona. All trips had a stated CMB purpose to meet with local officials or to attend conferences. There was no travel to Iowa.
During the 2 months before the early presidential primaries in these three states, a congressional CMB employee traveled to Arizona to meet with area complete-count committees several weeks before the Arizona Democratic primary.

4. Were four presidential CMB contracts for studies on census undercounting properly procured?

We found no evidence to indicate that four presidential CMB contracts to conduct studies on census undercounting were improperly procured. The presidential CMB followed the joint CMB procurement policy for four separate contracts totaling $251,000 for studies of census undercounting. The four separate presidential CMB contracts were as follows:

In the summer of 1999, the presidential CMB prepared a request for proposal for an analysis of how a 2000 census undercount could affect federal funding allocations to the 50 states over the next decade. The cost of this effort was estimated to be under $150,000, which, under the CMB joint policy for procurement of temporary or intermittent services, requires that three bids be obtained with an effort to award contracts to small businesses. Due to the type of work required, three nationally known consultants were contacted. Two firms submitted written proposals, and the lowest bid, $140,000, was selected. This work was completed in November 1999 with a briefing.

The contractor proposed a second analysis to estimate funding distortions at a cost of $50,000, which was under the $75,000 CMB policy threshold for a sole-source procurement. The cost was later amended and increased to $70,000 because the Bureau of the Census released new data requiring recalculations. A written report was delivered by December 31, 1999.

The contractor then proposed a third analysis of funding distortions at a cost of $30,000, which CMB agreed would be efficient to perform based on work already done. For efficiency, the results of this effort were incorporated into the written report of the second analysis.
Finally, the contractor provided an Internet version of the study at a cost of $6,000 and printed copies at a cost of $5,000. There is no indication that CMB anticipated having the second and third analyses performed when the first analysis was competitively awarded.

5. Were protected census data accessed by two former congressional CMB employees?

We found no evidence that two former congressional CMB employees accessed confidential census data protected by Title 13, U.S. Code. All CMB employees take an oath not to disclose Title 13 information. The two former employees involved stated that their former jobs as CMB counsel and director of outreach, respectively, gave them no reason to access Title 13 data and that they had not seen any such data during their time at CMB. We obtained Bureau of the Census control logs, maintained in date order, of all CMB requests for documents including Title 13 protected data. Bureau of the Census officials identified only the two following occasions in which Title 13 data were provided to the congressional CMB before the two staff left in September 1999 to work for the Republican National Committee on redistricting issues.

On December 11, 1998, the congressional CMB requested a Post Enumeration Study (PES) of the 1990 Census, which included Title 13 data. However, the files were incorrect when initially provided in April 1999, and the congressional CMB immediately returned the data for correction. The corrected files were provided to the congressional CMB in November 1999, 2 months after the two employees left.

On June 24, 1999, the congressional CMB requested Title 13 data for Integrated Coverage Measurement and PES operations for six blocks in Sacramento, California, and Columbia, South Carolina. The data were provided to the congressional CMB in August 1999, 1 month before the two employees left.
These data were in preparation for a dress rehearsal comparing the 2000 Census to the 1990 Census, which according to congressional CMB officials did not involve the two former employees.

6. Were questions about political parties asked in a congressional CMB contractor focus study?

We found that 2 of 27 questions in a congressional CMB contractor focus group study mentioned political parties. In April 1999, the congressional CMB hired a contractor to conduct focus groups on hard-to-count minorities in the nation’s largest public housing project in Chicago, Illinois. Two groups of 13 residents each, divided by gender, were asked the same 27 questions over 2 hours to learn their attitudes and perceptions about the census. Both sessions were videotaped, and each resident received $50 to participate. The two questions that mentioned political parties were as follows:

“There are some members of the U.S. Congress who believe using people like you to help with the census count is the better way. Tell me who do you think these people might be: Democrat or Republican members of Congress?”

“Now if I say to you the people who believe it’s better to employ you to help with the census are Republicans, what do you think?”

According to congressional CMB officials, these two questions were asked to determine how minority views of political parties affected current efforts by both parties to improve census undercounts.

7. Was there a verbal confrontation by a congressional CMB contractor, and how was the matter subsequently resolved?

We found some evidence of a verbal confrontation between a congressional CMB contractor and Bureau of the Census employees in a local census office (LCO). CMB employees and contractors made numerous LCO visits as part of the CMB mission to monitor the 2000 Census and to identify potential problems and issues to bring to the attention of the Bureau of the Census. A congressional CMB contractor met with LCO officials during a March 22, 2000, visit to obtain information on census activities.
In his report, the contractor, a former criminal investigator, observed that during a 3-hour interview, the local manager appeared evasive in responding to his questions, with repeated looks at the two area managers who were present. The contractor stated that he asked the managers to provide honest answers to his questions and warned that, if they did not, it might not look good later. Words to that effect were also confirmed by a detailed e-mail of the visit by the LCO area manager, who referred to the 3 hours of detailed questions as “a deposition that was relatively painless, but extremely taxing.” The congressional CMB and the contractor mutually terminated their agreement effective 14 days after the visit for a variety of reasons, including the LCO visit. Shortly thereafter, the congressional CMB adopted a conduct policy for field visits. LCO managers prepare brief reports on CMB visits, and the Bureau of the Census chief of field decennial oversight and communications indicated that there were no further incidents regarding CMB visits.

The vast majority of CMB disbursements were supported and related to official business. However, we could not evaluate about $119,000 of disbursements, all but about $1,000 of which were the presidential CMB’s. This was because the presidential CMB could not provide adequate supporting documentation. CMB disbursed about $7.4 million from June 1998 through March 31, 2000, from its appropriated funds plus another $0.5 million from the Congressional Printing and Binding appropriation. As shown in table 1, about half of these disbursements were for personnel compensation and benefits, with the balance for services and travel. The congressional and presidential CMB each received $1.5 million annually and spent their appropriations differently. The congressional CMB employed five field staff and, accordingly, spent more than the presidential CMB on personnel.
It also spent more on travel because of an outreach program with local census offices, state and local governments, and community groups. The presidential CMB spent more on contractors, including about $251,000 for four studies on census undercounting.

The various financial policies, procedures, and practices CMB adopted did not always form a system of internal control adequate for guiding operations, safeguarding assets, and assuring compliance with applicable laws and regulations. We found specific internal control weaknesses in three main areas: travel, personnel, and the procurement of services. Some of these weaknesses allowed a number of inappropriate and wasteful practices, and two policies were inconsistent with federal law.

CMB is subject to the Federal Travel Regulations applicable to federal employees of most agencies. We found several significant departures from these regulations, particularly by the presidential CMB, including inappropriate use of government credit cards, ineligible travel expenses (such as charges for family member lodging), and no written authorizations to travel. These problems resulted in part from ineffective or missing internal controls over travel. Internal controls over the use, payment, and monitoring of credit cards for official government travel were absent, leading to some abuses by both sides of CMB. Based on CMB requisitions, GPO issued these cards to selected CMB board members and employees, who are personally responsible for their prompt payment and use for official purposes under the master agreement between the credit card company and GSA, which represents the federal government. The joint CMB travel policy specifies how government travel credit cards are to be used and that delinquencies are to be monitored by the executive directors. We found that several employees did not pay amounts when due, resulting in accounts up to 7 months in arrears.
Payment delays can cost the government revenue under the master agreement. Cardholder accounts for one congressional CMB employee and one presidential CMB employee totaling over $5,000 were written off by the credit card company even though, from the start of the CMB through March 31, 2000, the two employees were each reimbursed over $11,000 for travel expenses. While the congressional CMB employee’s balance remains unpaid and she is in the process of establishing a repayment agreement, the presidential CMB employee paid her entire amount due by April 2000. Government credit cards are to be used only for expenses incurred in conjunction with official travel, a restriction that was included in the CMB joint travel policy. However, we noted over $7,500 of improper charges by two presidential CMB employees. This included the executive director from August 1998 into January 1999 and the same presidential CMB employee whose charges in June 1999 were later written off by the credit card company. Both employees used the card for personal expenses such as (1) hotel rooms while on annual leave, (2) local restaurants, (3) clothing stores, (4) mail order businesses, and (5) amusement park admission. On January 6, 1999, the former presidential CMB director of operations noted the abuses listed above and informed the presidential CMB executive director. On January 8, 1999, the presidential CMB executive director’s credit card was canceled. From July 1999 through July 2000, the presidential CMB executive director improperly used three other employees’ government credit cards for CMB business charges and for about $4,500 of personal travel and local restaurant bills. The presidential CMB executive director promptly paid these charges and the charges to his own credit cards and did not seek reimbursement from CMB for personal charges. However, this use violates the cardholder agreements and CMB’s joint travel policy, which states that the card may be used only by the cardholder.
We found several other ineligible travel charges relating to the presidential CMB. Lodging costs were left on a government travel credit card of a presidential CMB employee who had reserved hotel rooms on behalf of a presidential CMB board member and his staff and family for two trips to Washington, D.C., in March 2000. These costs, totaling $5,882, were paid for by CMB and included $1,708 for six nights for a second hotel room and other charges for the board member’s family. In addition, for the same trip, this board member erroneously received a separate reimbursement of $1,346 in May 2000 from CMB (approved by the executive director) for the hotel and telephone calls that had been paid for by the CMB employee who made the reservations. CMB reimbursed that employee and, after we discovered these errors, billed the board member. On August 28, 2000, CMB was paid for both the erroneous reimbursement and family charges. According to CMB and the board member’s staff, these errors were the result of a misunderstanding, and the board member was not aware of the errors. In two instances, a presidential CMB employee charged hotel restaurant bills to her room and obtained reimbursement for the entire hotel bill in addition to requesting full per diem for meals. In another instance, this same employee paid for a meal for 10 individuals and charged the cost to her hotel bill. The employee obtained reimbursement for the entire hotel bill and also requested a separate reimbursement for the meal. The above errors resulted in an overpayment of $402 to the employee, which we brought to the attention of the presidential CMB. The employee reimbursed CMB with two separate checks in August 2000. Inappropriate usage of credit cards and an ineffective review of these disbursements contributed to these ineligible expenses not being detected. For the presidential CMB, we found no written authorizations to travel.
According to the presidential CMB executive director, such approvals were oral. This practice is contrary to the CMB joint travel policy, which requires that a written memo or CMB travel order (1) be issued before any expenses are incurred, (2) specify the points of travel, purpose, and estimated costs, and (3) be signed by the executive director. This practice is also contrary to the Standards for Internal Control in the Federal Government, which emphasizes that appropriate authorization of transactions is needed to ensure that only valid transactions occur. The lack of written authorization increases the risk of transactions that improperly commit government resources. Further, for the presidential CMB, we found two trips taken by individuals who were not CMB board members, employees, consultants, or witnesses and whose travel was not approved in advance in writing by the co-chairman as required by the joint travel policy. For the presidential CMB, we found that some travel advances were outstanding from 1 to 3 months before a trip was taken, and travel reimbursement requests were frequently submitted over 30 days after the trip was completed. Specifically:

The presidential CMB executive director received most of the advances on the presidential CMB after his government credit card was canceled. Two of the advances (one to the executive director for $750 and another to an employee for $500) were provided for trips that were canceled. In both instances, CMB was not repaid until 2 months later.

In June 1999, another employee received a travel advance of $357 that exceeded trip expenses by $56; the excess was not repaid until we noted it. The employee repaid CMB in August 2000.

The congressional CMB provided two travel advances to individuals invited to attend an undercount summit. The congressional CMB did not require these two individuals to submit travel expense reports when their travel was completed, nor were they required to submit receipts.
As a result, these two travel advances of $199 each remain unliquidated. The CMB joint travel policy provided no guidance on travel advances and their timely liquidation. If an advance is not promptly repaid when a trip is not taken, the advance amounts to an interest-free loan and a potential lack of accountability over assets, as discussed in Standards for Internal Control in the Federal Government. Additionally, CMB joint travel policy states that reimbursement requests “should be submitted as soon as possible following the completion of a trip.” However, the policy does not specify any maximum number of days for doing this, resulting in untimely execution of transactions and outdated financial information. For the congressional CMB, we found personal side trips that were inconsistent with the joint travel policy. While the policy allows personal side trips, employees may not use a government travel request form or government credit card to pay for transportation, and a government rate may not be used for travel expenses. Contrary to this policy, the congressional CMB allowed employees to make personal side trips at government rates while traveling on official business as long as they (1) prepared an analysis showing that the total airfare, including the personal side trip, did not exceed the airfare of the official trip and (2) obtained advance permission from the executive director. We identified one employee who took 27 personal side trips. Also, contrary to the CMB joint travel policy, a government credit card was improperly used to pay for the trips at a government rate. Further, the employee prepared the above analysis and obtained executive director approval in advance for only 11 of these trips. We reviewed evidence that several of the personal side trips for which advance approval was not obtained resulted in a cost of about $400 to the federal government.
The CMB joint travel policy states that free travel, mileage, discounts, upgrades, and other travel promotional awards may be used at the discretion of the members and employees of CMB. While the policy states that official use is encouraged, it is unclear whether personal use of these awards is prohibited. Ample legal guidance exists concerning employee use of frequent-flyer mileage credits, bonus tickets, and other similar promotional materials received for official government travel, establishing that these types of travel awards belong to the federal government and may not be retained by the government employee for personal use.

We recommend that the CMB board augment the joint travel policy to address the use of travel advances, specify that requests for travel reimbursements are to be submitted within a certain number of days after the trip is completed, and amend the travel policy to provide that travel promotion items, including frequent-flyer mileage, accrue exclusively to the government.

We recommend that both executive directors monitor government credit card usage and report delinquencies, as well as any personal use of the cards, to the co-chairmen as required by the joint travel policy.

We recommend that the congressional CMB executive director enforce the CMB joint travel policy on personal side travel.

We recommend that the presidential CMB executive director enforce the joint travel policy requiring a written memo or CMB travel order for all travel, including written approval by the co-chairman for travel by anyone other than a board member, employee, or witness.

As discussed in its response to a draft of this report in appendix IV, CMB plans to implement all of our recommendations.

CMB legislation provides that the board is exempt from certain provisions of federal personnel law governing appointments and pay. Therefore, CMB can hire whomever it wants and pay that person accordingly.
However, this does not exempt CMB from other areas of federal personnel law, such as the provisions of the Annual and Sick Leave Act, applicable to most federal employees. We identified time and attendance abuses, unreconciled payroll and benefits, and a lack of employee evaluations. Although the congressional and presidential CMB adopted policies and procedures for personnel, we noted instances in which they were not followed or areas where internal controls were weak. For both sides, we found employees arriving late and leaving early, inaccurate annual leave accounting, and sick leave and other leave issues. In Part I of its policy manuals, CMB adopted official office hours of Monday through Friday from 8:30 a.m. to 5:30 p.m. with 1 hour for lunch, resulting in an 8-hour workday. However, during the 10 weeks we were at CMB, from June through August 2000, we observed and were told in at least five employee interviews that CMB personnel frequently arrived late and left early without charging leave. On many days, the building services staff opened the office for us at 8:30 a.m., and usually only a few employees remained at 5:30 p.m. For example, we observed one employee who arrived after 10 a.m. and left before 5 p.m. on many occasions. This practice, which results in less than 8 hours worked, violated CMB policy manuals and does not exemplify good management of human capital. A contributing factor to this problem is that CMB staff are not required to maintain time sheets, and all salaried employees are presumed to have worked 80 hours in a biweekly pay period unless they inform their supervisor otherwise. Time sheets would have greatly increased accountability. An employee’s signature on a time sheet attests that his or her time and attendance is correct, and a supervisor’s signature attests to a review and approval. We found weak internal controls and poor accounting by CMB regarding annual leave. 
The CMB jointly adopted the federal government policy on earning and taking annual leave in accordance with the Annual and Sick Leave Act. GPO records annual leave information that is provided monthly by CMB. However, we found that CMB was not reconciling its leave records with GPO. Prudent financial management practices call for an entity to reconcile its balances monthly. With timely reconciliations, the following differences could have been detected and corrected earlier.

For congressional CMB employees, GPO records show annual leave charges for three employees that were 24 hours (7 percent) less than those on CMB’s records, and charges for three employees that were 40 hours (15 percent) more than those on CMB’s records.

For the presidential CMB, GPO records show annual leave charges for 14 employees that were 566 hours (31 percent) less than those on CMB’s records, and charges for 3 employees that were 40 hours (13 percent) more than those on CMB’s records.

For many presidential CMB employees, we found long time lags between when annual leave was taken and when GPO records reflected that fact because CMB did not promptly provide information to GPO. Moreover, GPO provided an annual printout of leave balances for reconciling purposes, but we found the above errors had not been detected or corrected by CMB. Leave accounting problems contributed to errors in several final paychecks for employees departing CMB, including the following.

A former congressional CMB employee, who had been advanced 4 hours of annual leave, left CMB without having $142 appropriately deducted from her final paycheck.

Two former presidential CMB employees who had been advanced 28 hours and 12 hours of annual leave left CMB without having $470 and $258, respectively, appropriately deducted from their final pay, while another employee was not paid $633 for unused annual leave.

The operating environment was conducive to taking but not charging annual leave.
Although we could not determine the extent to which this was occurring, we did identify two instances when staff were on personal business and did not charge annual leave:

For the congressional CMB, one employee had taken 27 personal side trips in connection with official travel (discussed earlier in this report); during these personal trips, there were 8 workdays for which no leave was recorded on the office records. Later, the CMB comptroller informed us that 7 of those workdays should have been recorded as personal leave days and 1 day should have been charged as annual leave.

For the presidential CMB, we found that the executive director was frequently out of town on non-CMB business that included 29 trips to Tampa, Florida. According to the presidential CMB executive director and other evidence obtained, these trips were primarily to care for a family member. For these trips, we identified 12-1/2 days that were not charged to leave. Later, the executive director represented to us that 8 days should have been recorded as family sick leave, 1 day as personal leave he awarded himself, and 3-1/2 days as annual leave. On September 20, 2000, the presidential CMB provided adjustments to GPO to correct the executive director’s leave records, leaving him with a negative leave balance of 4 hours.

CMB adopted an unlimited sick leave policy that was not consistent with federal law. Under the Annual and Sick Leave Act, a federal employee is entitled to sick leave with pay that accrues on the basis of one-half day for each full biweekly pay period. Unused sick leave accumulates for future use. The statute establishing CMB does not exclude CMB from the Annual and Sick Leave Act. The act has long been applied to employees of temporary commissions in the legislative branch whose members are appointed by the President and members of Congress.
Therefore, the executive directors and the employees they appoint are subject to the Annual and Sick Leave Act and its sick leave provisions. In addition, we found extensive use of sick leave in the presidential CMB. Its average recorded usage of sick leave for calendar year 1999 was 17.8 days, which is 4.8 more days than an employee accrues in a year. Although we could not determine the number of sick days, if any, that were not recorded, we were told in employee interviews that many presidential CMB staff members were frequently out sick.

Two Extra Holidays. CMB granted two federal holidays in addition to those authorized by law. Federal law provides for 10 named annual federal holidays. Part II of the CMB policy manuals approved by the board on November 23, 1998, lists the federal holidays that CMB will observe. While that list includes the 10 holidays prescribed by federal law, it adds the day after Thanksgiving and the day before Christmas as paid federal holidays.

Personal Leave. CMB adopted a policy allowing personal days off, or personal leave, at the discretion of each executive director, to compensate employees for extra hours worked. While this policy is reasonable in substance, we found several examples in which there was inadequate documentation over the granting of personal leave. One congressional CMB employee was awarded a total of 23 personal leave days during 1998 and 1999 as compensation for travel days occurring on weekends. However, as previously discussed (see “Annual Leave Accounting”), this individual had also taken 27 personal side trips while on CMB business travel and, as discussed below, had been inappropriately awarded military leave. Another congressional CMB employee was awarded a total of 18 personal leave days during 1998 and 1999. We found minimal documented justification for these two individuals to receive the substantial number of personal leave days that they were awarded.
In addition, the presidential CMB executive director granted himself 7 days of personal leave during 1998 and did not justify in writing any corresponding time worked. This included December 30, 1998, when he called the CMB office from Florida while on non-CMB business to close the office for New Year’s Eve, thus granting himself an additional personal leave day rather than charging annual leave. Presidential CMB leave records did not show the granting of a personal day for December 31, 1998, for any employees.

Military Leave. A congressional CMB employee, who was a member of the Naval Reserve, attended reserve unit inactive duty training before October 5, 1999, for which CMB granted him 8 days of paid military leave from his CMB position, as well as 2 personal days. Both CMB policy manuals provide for military leave by stating that staff members on active duty are allowed paid military leave that does not count against annual leave, which is consistent with federal law. However, since this duty was inactive duty training, the employee was not entitled to military leave for these days and should have charged annual leave or leave without pay under CMB policy and federal law. Effective October 5, 1999, federal law was changed to allow inactive duty as an authorized use of military leave.

CMB relied on GPO to correctly pay employees, calculate fringe benefits, and charge expenses incurred to CMB’s appropriation without comparing this activity to its own records. Prudent financial management practices call for an entity to reconcile its financial records with those of entities with whom it has financial relations to reduce the risk of errors. As we performed our audit procedures, we found uncorrected payroll errors. For example, GPO overcharged the congressional CMB about $300 of payroll for one employee and the presidential CMB $3,110 of payroll for five employees, although, in all cases, the employees were correctly paid.
Charges for employee benefits were handled on an estimated aggregate basis throughout the year rather than by calculating actual charges by pay period. GPO charged CMB an estimated rate of 25 percent of payroll based on standard percentages. While this appeared reasonable on an interim basis, we found no indication that the charges were validated at least annually to reflect the actual costs paid by GPO or that balances in the appropriation account were appropriately adjusted. Without reconciliation, CMB cannot be assured that it is not being undercharged or overcharged by GPO for employee benefits.

We found that of the employees who had been employed at CMB for at least 1 year, 11 congressional CMB employees and 4 presidential CMB employees had not received written performance evaluations. The preparation of written performance evaluations would support merit pay increases and termination decisions.

We recommend that the CMB board amend the unlimited sick leave policy to conform with federal law by providing that 4 hours accrue per biweekly pay period and amend its holiday policy to conform with federal law by eliminating the two extra holidays.

We recommend that both executive directors enforce Part I of the policy manual providing that official office hours apply to all employees unless an alternate work schedule is approved in advance by the executive directors, reconcile CMB records of employee annual leave balances with GPO summary reports at least monthly, instruct staff to record all leave taken, reconcile CMB and GPO records on payroll and benefit costs on a regular basis, and prepare performance evaluations for all employees at least annually.

We recommend that the presidential CMB executive director review sick leave and other leave taken to ensure that extensive use of sick leave is appropriate and documented.

As discussed in its response to a draft of this report in appendix IV, CMB plans to implement all of our recommendations.
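The leave accrual and reconciliation arithmetic underlying the findings above can be illustrated with a short sketch. The accrual figures come from this report (one-half day of sick leave per full biweekly pay period, 26 periods a year, and average presidential CMB usage of 17.8 days); the record layouts, employee labels, and function names are illustrative assumptions only, not CMB's or GPO's actual systems.

```python
# Illustrative sketch of the leave arithmetic and monthly reconciliation
# discussed in this report. All names and record layouts are hypothetical.

SICK_LEAVE_HOURS_PER_PAY_PERIOD = 4   # one-half day per full biweekly period
PAY_PERIODS_PER_YEAR = 26
HOURS_PER_DAY = 8

def annual_sick_leave_accrual_days():
    """Days of sick leave a federal employee accrues in a year."""
    return SICK_LEAVE_HOURS_PER_PAY_PERIOD * PAY_PERIODS_PER_YEAR / HOURS_PER_DAY

def reconcile_leave(cmb_records, gpo_records):
    """Compare per-employee leave charges (in hours) from two sets of records
    and return the employees whose charges differ, with GPO minus CMB hours."""
    differences = {}
    for employee in set(cmb_records) | set(gpo_records):
        cmb_hours = cmb_records.get(employee, 0)
        gpo_hours = gpo_records.get(employee, 0)
        if cmb_hours != gpo_hours:
            differences[employee] = gpo_hours - cmb_hours
    return differences

# Accrual works out to 13 days per year, so average recorded usage of
# 17.8 days exceeds a full year's accrual by 4.8 days, as the report states.
accrued = annual_sick_leave_accrual_days()      # 13.0 days
excess = round(17.8 - accrued, 1)               # 4.8 days

# Hypothetical reconciliation: GPO shows 24 fewer hours charged than CMB for
# one employee, the kind of discrepancy a monthly comparison would surface.
diffs = reconcile_leave({"employee A": 80, "employee B": 40},
                        {"employee A": 56, "employee B": 40})
```

A monthly run of such a comparison against the GPO printout would have surfaced the 24-hour and 566-hour differences described above well before employees departed with uncorrected balances.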
As a legislative branch entity, CMB is exempt from most of the procurement rules that guide the executive branch, such as the Federal Acquisition Regulations. Nevertheless, as the Standards for Internal Control in the Federal Government emphasizes, to account for and safeguard the use of federal resources, federal entities should have good internal controls, such as managerial approvals and authorizations, accountability for resources and records, and appropriate documentation of transactions. Although the congressional and presidential CMB adopted joint policies and procedures for the procurement of temporary and intermittent services, and separate policies and procedures on purchasing supplies, we noted instances in which they were not followed or areas in which internal controls were weak. These weaknesses resulted in wasteful expenses for telephone use, weak contract accounting, and lack of approvals to pay invoices exceeding certain dollar amounts. For the presidential CMB, we also noted some weaknesses in controls over property. Following are examples of wasteful expenses for telephone usage resulting from ineffective internal controls.

The use of cell phones was extensive, with some individuals using 2,300 minutes a month, resulting in excess minutes over the monthly plan at costs ranging from $353 to $548.

Some cell phones with 500- to 1,000-minute plans costing over $50 monthly were not used at all. With better controls, CMB could have assigned these unused cell phones to employees who exceeded their monthly plan minutes.

Cell phones and pagers were provided to all employees who requested them regardless of whether there was a valid business need.

We found several other issues related to telecommunications that were specific to the presidential CMB, including the following.
In practice, based on interviews with staff and our review of detailed invoices, unlimited personal telephone calls were allowed, with only three presidential CMB personnel providing reimbursement for personal calls. We noted numerous long distance and local personal calls on cell phones on weekends, holidays, and evenings. Prepaid telephone cards purchased for about $800 were not controlled by person or use. Extensive hotel telephone charges as high as $400 for one trip were made when a cell phone or prepaid telephone card could have been used. CMB contracting procedures were inadequate. Some contractors worked for months without written contracts, particularly in 1998 when CMB began operations. The lack of a written contract with terms and conditions exposes CMB to potential contract disputes. We also noted instances in which payments exceeded the maximum amount of the contract or contractors were paid for services rendered after the contract period without a written modification. Although CMB used GPO to procure goods and services through September 30, 1998, CMB cited GPO procedures on contracts and purchases as cumbersome and subsequently conducted its own contracting. CMB did not consistently follow its policies on approvals to pay for purchases. CMB written procurement policies require its executive directors to approve payments exceeding $1,000 and $750 for the congressional and presidential CMB, respectively. For payments under these thresholds, the director of operations or finance for the congressional and presidential CMB, respectively, must approve the payments. For the congressional CMB, we found written evidence of executive director or designated staff approval for all but five disbursements totaling about $3,000. For the presidential CMB, we found written approval for all payments except for about $46,000 of the $118,000 of disbursements with inadequate support.
Lack of written approvals increases the risk of paying amounts prematurely or paying amounts that should not have been paid at all. Because of weak internal controls over property, the presidential CMB was missing a laptop computer, and three cell phones were indicated as lost, at a cost of about $2,800. The Standards for Internal Control in the Federal Government specifies that physical control over vulnerable assets must be established to ensure that they are secured and safeguarded. The lack of this control can lead to the risk of loss or unauthorized use of these assets. The presidential CMB did not always record property movement or transfers in its records, which contributed to its difficulty in determining who had what property. On April 18, 2000, GPO took a physical inventory of CMB property (excluding cell phones) and accounted for all recorded property except four laptop computers. After much research, the presidential CMB was able to account for three laptops. However, it was unable to account for the remaining laptop computer, which according to its records was assigned to a former employee. In addition, the presidential CMB executive director stated that of the three lost cell phones, he misplaced one, broke another, and reported the third as stolen at the Tampa airport. We recommend that both executive directors improve contract accounting by requiring that contracts are promptly written, contract activity is monitored, and contracts are modified before contract amounts or dates are exceeded; adhere to CMB written procurement policies on approval of purchases and payments over established dollar limits; and evaluate office telecommunications use and adopt a written policy on cell phone use.
We recommend that the presidential CMB executive director improve accountability over property by maintaining accurate records of equipment by GPO property tag and serial number, having all employees sign for equipment such as laptop computers, cell phones, and Palm Pilots, and conducting periodic physical inventories. As discussed by CMB in its response to a draft of this report in appendix IV, CMB plans to implement all of our recommendations. Our Standards for Internal Control in the Federal Government emphasizes that a positive internal control environment provides discipline, structure, and a climate that forms a foundation for effective internal controls. Our work has shown that the lack of this discipline and climate affects the quality of internal controls and leads to the risk of improper behavior. Although specific weaknesses relating to travel, personnel, and the procurement of services were noted for both components, the congressional CMB made a considerable effort to establish an internal control environment that provided a foundation and tone of discipline and structure. This effort included the use of written approvals, implementation of recommendations from a contracted study to improve internal controls, and contracting for independent financial audits. We found a comparatively weak internal control environment for the presidential CMB, as indicated below. Lack of Administrative Leadership. We found that many of the presidential CMB executive director’s own actions were not conducive to fostering a positive control environment. This contributed to a weak administrative management environment and poor employee attendance. No Evidence of Authority to Travel. The CMB joint travel policy requires written travel orders and approvals. According to the presidential CMB executive director, authority to travel was given orally, without documentation of dates, location, purpose, and estimated cost.
Without written approval for travel, there is no written evidence that travel was authorized for official purposes or that advance approval was obtained for amounts that exceeded published government lodging rates, which, for hotel charges, frequently occurred. Excessive Reliance on GPO. The presidential CMB depended almost entirely on GPO to correctly handle personnel matters, disburse funds, and charge expenses. No record of financial disbursements sent to GPO for payment was maintained and reconciled monthly with GPO; thus, there was no assurance that payments were made as intended. Poor Records Management. Personnel and contractor files and invoices to support disbursements were in disarray, making it very difficult to locate documents. The presidential CMB relied heavily on GPO to maintain the support for its disbursements and could not find many original documents supporting disbursements, such as vendor invoices, credit card statements, and evidence that items purchased were received. In many instances, copies of credit card statements obtained from GPO or the credit card company were the only supporting documentation for purchases. Personnel and contract files were also incomplete. Although a significant effort was made by staff and even a contractor hired specifically to locate, copy, and organize documents, after nearly 5 months, the presidential CMB was unable to provide adequate support for about $118,000 of disbursements. No Evidence for Requisitions. We found no documented advance approvals for purchases of services and supplies, resulting in a lack of control over how federal dollars were spent. We recommend that the CMB executive directors to the extent practicable and cost beneficial, (1) correct leave balances, (2) recover personal charges for telephone usage, and (3) recover missing property. 
We recommend that the presidential CMB executive director improve the CMB records management system to maintain adequate support for personnel, contract, and disbursement activities, and maintain a record of authorized financial disbursements sent to GPO for payment and reconcile authorized payments with GPO disbursements monthly. As discussed in its response to a draft of this report in appendix IV, CMB plans to implement all of our recommendations. We identified transactions of CMB board members, employees, and contractors considered to be related parties based upon agreed-upon criteria. Our disclosure of related-party relationships and transactions does not imply any improprieties, but is in response to your request for this information. We identified no related-party relationships or transactions regarding any (1) businesses owned, operated, or directed by CMB board members or employees and doing business with CMB, or (2) CMB transactions with family members of board members or employees. We identified a financial relationship among CMB board members and employees, in which a former presidential CMB board member personally loaned the presidential CMB executive director between $50,000 and $100,000. Finally, we identified transactions involving prior substantive relationships among CMB officials, including employer/employee or contractor affiliations. We identified 13 congressional and 11 presidential CMB related-party relationships involving about $1 million in salaries, benefits, and contracts for each side. Most of these related-party relationships arose from (1) prior employment or contracting with organizations of board members and (2) cases in which CMB employees later became CMB contractors or vice versa. Further disclosure on related parties, transactions, and amounts appears in appendix I. CMB made a reasonable effort to file with an appropriate federal oversight entity financial disclosure forms for CMB personnel paid over certain specified limits.
However, no entity would accept the forms, and CMB resorted to appointing its congressional and presidential legal counsels as recipients of the financial disclosure forms. The Office of Government Ethics for the executive branch informed CMB that it would not accept CMB disclosure forms for filing since CMB was not an executive agency. The Senate Committee on Ethics informed CMB that it only accepted forms for legislative branch boards and commissions created in a year ending with an even number, such as 1998. The House Committee on Ethics informed CMB that it usually accepted forms for boards and commissions created in a year ending with an odd number, such as 1997, which applies to CMB. However, since the compensation of CMB employees is not disbursed by the Clerk of the House, it would not accept CMB disclosure forms for filing. As part of future legislation establishing boards and commissions, the Congress should consider including a provision that specifies whether the entity is to be covered by the employee financial disclosure provisions of the Ethics in Government Act of 1978 and, if covered, the office to which the disclosure reports are to be submitted for review. In comments on a draft of this report, CMB said it plans to implement our recommendations on administrative policies and practices, effective October 1, 2000. CMB further said that it is planning to engage an outside accounting firm to help incorporate our recommendations on internal controls and accounting procedures. CMB stated that it “was pleased that we did not find any misuse of government funds and that no board funds were missing.” CMB also noted that the board has been given a clean bill of health on the seven specific matters discussed in the report where we found little documented evidence to substantiate possible improprieties and that all the incidental errors in reimbursements we found have been corrected.
CMB concluded that the steps it is taking as a result of our audit reflect its commitment to safeguard taxpayer funds. In addition, in their letter commenting on a draft of this report, the legal counsels to CMB did not disagree with the report. The counsels also agreed with our matter for congressional consideration that, as part of future legislation establishing boards and commissions, the Congress should consider specifying whether an entity is to be covered by the employee financial disclosure provisions of the Ethics in Government Act of 1978 and, if so, to whom the entity should report. We are pleased that CMB is correcting its internal control weaknesses and its policies, including conformity with laws applicable to legislative branch agencies. However, we disagree with the CMB characterization that we did not find any misuse of government funds. We believe that the inappropriate and wasteful practices we found constitute misuse of government funds, such as employees who received full-time salaries while consistently working less than 8 hours per day; employees not recording leave when taken; employees not paying their government credit card charges; wasteful telecommunications expenses, including uncontrolled personal telephone usage for the presidential CMB; duplicate reimbursement for meals while employees were traveling; personal side trips at some cost to the government; two extra holidays and an unlimited sick leave policy; and missing or lost federal property. Some of these inappropriate or wasteful charges to the federal government were corrected after we notified CMB. These included some corrections of leave balances and reimbursement from a board member for charges for his family and other erroneous travel reimbursements. CMB has advised us that other actions are underway to reduce losses to the federal government, including obtaining payment of all delinquent government credit card charges.
In addition, we have recommended that the CMB executive directors to the extent practical and cost beneficial (1) correct leave balances, (2) recover personal charges for telephone usage, and (3) recover missing property. As discussed in appendix II, we did not quantify the aggregate impact of improper charges discussed in this report. In addition, the scope of our audit was restricted because CMB could not provide us with adequate documentation to support $119,000 of disbursements. Consequently, we were unable to determine whether any other inappropriate or wasteful practices occurred for these disbursements. In its comments, GPO noted that it provided a service bureau function to CMB and that the accuracy and timeliness of financial and annual leave accounting was dependent on CMB. GPO also stated that CMB is responsible for maintaining a system of internal controls and retaining support for financial transactions. Further, although government credit cards were issued through GPO, CMB was responsible for their official use and prompt payment by the cardholder, as stated in the joint CMB travel policy. CMB had several other specific comments on the draft of this report, which we incorporated as appropriate. The complete text of CMB and its legal counsels’ responses to our draft report are presented in appendix IV. The complete text of GPO’s response to our draft report is presented in appendix V. We are sending copies of this report to Senator George V. Voinovich, Chairman, Subcommittee on the Oversight of Government Management, Restructuring, and the District of Columbia, Senate Committee on Governmental Affairs; Representative Dan Burton, Chairman, Representative Henry A. Waxman, Ranking Minority Member, House Committee on Government Reform; Senator Fred Thompson, Chairman, Senator Joseph I. 
Lieberman, Ranking Minority Member, Senate Committee on Governmental Affairs; Ken Blackwell and Gilbert Casellas, Co-chairmen, Census Monitoring Board; and other interested parties. If you or your staffs have questions about this report, please contact me at (202) 512-9505. Staff who made key contributions to this report are listed in appendix VI. Employment of a congressional CMB field staff member, at an annual starting salary of $52,000, who was a former employee of a congressional CMB board member’s state government organization. Employment of the congressional CMB press secretary, at an annual starting salary of $52,000, who was a former employee of a congressional CMB board member’s state government organization. Employment of a congressional CMB field staff member, at an annual starting salary of $78,000, who was a former employee of a congressional CMB board member’s state government organization and political campaign. This employee later became a congressional CMB contractor with two successive contracts to provide state and local government outreach services and Bureau of the Census local office coordination at an amount not to exceed $81,000 annually, including expenses. Engagement of a former congressional CMB employee who served as a senior analyst, at an annual starting salary of $85,000, who became a congressional CMB contractor for statistical issues and was paid $5,907. Engagement of a former congressional CMB employee who served as field staff, at an annual starting salary of $78,000, who became a congressional CMB contractor to provide community outreach services under a contract that paid $18,384 and a successive contract not to exceed $70,200, including expenses. Employment of a former congressional CMB contractor for community outreach services, who was paid $4,950 as a contractor, and became the congressional CMB director of outreach at an annual starting salary of $100,000.
Engagement of a congressional CMB contractor for two successive contracts totaling $66,235 to conduct focus groups. This contractor previously performed work for a congressional CMB board member’s political campaign and an organization affiliated with another congressional CMB board member. Engagement of a congressional CMB contractor for community outreach services under a contract that paid $4,111 and a successive contract not to exceed $25,000, including expenses. This contractor previously performed work for a congressional CMB board member’s political campaign. Engagement of a congressional CMB contractor for community outreach services under a contract not to exceed $25,000, including expenses. This contractor previously performed work for a congressional CMB board member’s political campaign. Engagement of a congressional CMB contractor for community outreach services under a contract that paid $51,901. This contractor previously performed work for a congressional CMB board member’s political campaign. Engagement of a congressional CMB contractor for community outreach services under a contract that paid $44,432. This contractor previously performed work for a congressional CMB board member’s political campaign. Engagement of a congressional CMB legal counsel, with a contract to provide legal services for $60,000 annually plus expenses, who previously performed work for a company affiliated with a congressional CMB board member. Engagement of a congressional CMB contractor for a contract to provide automated document management systems for $46,775. This contractor provided software from a company that had a prior business relationship with the congressional CMB executive director, who recused himself from contractor activities to avoid a potential conflict of interest.
Employment of the presidential CMB executive director, at an annual starting salary of $118,400, who was a contractor to a former presidential CMB board member’s federal organization and served on the member’s staff in the U.S. House of Representatives. Employment of the presidential CMB director of operations, at an annual starting salary of $75,000, who was an employee of a former presidential CMB board member’s federal organization. Employment of the presidential CMB director of finance, at an annual starting salary of $69,000, who was an employee at a business where a former presidential CMB board member was a director. This employee was hired by the executive director several months after the board member left CMB. Employment of the presidential CMB deputy director of policy, at an annual starting salary of $62,500, who was a former employee of a former presidential CMB board member’s federal organization. Employment of a presidential CMB administrative assistant, at an annual starting salary of $40,000, who was a former employee of a presidential CMB board member’s business. Employment of a presidential CMB administrative assistant, at an annual starting salary of $35,000, who was a former employee of a presidential CMB board member’s business. Employment as presidential CMB deputy executive director, at an annual starting salary of $86,000, of a former presidential CMB contractor who was paid $23,268 to provide 3 months of writing and editing services. Employment as presidential CMB director of communications, at an annual starting salary of $69,000, of a former presidential CMB contractor who was paid $3,950 to provide 1 month of communications services. Engagement by a former presidential CMB board member of his personal attorney as presidential CMB legal counsel who is paid $60,000 annually, plus expenses.
Engagement of a presidential CMB contractor for two successive contracts totaling $44,000 plus expenses to advise on human resources policy and employment law who previously performed work for a presidential CMB board member’s organization. Engagement of a presidential CMB contractor for two successive contracts totaling $199,500 plus expenses to provide public outreach support. An owner of this contractor business served on the staff of a former presidential CMB board member when he was a member of the House of Representatives. Our scope was to obtain and review information on seven specific matters requested, as follows: 1. Were presidential CMB funds used to print reports for the 1998 World Exposition? We reviewed presidential CMB disbursements over $200. In addition, we interviewed presidential CMB officials about this matter and examined an invoice and personal checks for the printing of these reports. 2. Did congressional CMB videotapes have a narrow political distribution and did internal controls exist over the use of copyrighted material? We obtained a congressional CMB report of videotapes produced and the number of copies and their distribution. We also determined whether any internal controls existed over the use of copyrighted material used in the videos. In addition, we interviewed the congressional CMB executive director about this matter. 3. Were congressional and presidential CMB funds used in connection with political events? We reviewed all out-of-town travel and determined the purpose of the trips. 
Specifically, we reviewed for possible political purposes the travel of the congressional CMB co-chairman while he was serving as chairman of a political campaign from June 1999 through February 2000; the former presidential CMB co-chairman from June 1998 through May 1999, when he resigned from CMB to become a political campaign manager; and congressional and presidential CMB staff before political primaries in Iowa in January 2000 and New Hampshire and Arizona in February 2000. 4. Were four presidential CMB contracts for studies on census undercounting properly procured? We interviewed presidential CMB employees. We also reviewed the board-adopted policies for its Procurement of Temporary or Intermittent Services; the consultant’s contracts, including modifications and related invoices; the proposals from two consultants, including the selected consultant; and the consultant’s subsequent written report and media reports concerning this study. 5. Were protected census data accessed by two former congressional CMB employees? We interviewed (1) the congressional CMB executive director about this matter, (2) Bureau of the Census officials, and (3) the two former CMB employees who could possibly have violated Title 13 when they left CMB to work for the Republican National Committee on redistricting issues. We also obtained from the Bureau of the Census logs of CMB requests that included Title 13 information. 6. Were questions asked about political parties in a congressional CMB contractor focus study? We interviewed the congressional CMB co-chairman, another board member, the executive director, and the contractor about this matter. We viewed the two focus group videotapes and reviewed the consultant’s contract and related invoice, subsequent written report, questions that were asked, and other documents and media reports concerning the study. 7. Was there an oral confrontation by a congressional CMB contractor and how was the matter subsequently resolved?
We reviewed a local census office (LCO) site visit report prepared by the contractor, obtained a detailed e-mail describing the visit from the Bureau of the Census LCO manager concerned, and interviewed the congressional CMB executive director and one of the area managers who was present during the LCO meeting. We also reviewed the Employee Code of Conduct and the Field Personnel Guidelines for Field Visits, which were adopted after this matter. In addition, our scope was to audit all CMB out-of-town travel disbursements and all other disbursements over $200 from the first transactions in June 1998 through March 31, 2000, including any unusual transactions that came to our attention after this period. Disbursements from CMB appropriated funds would include congressional, presidential, and joint accounts with a focus on personnel, travel, and services. We also audited CMB printing and computer disbursements over $200 charged to the Congressional Printing and Binding appropriation. Further, our scope included evaluating the internal control environment that existed for both sides of CMB; identifying CMB financial policies, procedures, and internal controls and whether they were followed and effective; determining if CMB disbursements were incurred for official CMB government purposes; and identifying related-party relationships and transactions.
We did not evaluate the effectiveness of CMB program activities in monitoring the 2000 Census; conduct a search for CMB transactions incurred before March 31, 2000, but paid later; determine whether royalty payments should have been made by the congressional CMB on copyrighted material used in the production of videotapes; evaluate CMB official travel or other activity beyond the stated purpose of the trip; evaluate the congressional CMB contractor’s focus group methodology or questions that were asked; conduct physical inventories of property; determine the extent of personal telephone usage; determine the extent of long distance telephone usage on office telephones; quantify the aggregate impact of improper charges discussed in this report; audit transactions with inadequate support; or determine whether frequent flyer mileage earned was used for personal travel. To determine the amount and purpose of CMB disbursements, we obtained monthly financial reports of CMB disbursements that GPO processed and paid from CMB appropriated funds. We also obtained a list of CMB printing and binding costs charged to the Congressional Printing and Binding appropriation as permitted by CMB legislation. We examined documents supporting disbursements maintained by CMB; performed a financial analysis of activity; and conducted interviews with the two CMB executive directors, most current CMB employees, and some CMB board members and contractors. We also reviewed the minutes of the board meetings; semiannual reports required by CMB legislation; and, for the congressional CMB, an internal controls study and draft audit reports for fiscal years 1998 and 1999 issued by two CPA firms. In addition, we analyzed a GPO reconciliation of expenditures and appropriation accounts with the CMB Fund Balance with Treasury as of March 31, 2000.
To determine the internal control environment for both sides of CMB, we interviewed both executive directors and selected staff, examined documents to identify key factors that affect the control environment, and identified weaknesses. Based upon our Standards for Internal Control in the Federal Government, these factors include entity management operating style, commitment to competence, organizational structure, authority and responsibilities, human capital, and relationships with the Congress. To determine CMB financial policies, procedures, and internal controls and whether they were followed and effective, we obtained written documents adopted by the board and other written and verbal procedures unique to both sides of CMB. We examined specific internal controls of the congressional and presidential CMB using the Standards for Internal Control in the Federal Government. For each side, we considered control risk, reviewed control activities, identified information and communications, examined controls used to monitor disbursements, and identified weaknesses. To determine if CMB disbursements were for official CMB government purposes, we examined supporting documentation for personnel compensation and benefits, procurement of services, and travel. We used our Guide for Evaluating and Testing Controls Over Sensitive Payments, which encompasses activities vulnerable to potential abuse, such as compensation, procurement of services, and travel. To determine whether the supporting documentation we examined was adequate, we used the Standards for Internal Control in the Federal Government, which states that all transactions need to be clearly documented and the documentation should be readily available for examination. In addition, these standards require that records be properly managed and maintained.
For CMB disbursements, we considered the supporting documentation adequate when the support included written approvals of the transactions, vendor invoices, credit card statements, and evidence of receipt, such as packing slips or receiving forms. To determine what related-party relationships and transactions existed between CMB current and former employees or contractors, we applied the following criteria to disclose any related party: businesses owned, operated, or directed by CMB board members or employees; CMB transactions with family members of board members or employees; CMB board members and employees with financial relationships with each other; and CMB board members and employees with prior substantive business relationships with employees or contractors. To identify related-party transactions, we examined financial disclosure forms and personnel files; conducted interviews with both CMB executive directors, staff, and some contractors; and obtained written confirmations from all CMB board members. Our identification of related-party relationships and transactions does not imply any improprieties, but is in response to your request for this information. For guidance on related-party transactions and disclosures, we used Statement of Financial Accounting Standards No. 57, Related Party Disclosures, and Statement on Auditing Standards No. 45, Related Parties. We also reviewed the legislation establishing CMB and various laws and regulations applicable to personnel, travel, procurement, ethics, and other matters. As specific legal issues were identified, we also reviewed court cases, Comptroller General decisions, Attorney General opinions, and other authoritative literature. Our work was performed in Washington, D.C., and Suitland, Maryland, from April through August 2000 in accordance with generally accepted government auditing standards.
From its beginning, CMB expressed the view that it had characteristics of both the executive branch and the legislative branch and established policies and procedures it considered appropriate under the circumstances. However, in our opinion, CMB is an agency in the legislative branch and is subject to laws generally applicable to the legislative branch, unless a provision of law specifically provides otherwise. The legislation establishing CMB did not define its status or, with a few exceptions, specify laws that would generally govern its operations. With respect to certain administrative matters, CMB adopted policies that were not consistent with the laws generally applicable to agencies of either the executive or legislative branch. The verbatim minutes of the first CMB board meeting consistently reflect the view that CMB is not wholly within the executive or legislative branch and that some issues may result from the board being “in the middle somewhere.” For example, in discussing a code of conduct for board members, the board was described as “a hybrid, in some respects, it acts like an executive branch agency, and for others, like a congressional agency.” CMB minutes indicate that questions as to the application of federal law would be addressed case by case. However, as the board adopted policies governing administrative matters, such as leave and travel, we found no legal analysis supporting departures from the laws generally applicable to agencies of either the executive or legislative branches. Further, we have received no evidence from CMB that reflects a clear view of its legal status and what laws would generally govern its operations. 
As a federal governmental entity, CMB must be in either the executive or legislative branch. Because congressional leaders appoint some CMB board members and CMB reports to the Congress and exercises no executive or regulatory functions, CMB is not an entity in the executive branch. Further, there is ample legal precedent for considering entities like CMB to be in the legislative branch. For example, we concluded that the National Commission on Air Quality, which had presidential and congressional members and whose function was to study and report to the Congress, was in the legislative branch. Similarly, in a January 2, 1999, opinion, the Office of Legal Counsel, Department of Justice, concluded that the National Gambling Impact Study Commission, an entity similar to CMB, was part of the legislative branch. Even when the President appointed all members of a commission, we concluded that the commission was in the legislative branch if the commission had no executive powers and its only functions were to study and report to the Congress. Accordingly, we conclude that CMB is an agency of the legislative branch. The following are GAO’s comments on CMB’s letter dated September 22, 2000. 1. We disagree with the presidential CMB executive director that the charges to the executive assistant’s government travel card were properly reconciled and paid. We found that charges for lodging of a presidential CMB board member’s family and an erroneous reimbursement of travel expenses to that board member resulted in part because of weak reconciliation of credit card usage. These charges were all made to the executive assistant’s government credit card and were approved for payment by the presidential CMB executive director. These inappropriate charges to the federal government were found as a result of our audit rather than being detected by the presidential CMB reconciliation or other internal controls. 
Additionally, while the presidential CMB states that the individual’s credit card was used for simplicity’s sake, this practice is contrary to the agreement with the federal government’s credit card issuer and the CMB joint travel policy, which states that only the cardholder may use the card. Presidential CMB staff were placed in the difficult position of being personally liable for charges that others incurred to their federal account. This situation, coupled with a late reimbursement by the presidential CMB, caused the executive assistant’s account to be delinquent and subsequently cancelled by the credit card company, with a potentially adverse effect on her personal credit rating. 2. We disagree that $4,000 of the $4,500 of personal charges by the presidential CMB executive director was mistakenly put on an office MasterCard instead of a personal MasterCard last winter. This amount relates to hotel bills charged to his executive assistant’s government credit card in March 2000 while the presidential CMB executive director was on leave in Florida. These charges were part of a pattern of inappropriate personal use by the presidential CMB executive director of his own and three other employees’ government credit cards over a span of nearly 2 years. After his government credit card was cancelled, the presidential CMB executive director used three other employees’ government credit cards for personal travel and local restaurant bills from March 2000 through July 2000. In interviews, the executive director acknowledged that he had physical possession of the other employees’ credit cards when he used them at local restaurants. Although these personal expenses were not charged to or reimbursed by CMB, we found that the inappropriate personal use was not a mistake. 3. The legal counsels for the congressional and presidential CMBs commented on the lack of clarity concerning CMB’s entity status and refer to three letters on ethics issues to illustrate that lack of clarity. 
As this report discusses in appendix III, there is substantial legal precedent supporting the conclusion that CMB is an entity in the legislative branch. Further, we do not believe that the letters cited by the counsels contributed to a lack of clarity regarding CMB’s entity status. In fact, the letter from the Office of Government Ethics states that CMB would appear to be a legislative branch entity. Further, the letters from the Senate Select Committee on Ethics and the House Committee on Standards of Official Conduct each advised that CMB was not subject to the respective body’s Code of Conduct for reasons unrelated to CMB entity status.
Pursuant to a congressional request, GAO reviewed the financial management of the Census Monitoring Board (CMB), focusing on: (1) information on seven specific matters contained in congressional requests; (2) an audit relating to all CMB out-of-town travel disbursements and all other financial transactions over $200 from CMB's inception in June 1998 through March 31, 2000, which resulted in GAO auditing about 98 percent of the dollar value of total CMB disbursements; (3) CMB financial policies and practices, the internal control environments, and specific internal controls over disbursements, including those related to travel, personnel, and procurement of services; and (4) related-party transactions that met the criteria congressional members asked GAO to use. GAO noted that: (1) it found little documented evidence to substantiate possible improprieties in connection with seven specific matters identified in congressional request letters: (a) no presidential CMB funds were used to print reports for the 1998 World Exposition; (b) congressional CMB videotapes did not have a narrow political distribution; (c) no CMB funds were used for political travel; (d) presidential CMB contracts for studies on census undercounting were not improperly procured; (e) no evidence existed that former congressional CMB employees accessed protected census data; (f) two out of 27 questions in a congressional CMB contractor focus group study made some mention of political parties; and (g) some verbal confrontation occurred between a congressional CMB contractor and Bureau of the Census employees, and the contract was terminated shortly thereafter for a variety of reasons; (2) GAO's remaining efforts focused on CMB support for expenditures and an assessment of the internal control environment established to ensure disciplined financial operations; (3) the majority of CMB disbursements were generally supported and related to official business; (4) however, GAO found a pattern of 
significant CMB internal control weaknesses related to travel, personnel, and the procurement of services, some of which resulted in inappropriate and wasteful practices; (5) weak internal controls allowed unreconciled payroll, benefits, and annual leave accounts, weak contract accounting, and disbursements without required approvals to pay; (6) some CMB policies were inconsistent with federal law; (7) while weaknesses related to travel, personnel, and procurement existed for both sides, the congressional and presidential CMB operated in substantially different internal control environments; (8) the congressional CMB made a considerable effort to establish an internal control environment; (9) the presidential CMB operations were primarily characterized by weak or unenforced policies, oral authorizations, and poor records management, largely due to a lack of administrative leadership; (10) GAO identified transactions involving prior business relationships among CMB officials, including employer/employee or contractor affiliations; (11) GAO found 13 congressional and 11 presidential CMB related-party relationships involving about $1 million in salaries and contracts for each side; and (12) GAO's disclosure of related-party relationships and transactions does not imply any improprieties.
The EFV is the Marine Corps’ number-one priority ground system acquisition program and is the successor to the Marine Corps’ existing amphibious assault vehicle. It is designed to transport troops from ships offshore to their inland destinations at higher speeds and from farther distances, and to be more mobile, lethal, reliable, and effective in all weather conditions. It will have two variants—a troop carrier for 17 combat-equipped Marines and a crew of 3 and a command vehicle to manage combat operations in the field. The Marine Corps’ total EFV program requirement is for 1,025 vehicles. Figure 1 depicts the EFV system. The EFV’s total acquisition cost is currently estimated to be about $12.6 billion. In addition, the EFV accounts for a substantial portion of the Marine Corps’ total acquisition budget for fiscal years 2006 through 2011, as figure 2 shows. The EFV program began its program definition and risk reduction phase in 1995, and was originally referred to as the Advanced Assault Amphibious Vehicle. The Marine Corps’ existing assault amphibious vehicle was originally fielded in 1972 and will be over 30 years old when the EFV is fielded. Several Marine Corps studies identified deficiencies in the existing vehicle, including the lack of necessary lethality to defeat projected emerging threats. Despite efforts to extend the service life of the existing vehicle, Marine Corps officials stated that serious warfighting deficiencies remained. The studies concluded that the existing vehicle was unable to perform the type of combat missions envisioned by the Marine Corps’ emerging combat doctrine and that a new vehicle was needed. In September 2003, DOD officially changed the name of the new vehicle to the EFV, which was in keeping with the Marine Corps’ cultural shift from the 20th century force defined by amphibious operations to a 21st century force focusing on a broadened range of employment concepts and possibilities across a spectrum of conflict. 
The new vehicle is a self-deploying, high water-speed, amphibious, armored, tracked vehicle, and is to provide essential command, control, communications, computers, and intelligence functions for embarked personnel and EFV units. These functions are to be interoperable with other Marine Corps systems as well as with Army, Air Force, Navy, and NATO systems. The EFV transitioned to SDD in December 2000. The use of a knowledge-based acquisition approach was evident at the onset of the EFV program. Early in the program, at the start of program definition and risk reduction, the Marine Corps ensured that four of the five critical program technologies were mature. Although the fifth technology (the moving map navigation technology, which provides situational awareness) was not mature at this same time, it was sufficiently matured after the program transitioned to SDD. Furthermore, the EFV design showed evidence of being stable by the completion and release of design drawings. At critical design review, 84 percent of the drawings were completed and released. The program now has 100 percent of the EFV drawings completed. Program officials expect that only about 12 percent of the design drawings are likely to be changed in the future as a result of planned reliability testing. Since entering SDD in December 2000, the EFV program’s total cost has grown by about $3.9 billion, or 45 percent. Production quantities have been reduced by about 55 percent over fiscal years 2006-2011, thereby reducing the capabilities provided to the warfighter during this period. Cost per vehicle has increased from $8.5 million to $12.3 million. However, total quantities remain unchanged. During the same period, the EFV’s development schedule has grown by about 4 years, or 35 percent. Furthermore, a key requirement has been lowered. EFV reliability—a key performance parameter—has been reduced from 70 hours of continuous operation to 43.5 hours. 
Thus, overall EFV buying power has been reduced, for it will now take substantially more money than was estimated at the start of SDD to acquire the same number of vehicles later and more slowly, and with a reduced operational reliability requirement. Since entering SDD in December 2000 and holding the SDD critical design review in January 2001, the EFV program’s total acquisition cost has grown by about $3.9 billion, or 45 percent, to $12.6 billion. Figure 3 shows how costs have grown over time. While total quantities have not changed, production quantities over fiscal years 2006-2011 were reduced by about 55 percent, from 461 vehicles to 208. This means that the warfighter will get the capability the EFV provides more slowly. The EFV program has been rebaselined three times since SDD began, as shown in table 1. Because the rebaselines have occurred incrementally over time, the EFV program has not previously been required to submit a unit cost increase report to Congress. Congress in 1982 enacted the unit cost reporting statute, now codified at 10 U.S.C. 2433, which is commonly referred to as Nunn-McCurdy, after the congressional leaders responsible for the requirement. The statute required the Secretary of Defense to certify a program to Congress when unit cost growth in constant dollars reaches 25 percent above the most recent rebaseline cost estimate and to report to Congress when it reaches 15 percent. The National Defense Authorization Act for Fiscal Year 2006 made changes to Nunn-McCurdy. The primary change that affects the EFV program was the additional requirement to report 30 percent unit cost growth above the original baseline estimate approved at SDD. The EFV program recently reported that the EFV’s program average unit cost has increased by at least 30 percent above its original baseline estimate at SDD. 
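The threshold arithmetic behind this reporting outcome can be sketched in a few lines of code. This is an illustrative sketch, not part of the GAO analysis: only the $8.5 million and $12.3 million unit cost endpoints come from the report, and the three intermediate rebaseline values below are hypothetical numbers chosen to show how unit cost can grow in steps that each stay under the 15 percent reporting threshold while cumulative growth over the original SDD baseline exceeds 30 percent.

```python
def growth(current, baseline):
    """Fractional unit cost growth relative to a baseline estimate."""
    return (current - baseline) / baseline

def nunn_mccurdy_flags(original, rebaselines, current):
    """Apply the 15/25/30 percent thresholds described in the text.

    original    -- program average unit cost at the original SDD baseline
    rebaselines -- unit cost estimates adopted at each rebaseline
    current     -- latest unit cost estimate
    """
    most_recent = rebaselines[-1] if rebaselines else original
    flags = []
    if growth(current, most_recent) >= 0.25:
        flags.append("certify: >=25% over most recent baseline")
    elif growth(current, most_recent) >= 0.15:
        flags.append("report: >=15% over most recent baseline")
    if growth(current, original) >= 0.30:
        flags.append("report: >=30% over original SDD baseline")
    return flags

# Endpoints from the report ($8.5M at SDD, $12.3M now); the three
# intermediate rebaseline values are hypothetical. Each step is under
# 15 percent, yet cumulative growth is about 45 percent.
print(nunn_mccurdy_flags(8.5, [9.6, 10.9, 12.0], 12.3))
# → ['report: >=30% over original SDD baseline']
```

Under the pre-2006 rules, which measured growth only against the most recent rebaseline, none of these steps would have triggered a report; the fiscal year 2006 change to also measure against the original SDD baseline is what produced the EFV program's recent report.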
Although EFV program acquisition unit costs have increased by at least 30 percent since SDD began, no single increase between rebaselines has reached the 15 percent reporting threshold. Overall, the program schedule has grown by 48 months, or 35 percent, from December 2000 at the start of SDD to the most recent rebaselining in March 2005. This schedule growth has delayed the occurrence of key events. For example, the EFV program was originally scheduled to provide the Marine Corps with its initial operational capability vehicles in September 2006, but is now scheduled to provide this capability in September 2010. Details of key event schedule changes are shown in table 2. In 2005, the Marine Corps received approval to lower the EFV’s reliability requirement from 70 hours to 43.5 hours of operation before maintenance is needed. This decision was based on a revised analysis of the EFV’s mission profile and the vehicle’s demonstrated reliability. At the start of SDD, the EFV’s operational reliability requirement was 70 hours of operation before maintenance is needed. Program officials told us this 70-hour requirement was based on the EFV’s mission profile at the time, which called for a “do-all” mission for one 24.3-hour period of operation. The original reliability growth plan anticipated that this requirement would be met after initial operational test and evaluation, which was then planned for August 2007. In 2002, the Marine Corps’ Combat Development Command performed an independent analysis of the original 70-hour reliability requirement and determined that it would be very difficult to achieve. Additionally, the analysis determined that this requirement was excessively high when compared to similar types of vehicles. 
In fiscal year 2004, DOD’s Director of Operational Test and Evaluation (DOT&E) reported that overall EFV reliability remained a significant challenge because of the system’s comparative complexity and harsh operating environment. In 2004, the Marine Corps’ Combat Development Command reviewed the 70-hour requirement and recommended that it be reduced to 43.5 hours. According to program officials, the primary reason for the reduction to 43.5 hours was to more accurately depict the Marine Corps’ current mission profile for the EFV, which calls for a 12.5-hour mission day. The Joint Requirements Oversight Council approved the reliability reduction to 43.5 hours in January 2005. The program’s development schedule did not allow enough time to demonstrate maturity of the EFV design during SDD. The critical design review was held almost immediately after SDD began. Testing of early prototypes continued for 3 years after the decision to begin building the SDD prototypes. Test schedules for demonstrating design maturity in the integrated, full-system SDD prototypes proved optimistic and success-oriented, and were extended twice. After the schedules were extended, major problems were discovered in testing the prototypes. Conceptually, as figure 4 illustrates, SDD has two phases: a system integration phase to stabilize the product’s design and a system demonstration phase to demonstrate the product can be manufactured affordably and work reliably. The system integration phase is used to stabilize the overall system design by integrating components and subsystems into a product and by showing that the design can meet product requirements. When this knowledge is captured, knowledge point 2 has been achieved. Leading commercial companies use several criteria to determine that this point has been achieved, including completion of 90 percent of engineering drawings and prototype or variant testing to demonstrate that the design meets the requirements. 
When knowledge point 2 is reached, a decision review—or critical design review—is conducted to ensure that the program is ready to move into system demonstration. This review represents the commitment to building full-scale SDD prototypes that are representative of the production vehicle. The system demonstration phase is then used to demonstrate that the product will work as required and can be manufactured within targets. When this knowledge is captured, knowledge point 3 has been achieved. DOD uses this conceptualization of SDD for its acquisition policy and guidance. The EFV program met most of the criteria for SDD critical design review, which it held in January 2001, about 1 month after entering SDD. In particular, it had 84 percent of drawings completed and had conducted early prototype testing during the last year of program definition and risk reduction. However, this early prototype testing had not been fully completed prior to critical design review. Testing of the early prototypes continued for 3 years into SDD, well after the decision at the SDD critical design review to begin building the SDD prototypes. The program did not allow enough time to demonstrate maturity of the EFV design during SDD. The original SDD schedule of about 3 years proved too short to conduct all necessary planning and to incorporate the results of tests into design changes. Specifically, the original schedule did not allow adequate time for testing, evaluating the results, fixing the problems, and retesting to make certain that problems are fixed before moving forward. Testing is the main process used to gauge the progress being made when an idea or concept is translated into an actual product. Evaluation is the process of analyzing and learning from a test. 
The ultimate goal of testing and evaluation is to make sure the product works as intended before it is provided to the customer. Consequently, it is essential to build sufficient testing and evaluation time into program development to minimize or avoid schedule slippages and cost increases. Prior to entering SDD, during both the concept evaluation and the program definition and risk reduction phases, the EFV program conducted a variety of component and subsystem tests. This testing included an engineering-model and prototype-testing program, as well as modeling and simulation test programs. Early EFV testing also included early operational assessment tests on the initial prototype developed during program definition and risk reduction. During this phase, the EFV program demonstrated key aspects of performance, including the technological maturity to achieve the high water speed and land mobility needed for the EFV mission. In addition, a number of subsystem tests were conducted on key components of the EFV, including the main engine; water jets; propulsion drive train components; weapons; nuclear, biological, and chemical filters; track and suspension units; and nearly all of the vehicle electronics. Nevertheless, the SDD schedule was extended twice to ensure adequate system-level testing time. In November 2002, the program office extended the test schedule by 12 months for additional testing prior to low-rate initial production. According to program officials, this extension was necessary for several reasons. Lessons learned from testing the early prototypes necessitated design changes in the SDD prototypes, which delayed delivery and testing of the SDD prototypes. In addition, testing was taking longer than anticipated, additional time was needed for reliability testing, and more training was required to qualify crews prior to certain events. 
For example, the results of the early EFV firepower, water operations, and amphibious ship testing revealed the need for more testing. The schedule was delayed further to allow more time to demonstrate the reliability of the EFV using the SDD prototypes. In March 2003, DOT&E directed that the EFV test schedule be extended for yet another 12 months so that more developmental testing and more robust operational testing could occur before initial production. After the two schedule adjustments, testing of SDD prototypes revealed major problems in maturing the system’s design. Specifically, the program experienced problems with the hull electronics unit (HEU), bow flap, system hydraulics, and reliability. The HEU provides the computer processing for the EFV’s mobility, power, and auxiliary computer software configuration and for the command and control software application. Figure 5 shows the HEU. In November 2004, during integrated system-level testing on the SDD prototypes, there were major problems with the HEU. For example, the water-mode steering froze, causing the vehicle to be non-responsive to the driver’s steering inputs, and both the HEU and the crew’s display panel shut down during EFV operation. Consequently, testing ceased until the causes of the problems could be identified and corrections made. The program office conducted a root-cause analysis and traced the problems to both hardware and software sources. The program office made design changes and modifications to correct the problems, and testing resumed in January 2005, after about a 2-month delay. According to program officials, these changes and modifications were installed by May 2005 in the vehicles that will be used to conduct the operational assessment tests. Again, according to program officials, these problems have not recurred. However, the HEU has experienced some new problems in testing since then. 
For example, in June 2005, some status indicators on the crew’s display panel shut down during land operations and had to be rebooted. Program officials commented that corrective actions for HEU problems have been initiated and tested to ensure that the actions resolved the problems. We did not independently verify program officials’ statements about initiation and testing of corrective actions. The bow flap is a folding appendage on the front of the EFV that is hydraulically extended forward during EFV water operations. The bow flap provides additional surface area that is used to generate additional hydrodynamic lift as the vehicle moves through the water. Figure 6 shows the bow flap. Prior to entering SDD, major problems occurred with an earlier version of the bow flap in testing using early prototypes. Root-cause analysis traced these problems to bow flap overloading. Consequently, the bow flap was redesigned but was not retested on the early prototypes before the new design was installed on the SDD prototypes. Problems with the new bow flaps occurred during subsequent SDD prototype testing. For example, in September and October 2004, two bow flaps failed—one bent and one cracked. Again, the program office conducted a root-cause analysis, which determined that loading—while no longer excessive—was inappropriately distributed on the bow flaps. Following corrective action, tests were conducted in Hawaii during July to August 2005 to validate the load capacity of the new bow flap. These tests revealed that the design of the new bow flap needed some refinements in order to meet the operational requirement that the EFV be capable of operating in 3-foot significant wave heights. A program official indicated that the test results will be used to refine the design of the new bow flap. 
However, the refined bow flap design will not be tested in the operationally required 3-foot significant wave heights until initial operational testing and evaluation, well after the program enters low-rate initial production. Hydraulic systems are key components in the EFV. For example, they control raising and lowering the bow flap, engine cooling systems, marine steering, and troop ramps. Hydraulic system failures are one of the top reliability drivers in the EFV program. If the reliability requirement is to be achieved, the myriad hydraulic problems must be resolved. The EFV has encountered hydraulic system problems on both early and SDD prototypes. The top four hydraulic system problems are:

- Leaks from all sources, particularly leaks due to the loosening of fittings and connectors because of vibration during EFV operations.
- Various component mechanical failures experienced during EFV testing.
- Hydraulic fluid pressure spikes, particularly in the EFV’s transmission and pumps.
- Hydraulic fluid contamination by air, water, and particulates.

Program officials said that the program office has instituted a design/test/redesign process to identify deficiencies and implement corrections to increase vehicle reliability. According to program officials, this process brings together the program office, contractor, various subcontractor vendors of hydraulic components, and experts from industry and academia to address and correct hydraulic problems as they occur. Corrective actions thus far include:

- Leaks—better sealing of connections; installation of specialized, self-locking devices at connections most susceptible to vibration leaks; and replacement of rigid tubing with flexible hoses to absorb vibration.
- Component mechanical failures—redesigning, strengthening, and upgrading various parts.
- Hydraulic fluid pressure spikes—reducing gear shifting during EFV operations and installing devices to control pressure.
- Hydraulic fluid contamination—flushing hydraulic systems and instituting a variety of monitoring, maintenance, and inspection plans to maintain hydraulic fluid and component cleanliness requirements.

Program officials noted that corrective actions thus far have been tested to ensure that they resolved the problems, and have been installed on the SDD prototype vehicles. We did not independently verify this. Based on lower demonstrated reliability and problems with early program testing, the EFV’s reliability has not grown as planned. Expectations for reliability are now lower, as reflected in the recent reduction to the reliability requirement. When SDD began, the EFV was expected to demonstrate 48 hours between failures by September 2005. The actual demonstrated reliability was 28 hours between failures in August 2005. At the time of the low-rate initial production decision, now planned for December 2006, demonstrated reliability is projected to be 38 hours between failures. The original and current reliability growth curves for the EFV are shown in figures 7 and 8, respectively. In comparing the planned and actual reliability growth curves, it is clear that the actual test hours accumulated have been significantly less than planned. In fact, the original plan called for conducting 12,000 hours of testing by the original September 2005 production decision; according to the current plan, test hours will not reach this level until early 2008. The reduction in test hours is due, in part, to the other problems that occurred in testing. The accumulation of test hours is significant for reliability. In general, reliability growth is the result of an iterative design, build, test, analyze, and fix process. Initial prototypes for a complex product with major technological advances have inherent deficiencies. As the prototypes are tested, failures occur and, in fact, are desired so that the product’s design can be made more reliable. 
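Reliability growth of this design-build-test-fix kind is commonly tracked with a Duane (log-log) growth model, in which mean time between failures (MTBF) rises as a power of cumulative test hours. The report does not say which growth model the EFV program used, and it does not give the test hours at which each reliability figure was observed, so the sketch below uses hypothetical hour values; only the 28-hour MTBF figure matches the report, and the point is the general shape of the calculation.

```python
import math

def growth_exponent(t1, mtbf1, t2, mtbf2):
    """Fit the Duane growth exponent from two observed (hours, MTBF) points."""
    return math.log(mtbf2 / mtbf1) / math.log(t2 / t1)

def project_mtbf(t_ref, mtbf_ref, alpha, t_future):
    """Project MTBF at a future cumulative test time under Duane growth."""
    return mtbf_ref * (t_future / t_ref) ** alpha

# Hypothetical observation points: 20 hours MTBF after 3,000 test hours,
# 28 hours MTBF after 6,000 test hours (28 hours matches the report's
# August 2005 value; the test-hour totals are assumed).
alpha = growth_exponent(3000, 20.0, 6000, 28.0)
print(round(alpha, 2), round(project_mtbf(6000, 28.0, alpha, 12000), 1))
# → 0.49 39.2
```

This kind of projection is why the accumulation of test hours matters so much: with a fixed growth exponent, reaching a decision point with far fewer hours than planned directly lowers the reliability that can be demonstrated at that point.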
Reliability improves over time with design changes or manufacturing process improvements. The program office acknowledges that even with the changes in mission profile and reduction in the operational requirement, reliability for the EFV remains challenging. In addition, the most recent DOT&E annual report found that the EFV system’s reliability is the area of highest risk in the program. DOT&E has reviewed the EFV’s current reliability growth plan and believes that it is realistic but can only be validated during initial operational testing and evaluation in 2010. According to the program manager, an additional 15 months would have been needed for more robust reliability testing, production qualification testing, and training, after the program entered low-rate initial production in September 2005, as originally planned. The March 25, 2005, rebaselining extended the schedule by 24 months and postponed low-rate initial production until September 2006, which has now been extended to December 2006. While DOD’s December 2004 Program Budget Decision 753 served as the catalyst for this rebaselining, the program manager stated that he probably would have asked for a schedule extension of 15 months after entering low-rate initial production in September 2005, even if the budget decision had not occurred. DOD and Marine Corps officials verified that, although the program manager did not officially request this 15-month extension, he had been discussing an extension with them before the budget decision was issued. However, to the extent that the extra 9 months resulting from the budget decision prove unneeded for program management reasons, they will be an added cause for schedule and cost growth. Three areas of risk remain for demonstrating design and production maturity, which have potential cost and schedule consequences—risks to the EFV business case. 
First, while the EFV program has taken steps and made plans to reduce risk in the production phase, production risk remains in the program. Current plans are to enter low-rate initial production without requiring the contractor to ensure that all key EFV manufacturing processes are under control. Second, the EFV program will transition to initial production without the knowledge that software capabilities are mature. Third, two key performance parameters— reliability and interoperability—are not scheduled to be demonstrated until the initial operational test and evaluation phase in fiscal year 2010, about 4 years after low-rate initial production has begun. The program office has developed plans to resolve performance challenges and believes it will succeed. However, until the plans are actually implemented successfully, the EFV’s design and production maturity will not be demonstrated and the potential for additional cost and schedule increases remains. While the EFV program has taken steps and made plans to reduce risk in the production phase, production maturity risk remains in the program. Current EFV program plans are to enter low-rate initial production without requiring the contractor to ensure that all key EFV manufacturing processes are under control, i.e., repeatable, sustainable, and capable of consistently producing parts within the product’s tolerance and standards. Establishing such control is critical to ensuring that the EFV can be produced reliably and without unexpected production problems. In addition, DOD’s system acquisition policy provides that there be no significant manufacturing risks prior to entering low-rate initial production and that manufacturing processes be under statistical process control prior to starting full-rate production. Leading commercial firms rely on statistical process control to ensure that all key manufacturing processes are under control before they enter production. 
Statistical process control is a technique that focuses on reducing variations in manufactured parts, which in turn reduces the risk of entering production with unknown production capability problems. Reducing and controlling variability lowers the incidence of defective parts, and thereby of defective products, which may have degraded performance and lower reliability. Defects can also delay delivery and increase support and production costs by requiring reworking or scrapping. Consequently, prior to entering production, leading commercial firms collect and analyze statistical process control data. Leading commercial firms also use a measure of process control called the process capability index to measure both the consistency and the quality of output of a process. DOD’s acquisition policy applies a lower standard. It provides that there be no significant manufacturing risks prior to entering low-rate initial production and that manufacturing processes be under statistical process control prior to starting full-rate production. The EFV program is working toward the DOD standard. EFV program officials said that statistical process control will not be used to ensure that all key EFV manufacturing processes are under control prior to entering low-rate initial production. They stated that they have taken actions to enhance EFV production readiness. For example, they noted that one of the most important risk mitigating actions taken was ensuring that SDD prototypes were built using production-representative tooling and processes. Program officials also believe that production process maturity will be demonstrated by achieving repetitive schedule and quality performance during low-rate initial production. In addition, the program plans to collect statistical process control data during low-rate initial production to track equipment and machine performance and detect statistical shifts.
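The process capability index mentioned above can be made concrete with a short calculation. The sketch below is a generic illustration on made-up part measurements, not EFV production data; the sample values and tolerance limits (LSL/USL) are assumptions chosen for the example.

```python
import statistics

def process_capability(measurements, lsl, usl):
    """Compute Cp and Cpk for a sample of part measurements.

    Cp compares the width of the tolerance band to the process spread
    (consistency); Cpk also penalizes a process mean that has drifted
    off center (quality of output).
    """
    mu = statistics.mean(measurements)
    sigma = statistics.stdev(measurements)  # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)
    cpk = min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))
    return cp, cpk

# Illustrative measurements of a part with a nominal dimension of 10.00
# and tolerance limits of 9.90 (LSL) and 10.10 (USL) -- invented data.
parts = [10.01, 9.98, 10.02, 10.00, 9.99, 10.03, 9.97, 10.01]
cp, cpk = process_capability(parts, lsl=9.90, usl=10.10)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```

By construction, Cpk can never exceed Cp; a common industry convention treats an index of roughly 1.33 or higher as evidence that a process is capable of consistently producing parts within tolerance.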
The program believes that using statistical process control data in this manner will result in earlier detection of machine malfunctions. Program officials told us that once sufficient quantities of the EFV are produced and baseline statistical process control data collected, the results of the analyses of this data will be implemented for any production measurements that demonstrate process stability. The program office believes that this approach will allow for use of statistical process control for implementation of stable manufacturing processes during low-rate initial production. However, the program office does not plan to set and achieve a process capability index for the EFV production efforts. The actions taken by the program may help to mitigate some production risk. In fact, EFV’s plan to collect and use statistical process control data goes further than what we have found on most DOD weapon system programs. However, these actions do not provide the same level of confidence as having the manufacturing processes under statistical process control before production. The EFV program’s approach of foregoing such control increases the risk of unexpected production problems during manufacturing. This risk is compounded by the fact that plans call for reliability and interoperability, along with resolution of other technical problems, to be operationally tested and demonstrated during low-rate initial production, not before. Under current plans, the EFV program is at risk of entering low-rate initial production before software development capabilities are mature. Again, leading commercial firms ensure that software development capabilities are mature before entering production in order to prevent or minimize additional cost growth and schedule delays during this phase. Furthermore, DOD’s weapon system acquisition policy calls for weapon systems to have mature software development capabilities before they enter low-rate initial production. 
In assessing software capability maturity, commercial firms, DOD, and GAO consider the software capability maturity model developed by Carnegie Mellon University’s Software Engineering Institute to be an industry standard. This model focuses on improving, standardizing, and certifying software development processes, including key process areas that must be established in the software developer’s organization. The model is essentially an evolutionary path organized into five maturity levels:

Level 1, Initial—The software process is ad hoc and occasionally chaotic. Few processes are defined, and success depends on individual effort.

Level 2, Repeatable—Basic project management processes are established to track cost, schedule, and functionality. The necessary process discipline is in place to repeat earlier successes on projects with similar applications.

Level 3, Defined—The software process for both management and engineering activities is documented, standardized, and integrated into a standard process for the organization. All projects use an approved, tailored version of the organization’s standard process for developing and maintaining software.

Level 4, Managed—Detailed measures of the software process and product quality are collected. Both the software development process and products are quantitatively understood and controlled.

Level 5, Optimizing—Continuous process improvement is enabled by quantitative feedback from the process and from piloting innovative ideas and technologies.

The EFV program has had problems with maturing its software development capabilities. The EFV’s prime contractor, General Dynamics Land Systems (GDLS), which at the time had a level 3 maturity software capability, developed all software for the early EFV program. According to the program office, when the program entered SDD, responsibility for EFV’s software development was transferred to GDLS’ amphibious development division, General Dynamics Amphibious Systems (GDAMS).
GDAMS has a level 1 maturity software capability. Consequently, the SDD contract required GDLS to achieve a software development capability maturity level 3 for all EFV software contractors and subcontractors within 1 year of the contract award date, July 2001. In January 2002, the program extended this requirement by 1 year, until July 2003. Nevertheless, while GDAMS twice attempted to achieve level 3 software development capability maturity, it did not succeed. Program officials considered GDAMS’s inability to achieve an acceptable level of software development capability maturity a risk to the program. To mitigate this risk, in January 2004, the program manager began developing a risk mitigation plan. As part of this plan, representatives from the EFV program office, GDAMS, and Ogden Air Logistics Center’s 309th Software Maintenance Group—a certified level 5 maturity software development organization—formed a Software Partnership Working Group to address software development capability maturity issues. As of February 2006, EFV program officials were in the process of negotiating a memorandum of agreement with the 309th Software Maintenance Group to develop the EFV’s low-rate initial production software. The 309th will work in partnership with GDAMS as specified by the terms of the memorandum of agreement. Its involvement is to ensure that the EFV’s software development capability will be at the desired maturity level. However, the 309th Software Maintenance Group will not complete the software development for the EFV’s low-rate initial production version until September 2006. Furthermore, GDAMS does not plan to insert this software into the EFV vehicles until fiscal year 2008, well after low-rate initial production has begun. This means that the low-rate initial production decision will be made without the integration of mature software. Furthermore, the software itself will not be demonstrated in the vehicle until well into low-rate initial production.
While the program office believes that the level of software risk is acceptable, we have found that technology—including software—is mature when it has been demonstrated in its intended environment. While involving the 309th Software Maintenance Group helps to mitigate the risk of immature software development capability in the EFV program, it increases certain other risks. The memorandum of agreement distributes the responsibility for software development among the three participants. However, much of the responsibility for developing a working software package in an acceptably mature environment shifts from the prime contractor to the Marine Corps. The software will now become government-furnished equipment or information. In essence, the Marine Corps has now assumed much of the risk in the software development effort. If the software does not work according to the requirements, it will be incumbent upon the Marine Corps—not the prime contractor, GDLS—to correct the problems. Furthermore, if the integration of the government-furnished software into the vehicles creates additional problems, the Marine Corps could be responsible for corrections. Both of these situations could lead to cost and schedule growth, and thus increase risks to the program. Several EFV performance challenges are not yet fully resolved. Specifically, a key performance parameter—interoperability—cannot be properly demonstrated until initial operational testing and evaluation in fiscal year 2010, well after low-rate initial production has begun. Interoperability means that the EFV communication system must provide essential command, control, communications, and intelligence functions for embarked personnel and EFV units. In addition, the EFV communication system must be compatible—able to communicate—with other Marine Corps systems as well as with Army, Navy, Air Force, and North Atlantic Treaty Organization systems.
In order to demonstrate interoperability, the EFV must participate in operational tests that involve these joint forces. Another key performance parameter—reliability—has been problematic and still presents a significant challenge. It also is not scheduled to be demonstrated until initial operational testing and evaluation. Furthermore, the bow flap has been problematic and, while improved, still requires some design refinement and has not yet been successfully tested at its operational performance level. Program officials commented that they have developed plans to resolve remaining EFV performance challenges and are optimistic that these plans will be implemented effectively and testing successfully completed. However, there are no guarantees that this will actually happen. Consequently, the performance challenges remain risks to the program until they are fully resolved with effective solutions actually demonstrated. The EFV has encountered risks to its business case because of problems encountered in full-system testing, coupled with an SDD schedule that did not allow enough time for conducting the testing and learning from it. Using the lens of a knowledge-based business case, the start of SDD was sound on requirements and technology maturity (knowledge point 1). While design stability was judged to be attained at the critical design review (knowledge point 2) immediately after entering SDD, it appears that holding critical design review so soon was premature. The acquisition strategy did not provide the resources (time and money) necessary to demonstrate design maturity and production maturity (knowledge point 3). However, we do note that the EFV program is planning to do more with statistical process control than most other programs we have reviewed. 
In retrospect, the EFV program would have been more executable had the SDD phase allowed for completion of early prototype testing before holding the SDD critical design review and committing to building the SDD prototypes. Another lesson learned is that while it is necessary to demonstrate one knowledge point before a subsequent one can be demonstrated, this alone is not sufficient. Attaining one knowledge point does not guarantee the attainment of the next one. Rather, the acquisition strategy for any program must adequately provide for the attainment of each knowledge point even in programs, such as the EFV, which were in a favorable position at the start of SDD. The EFV program has put into place a number of corrective actions and plans to overcome and mitigate weaknesses in acquisition strategy. Nevertheless, design, production, and software development capability maturity have not yet been fully demonstrated and technical problems fully corrected. It is important for the business case for the EFV to remain valid in light of these changes and that the remainder of SDD adequately provide for the demonstration of design, production, and software development capability maturity before committing to production. While these problems must be acknowledged and addressed, the fact that the EFV program has had a number of sound features should not be overlooked. In this vein, the program can still be the source of lessons that DOD can apply to other programs. In particular, it is important that all of the elements of a sound business case be present at the start of SDD. While it is generally recognized that missing an early knowledge point will jeopardize the remaining ones, it must also be recognized that later knowledge points are not guaranteed even if early ones are achieved. If the acquisition strategy does not adequately provide for the attainment of all knowledge points, the estimates for cost and schedule will not have a sound basis. 
We are recommending that the Secretary of Defense ensure that: EFV design, production, and mature software development capabilities are demonstrated before Milestone C; adequate resources are available to cover such demonstration and provide for risks; and the business case for EFV (including cost and expected capability), after including the above, still warrants continued investment. We also recommend that the Secretary of Defense draw lessons learned from EFV and apply them to the Defense Acquisition University’s curriculum for instructing program executives, managers, and their staffs. Such lessons might include understanding that attaining one knowledge point does not guarantee the attainment of the next one; the importance of having a sound business case for each phase of development; the right time to hold a critical design review; and the importance of allowing sufficient time to learn from testing. In commenting on a draft of our report, DOD’s Acting Director for Defense Systems concurred with our recommendations. In doing so, DOD stated that the Department currently plans to assess the readiness of the EFV program for a low-rate initial production decision within a year. This assessment will review the maturity of the EFV design, including software, its production readiness for low-rate initial production, and its demonstrated capability, as well as program costs and risks. Continued investment in EFV will be based on that information. The full text of the department’s response is in appendix II. The Department notes that our best practices construct for production readiness is difficult to reconcile with its current acquisition production decision points. 
World class companies we have visited do, in fact, often have a limited production run that they use to manufacture a small number of production representative assets; however, they do not make a decision to invest in the tooling necessary to ramp up to full production until after those assets have been tested by the customer and their critical manufacturing processes are in control. DOD’s low-rate initial production decision reflects the decision to invest in all of the resources needed to achieve full-rate production. We believe this is too soon and that DOD would benefit from this lesson by focusing low-rate initial production on demonstrating the product and process, and by waiting until the full-rate production decision has been made to invest in additional resources, such as tooling, needed to ramp up. We are sending copies of this report to the Secretary of Defense, Secretary of the Navy, and other interested parties. We will also provide copies to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Paul L. Francis Director, Acquisition and Sourcing Management. To assess the current status of the EFV (particularly the status of the production decision), the factors that contributed to the current status, and future risks in the program, we interviewed key officials from DOD’s Director, Operational Test and Evaluation, the Office of the Secretary of Defense’s Program Analysis and Evaluation office, the U.S. Marine Corps, Isothermal Systems Research, Inc., in Washington, D.C., and the 309th Software Maintenance Group, in Ogden, Utah.
We also interviewed the Direct Reporting Program Manager for the EFV and the prime contractor, General Dynamics Land Systems, in Woodbridge, Virginia. We examined and analyzed pertinent program documentation, including the Selected Acquisition Reports; Test and Evaluation Master Plan; Developmental Testing Schedule; Budget Justification documents; Program Management Plan; Acquisition Strategy Plan; DOD’s Operational Test and Evaluation reports; Operational Requirements Documents; and the Software Development Plan. We relied on previous GAO work as a framework for knowledge-based acquisition. In addition to the contact named above, D. Catherine Baltzell, Assistant Director; Leon S. Gill; Danny Owens; Steven Stern; Martin G. Campbell; and John Krump made key contributions to this report.
The Marine Corps' Expeditionary Fighting Vehicle (EFV) is the Corps' number-one priority ground system acquisition program and accounts for 25.5 percent of the Corps' total acquisition budget for fiscal years 2006 through 2011. It will replace the current amphibious assault craft and is intended to provide significant increases in mobility, lethality, and reliability. We reviewed the program under the Comptroller General's authority to examine (1) the cost, schedule, and performance of the EFV program during system development and demonstration; (2) factors that have contributed to this performance; and (3) future risks the program faces as it approaches production. Although the EFV program had followed a knowledge-based approach early in development, its buying power has eroded during System Development and Demonstration (SDD). Since beginning this final phase of development in December 2000, cost has increased 45 percent. Unit costs have increased from $8.5 million to $12.3 million. The program schedule has grown 35 percent or 4 years, and its reliability requirement has been reduced from 70 hours of continuous operation to 43.5 hours. Program difficulties occurred in part because not enough time was allowed to demonstrate maturity of the EFV design during SDD. The SDD schedule of about 3 years proved too short to conduct all necessary planning and to incorporate the results of tests into design changes, resulting in schedule slippages. In addition, several significant technical problems surfaced, including problems with the hull electronic unit, the bow flap, and the hydraulics. Reliability also remains a challenge. Three areas of significant risk remain for demonstrating design and production maturity that have potential significant cost and schedule consequences. First, EFV plans are to enter low-rate initial production without requiring the contractor to demonstrate that the EFV's manufacturing processes are under control. 
Second, the EFV program will begin low-rate initial production without the knowledge that software development capabilities are sufficiently mature. Third, two key performance parameters--reliability and interoperability--are not scheduled to be demonstrated until the initial operational test and evaluation phase in fiscal year 2010--about 4 years after low-rate initial production has begun.
The BSA and its implementing regulations, in general, require financial institutions to maintain certain records and to file certain reports (e.g., currency transaction reports) that are useful in criminal, tax, or regulatory investigations, such as money laundering cases. Failure to file BSA reports can result in criminal and/or civil penalties, depending on the nature of the violation. Criminal investigations are the responsibility of IRS’ Criminal Investigation Division. Civil penalties are currently assessed by FinCEN, and the agency is to send each referral to IRS for review before any administrative or civil enforcement action is taken. Treasury has issued guidelines to assist regulatory agencies in determining which BSA violations warrant referral to Treasury for consideration of criminal and/or civil penalties. For example, according to the guidelines, violations customarily warranting referral include a pattern of failing to file currency transaction reports on applicable transactions. After receiving a referral, FinCEN’s role includes evaluating the circumstances of the alleged violation and determining whether some type of civil action, including seeking the imposition of a civil monetary penalty, should be taken against the person or financial institution. Generally, FinCEN disposes of the majority of its civil penalty cases with one of three courses of action: (1) close the case without contacting the subject of the referral, (2) issue a letter of warning to the subject institution or individual, or (3) assess a civil monetary penalty. The Director, FinCEN, makes the final decisions. Civil monetary penalties generally can range from $25,000 to $100,000 per willful violation. In addition, civil monetary penalties may be assessed for each negligent violation of the BSA up to $500. Appendix II provides more information about FinCEN’s procedures. 
Except for the delegation of responsibility to FinCEN in 1994, Treasury’s policies and procedures for processing civil penalty referrals for BSA violations generally have remained unchanged since our 1992 report. Treasury’s Office of Financial Enforcement was established in 1985 to, among other things, develop referrals of alleged civil violations of the BSA and make recommendations as to whether civil penalties should be assessed against noncompliant financial institutions and their officers, directors, employees, and individuals, and if so, the amounts of the penalties. Treasury’s Assistant Secretary for Enforcement was responsible for making the final decision to assess a penalty. In May 1994, the Assistant Secretary for Enforcement delegated civil penalty authority to FinCEN. Presently, civil penalty referrals are processed by FinCEN’s Office of Compliance and Regulatory Enforcement (OCRE). According to FinCEN, in processing civil penalty referrals, OCRE staff follow the same policies and procedures that existed before the 1994 delegation. Also, the number of staff processing civil penalty referrals has remained fairly constant, at about six, before and after the 1994 delegation of authority to FinCEN. FinCEN officials told us that the staff of Treasury’s Office of Financial Enforcement—the unit previously responsible for processing civil penalty referrals—was merged into OCRE in 1994. FinCEN officials noted, however, that none of OCRE’s six staff work on civil penalty referrals on a full-time or exclusive basis; rather, they spend about one-half of their time performing other mission functions and responsibilities. As a result of the merger, several staffing changes occurred. For example, four former Office of Financial Enforcement senior analysts who had worked on referral cases were transferred into other divisions within FinCEN, while four other FinCEN staff members, with no experience in administering the BSA, were transferred into OCRE. 
FinCEN officials told us that there have been several personnel departures during the past year, which have affected the management and expertise in this area. For example, OCRE’s chief and deputy chief left the agency. As of May 1998, these positions were still vacant. In the past, civil penalty cases have not been processed in a timely manner. That was the conclusion we reached in our 1992 report, which analyzed Treasury’s case inventories between 1985 and 1991. Our current work, which analyzed case inventory data provided by FinCEN for 1992 through 1997, shows that the problem of lengthy processing times is growing worse. For the period 1985 through 1997, data from Treasury’s Office of Financial Enforcement and/or FinCEN showed a total of 648 closed civil penalty cases. Of this total, 430 cases were closed during 1985 through 1991 (a 7-year period), and the remaining 218 cases were closed during 1992 through 1997 (a 6-year period). Our analyses show that relatively few cases have been closed in recent years, particularly after 1994. Case closures in each of the 3 most recent years, 1995 through 1997, dropped below 30 for the first time since 1985 (see fig. 1). Civil penalty cases closed represented 22 percent, 8 percent, and 13 percent, respectively, of FinCEN’s annual workloads in 1995, 1996, and 1997 (see fig. 2). During each of these 3 years, the number of cases closed was fewer than the number of referrals received, which represented a reversal of the trend in 1990 through 1994 (see fig. 3). For example, in 1997, 19 cases were closed while 34 referrals were received. In contrast, in 1990, 103 cases were closed while 65 referrals were received. For 1985 through 1991, Treasury’s data show that the average processing time to close a case was 1.77 years. Processing times for the 430 cases closed during this 7-year period ranged from 4 days to 6.44 years. 
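The processing-time figures in this section reduce to simple date arithmetic over each case’s referral and closure dates. A minimal sketch of the computation, using invented case dates rather than Treasury’s or FinCEN’s actual inventory data:

```python
from datetime import date

# Invented (referral_date, closed_date) pairs -- not actual case data.
cases = [
    (date(1992, 3, 1), date(1994, 6, 15)),
    (date(1993, 1, 10), date(1997, 2, 1)),
    (date(1995, 7, 4), date(1995, 7, 12)),
]

# Elapsed processing time for each case, in years.
durations_years = [(closed - opened).days / 365.25 for opened, closed in cases]
average = sum(durations_years) / len(durations_years)
print(f"average: {average:.2f} years; "
      f"range: {min(durations_years):.2f} to {max(durations_years):.2f} years")
```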
According to FinCEN’s data, the processing times have slowed during the more recent period, 1992 through 1997 (see fig. 4). Specifically, the average processing time to close a case was 3.02 years. Processing times for the 218 cases closed during this period ranged from 8 days to 10.14 years. For cases closed in each of the 4 most recent years, 1994 through 1997, figure 4 shows that average processing times were 3 years or higher, a threshold not reached in any of the previous years. “Officials at 2 of... agencies...told us that they believed—although it could not be proved or measured—that the lengthy processing times resulted in a decrease in enforcement efforts.... “We think it would be reasonable to assume that the effectiveness of any penalty as a deterrent to prevent future violations would be directly related to the length of time between the violation and the action taken. Given this assumption, lengthy processing times for civil penalty referrals could affect compliance with the Bank Secrecy Act. “Perhaps the most serious result of civil penalty cases remaining inactive for lengthy periods of time can be the expiration of the statute of limitations....” According to FinCEN’s data for the period January 1, 1992, through March 27, 1998, a total of 16 cases had one or more BSA violations that could not be pursued because the statute of limitations had expired. Our 1992 report, which analyzed civil penalty case inventories between 1985 and 1991, concluded that cases had not been processed in a timely manner. More recently, as shown in figure 4, the average processing times for civil penalty cases closed since 1994 are higher than the average times for previous years. There may be several reasons for this trend. Regarding recent years, for example, FinCEN officials mentioned staff inexperience and personnel departures as being reasons. Further, the officials noted a change in the kinds of cases being referred to FinCEN. 
Specifically, the officials said the majority of cases referred to FinCEN now involve nonbank financial institutions (i.e., casinos, check cashers, and currency exchangers). According to FinCEN officials, it generally is more difficult to obtain records and documentary evidence and to reconstruct transactions for these entities than for banks. In addition, we believe that insufficient management attention has been a significant cause of the lengthy processing times for civil penalty cases. First, FinCEN and its predecessor, Treasury’s Office of Financial Enforcement, did not (1) set timeliness goals for civil penalty case processing and (2) monitor or measure performance against those goals. FinCEN officials told us that the agency has never set timeliness goals for civil penalty processing. The officials also said that any such goals would prove arbitrary since each case varies significantly based on complexity, volume of transactions, and other factors. However, those goals can be valuable performance management tools for improving overall results and can take into account the differences in cases. Moreover, goal setting and performance measurement are widely considered to be good management practices, and these practices are reflected in the Government Performance and Results Act of 1993. Implementing such practices should help FinCEN (1) better identify the key factors that determine the timeliness of processing civil penalty cases and (2) find ways to streamline the management and processing of cases to reverse the trend of increasingly lengthy processing times. Second, FinCEN’s civil penalty tracking system, which resides on a stand-alone microcomputer, has not been an effective management tool, according to a 1990 report by Treasury’s Inspector General. 
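One test of a case-tracking database’s value as a management tool is whether it surfaces the age of each open referral and flags approaching statute-of-limitations deadlines. The sketch below is hypothetical throughout: the case identifiers, dates, record layout, and 180-day warning threshold are assumptions for illustration, not features of FinCEN’s actual system.

```python
from datetime import date

# Hypothetical open referrals: (case_id, referral_date, statute_expiration).
open_cases = [
    ("A-101", date(1994, 5, 1), date(1999, 5, 1)),
    ("A-102", date(1996, 2, 15), date(1998, 4, 1)),
    ("A-103", date(1997, 11, 3), date(2003, 11, 3)),
]

as_of = date(1998, 3, 1)  # fixed "as of" date for the example

# List cases in order of statute expiration, earliest deadline first,
# flagging any case within 180 days of expiration.
for case_id, referred, expires in sorted(open_cases, key=lambda c: c[2]):
    age_years = (as_of - referred).days / 365.25
    days_left = (expires - as_of).days
    flag = "  <-- EXPIRES SOON" if days_left <= 180 else ""
    print(f"{case_id}: age {age_years:.1f} years, "
          f"{days_left} days until statute expires{flag}")
```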
Generally, the tracking system has remained unchanged since 1990, even though the Inspector General reported that database improvements were needed to assist in prioritizing, managing, and controlling civil penalty cases. The Inspector General’s report noted, for example, that the database was not being used to track the age of referrals and cases nor to track statute of limitation expiration dates. Third, as previously mentioned, according to FinCEN’s data for the period January 1, 1992, through March 27, 1998, a total of 16 cases were affected by expiration of the statute of limitations. However, FinCEN did not close several of these cases until months or years after expiration of the statute of limitations. In fact, since our inquiries about the status of case processing, FinCEN has closed 15 of these 16 cases involving expiration of the statute of limitations. For example, FinCEN’s data for the 16 cases show the following:

One case had a statute of limitations expiration date in 1993, but FinCEN did not close the case until November 1995.

Two cases had statute of limitations expiration dates in 1995, and FinCEN closed one case in February 1998 and one case in March 1998.

Five cases had statute of limitations expiration dates in 1996, but FinCEN did not close the cases until February 1998.

Four cases had statute of limitations expiration dates in 1997, and FinCEN closed two cases in February 1998 and the other two cases in March 1998.

Four cases had statute of limitations expiration dates in either January 1998 or February 1998, and FinCEN closed one case in February 1998 and the other three cases in March 1998.

Section 406 of the MLSA directed the Secretary of the Treasury to delegate to appropriate federal banking regulatory agencies the authority to assess civil penalties for BSA violations. This statutory section further specified that the Secretary shall prescribe by regulation the terms and conditions that shall apply to any such delegation.
The intent of such delegation, as described in the MLSA’s conference report, is to increase efficiency by allowing the federal banking agencies to impose civil penalties directly rather than to make referrals to FinCEN. The conference report also noted that, after the delegation, FinCEN “would still be able to oversee the process and ensure that penalties are consistently imposed.” In February 1998, we reported to the Subcommittee that a notice of proposed rulemaking still had not been issued, and FinCEN had not established a projected issuance date. In April 1998, a senior FinCEN official provided us the status of the agency’s efforts substantially as follows: FinCEN has had numerous meetings with federal bank regulators to begin the process of delegating some or all of FinCEN’s civil penalty enforcement authority. Much progress has been made, but some serious issues are unresolved. One issue is whether violations will be enforced under BSA provisions or under the bank regulators’ general examination powers granted by Title 12 of the U.S. Code. According to FinCEN, the bank regulators may be less inclined to assess BSA penalties and may instead use their non-BSA authorities under the general examination powers of Title 12. FinCEN prefers that the BSA provisions be used to ensure consistency of interpretation and sanctions for similar violations. Another issue involves oversight or monitoring by FinCEN. The details of Treasury’s continued oversight responsibility for BSA penalties, even after the delegation, have not yet been worked out. Further, while not required by the MLSA, FinCEN is studying the possibility of also delegating BSA civil penalty authority to IRS, which conducts BSA compliance examinations of nonbank financial institutions. FinCEN and IRS have engaged in several discussions concerning such a delegation. As a result, IRS is currently studying the relevant policy and resource considerations. 
FinCEN’s current strategic plan indicates that delegation of civil penalty authority to the banking regulatory agencies may not occur before 2002. Except for the delegation of civil penalty authority to FinCEN in 1994, Treasury’s policies and procedures for processing civil penalty referrals for BSA violations generally have not changed since our 1992 report. Also, the number of staff processing civil penalty referrals has remained fairly constant, at about six, before and after the May 1994 delegation to FinCEN. However, FinCEN officials noted that over the past year, personnel departures—including OCRE’s chief and deputy chief—have affected management and expertise in this area. As of May 1998, these positions remained vacant. The problem of lengthy processing times for civil penalty cases has grown worse since our 1992 report. Overall, FinCEN’s data showed a smaller percentage of civil penalty cases being closed between 1992 and 1997 than between 1985 and 1991, even though the annual workload was smaller during the more recent years. Also, in the more recent years, the average processing times to close civil penalty cases were higher than in previous years. Among other reasons, insufficient management attention—as indicated by the absence of timeliness goals and monitoring, an ineffective civil penalty tracking system, and 16 cases that could not be pursued because the statute of limitations had expired—contributed to lengthy processing times in recent years. Goal setting and performance measurement are widely considered to be good management practices, and implementing such practices may help FinCEN focus its attention on better managing and processing civil penalty cases and reverse the trend of increasingly lengthy processing times. FinCEN’s current strategic plan indicates that delegation of civil penalty authority to federal banking regulatory agencies may not occur for another 3 or 4 years. Pending such delegation, FinCEN is still responsible for processing civil penalties. 
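The processing-time statistics discussed above are simple averages of the elapsed time from a case's referral to its closure. A minimal sketch of that calculation follows; the referral and closure dates are hypothetical, not FinCEN's actual case data.

```python
from datetime import date

def processing_years(referred: date, closed: date) -> float:
    # Elapsed processing time in years, using 365.25-day years.
    return (closed - referred).days / 365.25

# Hypothetical referral/closure dates -- not FinCEN's actual case data.
cases = [
    (date(1993, 3, 1), date(1996, 5, 15)),
    (date(1994, 7, 10), date(1997, 11, 3)),
    (date(1992, 1, 1), date(1995, 1, 1)),
]
times = [processing_years(r, c) for r, c in cases]
print(f"average: {sum(times) / len(times):.2f} years; longest: {max(times):.2f} years")
```

Averages computed this way over all cases closed in a period correspond to the report's figures of 1.77 years for 1985 through 1991 and 3.02 years for 1992 through 1997.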
To reduce the lengthy processing times associated with civil penalties, we recommend that the Acting Director, FinCEN, set average timeliness goals for evaluating and disposing of civil penalty cases, taking into account the varying complexity of the cases, and monitor the progress of managers and staff responsible for meeting those goals. We recognize that setting timeliness goals, by itself, may not necessarily lead FinCEN to resolve all the problems that may have contributed to the lengthy processing times for evaluating and disposing of civil penalty cases. However, setting and managing to meet such goals should help FinCEN better focus its attention on processing civil penalty cases and provide a means to determine what corrective actions might be needed to decrease processing times in the future. In a letter dated May 20, 1998, FinCEN’s Acting Director provided written comments on a draft of this report (see app. IV). The Acting Director concurred that greater or more diligent management oversight is needed to ensure that civil penalty cases are processed in an expeditious, yet thorough manner. To address the timeliness issue, the Acting Director noted that FinCEN has taken or has plans to take definitive steps, such as working with OCRE staff to identify individual training needs; assigning two non-OCRE employees the tasks of analyzing open civil penalty referrals, highlighting cases that warrant immediate attention, and providing oversight to ensure that progress continues on those referrals; and developing civil penalty referral procedures that include time lines and due dates. The Acting Director commented that FinCEN plans to establish strict time lines for the initial assessment of civil penalty referrals, where such guidelines are practicable and predictable. 
Regarding the adjudicative or disposition phase of case processing, the Acting Director said that FinCEN favored more diligent management oversight (e.g., case reviews by the OCRE Assistant Director) rather than the establishment of strict or arbitrary time lines. However, to provide further management oversight, the Acting Director said that FinCEN had recently reinstated the use of quarterly reports showing the status of BSA referrals, including the number of cases received and closed during the reporting period. Moreover, we note that, at the April 1, 1998, hearing held by this Subcommittee, FinCEN agreed to provide quarterly reports to the Subcommittee. Generally, we believe that the various steps or initiatives presented by the Acting Director, if fully implemented, collectively meet the substantive intent of our recommendation. As this report indicates, our principal concern is that insufficient management attention has been a significant cause of the lengthy processing time for civil penalty cases. Agency recognition of the need for greater or more diligent management oversight, including the use of timeliness goals, is a key to corrective action. Nonetheless, we still believe that FinCEN should consider opportunities for using timeliness goals as guides for managing and monitoring all phases of civil case processing, not just the initial case assessment phase. We are sending copies of this report to the Subcommittee’s Ranking Minority Member; the Chairman and Ranking Minority Member, House Committee on Banking and Financial Services; the Secretary of the Treasury; the Acting Director, FinCEN; and other interested parties. We will also make copies available to others on request. Major contributors to this report are listed in appendix V. Please contact me on (202) 512-8777 if you or your staff have any questions. 
The Chairman, Subcommittee on General Oversight and Investigations, House Committee on Banking and Financial Services, asked us for information regarding efforts of the Treasury Department’s Financial Crimes Enforcement Network (FinCEN) to process civil penalty referrals for violations of the Bank Secrecy Act (BSA). Generally, the request involved two objectives. The first was to update the BSA civil penalty case inventory and processing timeliness statistics that we presented in our 1992 report. The second was to determine the status of FinCEN’s efforts regarding a provision of the Money Laundering Suppression Act of 1994 (MLSA). More specifically, as agreed with the Chairman’s office, we focused our work on the following questions: How, if at all, has Treasury changed its policies and procedures for processing civil penalty cases since 1992? Based upon workload and related statistics, what was Treasury’s performance in processing civil penalty cases during calendar years 1992 through 1997? What is the status of FinCEN’s efforts to develop and issue a final regulation delegating the authority to assess civil penalties for BSA violations to the federal banking regulatory agencies, as required by the MLSA? Preliminarily, in addressing the Chairman’s request, we reviewed our February 1992 report and our subsequent congressional testimony in June 1992 before the Subcommittee on Oversight, House Committee on Ways and Means. Also, we reviewed a relevant 1990 report by Treasury’s Inspector General. In response to our inquiry, FinCEN officials told us that our 1992 report and the Inspector General’s 1990 report were the only previous studies conducted of BSA civil penalty processing. To address the first question, we interviewed officials in FinCEN’s Office of Compliance and Regulatory Enforcement (OCRE), and we reviewed relevant documentation on policies and procedures (see app. II). 
Also, we reviewed guidelines that Treasury issued to assist regulatory agencies in determining which BSA violations warranted referral for possible assessment of civil penalties. Further, we obtained information about the number of OCRE staff involved in processing BSA civil penalty cases. Regarding Treasury’s performance in processing BSA civil penalty cases, we reviewed and compared data for two time periods covering a total of 13 years—(1) calendar years 1985 through 1991 and (2) calendar years 1992 through 1997. In so doing, we developed statistical tables showing annual workload (i.e., beginning inventory plus referrals received), cases closed, processing times, closures by type of action taken, penalty dollar amounts, and referral sources (see app. III). For the more recent (1992 through 1997) of the two time periods, we selectively verified the data that FinCEN provided to us from its computerized civil penalty tracking system. Specifically, we judgmentally selected and reviewed 15 percent of the cases that were closed by type of action taken in this time period. In our judgmental selections, we included cases representing all three types of case-closure dispositions—(1) cases closed with no contact, (2) cases closed with a letter of warning, and (3) cases closed with a monetary penalty assessed. For each of the selected cases, we reviewed OCRE’s hard copy case files to verify that applicable data had been accurately input into the computerized civil penalty tracking system. Further, we checked the accuracy of the specific query statements that OCRE used in providing us requested data from the computerized civil penalty tracking system. Our verification efforts found three minor discrepancies in the data contained in FinCEN’s civil penalty tracking system. The correction of these discrepancies did not change the results of our analysis. 
Also, according to FinCEN, a total of 16 cases during the period January 1, 1992, through March 27, 1998, were affected by expiration of the statute of limitations. We did not independently verify this total, nor did we analyze these cases. Regarding the last question (delegation of civil penalty authority), we interviewed FinCEN officials to update the status of information presented in our February 1998 report to the Subcommittee’s Chairman and Ranking Minority Member and in our April 1998 testimony at a hearing held by the Subcommittee. Also, we reviewed FinCEN’s multiyear strategic plan, which briefly discusses the delegation issue. In May 1994, Treasury’s Assistant Secretary for Enforcement delegated BSA civil penalty authority to FinCEN. As a result, the Director of FinCEN is responsible for assessing civil penalties for BSA violations by banks and by certain nonbank financial institutions. Following is a description of the process of identifying and assessing penalties for the violations. FinCEN does not conduct BSA compliance examinations at either banks or nonbank financial institutions. Rather, such examinations are conducted by the following agencies: Compliance examinations of “banks” are conducted by the five federal bank supervisory or regulatory agencies: the Federal Deposit Insurance Corporation (FDIC), the Board of Governors of the Federal Reserve System (FRS), the Office of the Comptroller of the Currency (OCC), the Office of Thrift Supervision (OTS), and the National Credit Union Administration (NCUA). IRS’ Examination Division conducts compliance examinations of nonbanks. This category includes casinos; money transmitters; check cashers; currency exchangers; security brokers and dealers; issuers or redeemers of money orders, traveler’s checks, and other similar instruments; and individuals who attempt to evade the BSA’s reporting requirements. The Securities and Exchange Commission (SEC) conducts compliance examinations of securities brokers and dealers. 
If warranted by the results of their examinations, these agencies refer their preliminary findings to FinCEN for appropriate action and disposition. According to FinCEN officials, in addition to BSA violations referred by the various federal agencies, FinCEN also initiates civil investigations based on other sources, such as (1) voluntary disclosures from financial institutions or individuals; (2) formal advisories from IRS’ Detroit Computing Center, which processes currency transaction reports and other BSA-related information; and (3) reports of investigations from state and local law enforcement agencies. Initially, before any administrative or civil enforcement action is taken, FinCEN’s procedures call for sending each incoming matter to IRS’ Criminal Investigation Division for review of criminal potential. According to specified procedures, FinCEN should not proceed with evaluating a BSA civil penalty referral until IRS or a U.S. Attorney’s Office provides written approval for such action. By agreement, IRS has 120 days to complete its review of the matter. FinCEN officials told us that FinCEN generally receives a clearance to proceed within 30 days. Further, the officials noted that, in cases requiring special or expeditious attention, FinCEN contacts a designated IRS official by telephone to obtain clearance from the Criminal Investigation Division. After FinCEN receives a clearance from IRS, the matter is assigned a formal case number and given to a financial enforcement specialist within FinCEN’s OCRE. The duties of the financial enforcement specialist are to conduct a preliminary review of the information presented in the referral and, if needed, to contact other sources to develop further information on the circumstances of the violation and/or the subject of the referral. 
According to FinCEN, these sources may include one or more of the following: the law enforcement or regulatory agency that discovered and referred the alleged BSA violations to FinCEN, any law enforcement or regulatory authority that has jurisdictional concerns or relevant information on the subject, the financial institution’s primary regulator, IRS’ Detroit Computing Center (to obtain BSA records and background on the subject), the local U.S. Attorney’s Office or IRS office, and the financial institution or person who is the subject of the alleged BSA violations. According to FinCEN, the financial enforcement specialist is to consider the results of any internal or external audits, any corrective action taken, the institution’s written compliance program and training and instructional materials, and other information relevant to the questioned transactions. Also, FinCEN noted that, to obtain a fuller perspective on the alleged BSA violation, the specialist may ask for and review relevant information that goes beyond just the specific transactions cited in the referral. That is, the specialist may review other account and transaction activity information regarding the subject institution or individual. On the basis of the information in the referral and that developed by OCRE, the financial enforcement specialist is to recommend a course of civil or administrative action to the Assistant Director, OCRE, who reviews and decides whether to approve the recommended action. The case is disposed of with one or a combination of the following administrative or civil actions. The federal regulator may issue a cease and desist order or other sanction. The subject financial institution may make corrections to any deficient BSA systems and/or backfile any delinquent BSA reports. FinCEN may close the case without further action or contact with the subject institution or individual. FinCEN may issue a letter of warning. FinCEN may assess a civil monetary penalty. 
FinCEN officials told us that, if a civil monetary penalty seems appropriate, FinCEN grants the subject institution or individual an opportunity to dispute the allegations and offer a defense of the alleged actions. The officials added that financial institutions and individuals are encouraged to submit any available mitigating evidence in advance of BSA case settlement negotiations with FinCEN. Also, the officials noted that FinCEN’s final disposition of a BSA case, including the dollar amount of the civil penalty, is to be determined by considering the following factors: the severity, volume, and longevity of the BSA violations; the subject’s overall BSA compliance program; self-discovery and acknowledgment of the BSA violations to Treasury versus external discovery and notification; cooperation with FinCEN and other applicable agencies; prompt correction of the BSA deficiencies that caused the violations; the outcomes of any prior or subsequent BSA compliance examinations; and any other valid aggravating or mitigating factors, including the subject’s ability to pay the BSA penalty. According to FinCEN officials, due to the complex nature of BSA cases, FinCEN does not use rigid formulas to determine the appropriate BSA penalty. Rather, all such decisions are to be made on a case-by-case basis and are to reflect consideration of the factors presented above. Also, FinCEN officials noted that the agency does not set timeliness goals for processing civil penalty cases. According to FinCEN, if a subject refuses to settle the case, FinCEN formally assesses the maximum BSA civil monetary penalty allowed by law for the violations. The matter is then to be referred for internal legal review. Thereafter, if deemed warranted, procedures call for FinCEN to submit the matter to the Department of Justice’s Civil Division to seek collection of the unpaid penalty. 
After FinCEN assesses a BSA civil penalty, the government has 2 years to initiate collection litigation against the subject. FinCEN officials told us that, to avoid litigation and exposure to the maximum penalty allowed by law, subjects of a BSA action are almost always amenable to settling their BSA liability with FinCEN. This appendix presents various tables of BSA penalty statistics for calendar years 1985 through 1997. More specifically, the tables show annual workload (i.e., beginning inventory plus referrals received) and cases closed (table III.1); processing times (tables III.2, III.3, and III.4); closures by type of action taken (table III.5); penalty dollar amounts (table III.6); and referral sources (table III.7). [Tables III.1 through III.7 are not reproduced here. Table III.3 shows processing times, by time period, for the 648 civil penalty cases that were closed during calendar years 1985 through 1997; table III.4 shows the average and range of processing times, by type of action taken, for those 648 cases.] Geoffrey R. Hamilton, Senior Attorney, contributed to this report. 
Pursuant to a congressional request, GAO reviewed the Department of the Treasury's Financial Crimes Enforcement Network's (FinCEN) efforts to process civil penalty referrals for violations of the Bank Secrecy Act (BSA), focusing on: (1) whether Treasury has changed its policies and procedures for processing civil penalty cases since 1992; (2) Treasury's performance in processing civil penalty cases during calendar years 1992 through 1997; and (3) the status of FinCEN's efforts to develop and issue a final regulation delegating the authority to assess civil penalties for BSA violations to the federal banking regulatory agencies, as required by the Money Laundering Suppression Act. GAO noted that: (1) except for the May 1994 delegation to FinCEN, Treasury's policies and procedures for processing civil penalty cases generally have not changed since 1992; (2) also, the number of staff processing civil penalty cases has remained fairly constant, at about six, before and after the May 1994 delegation to FinCEN; (3) the problem of lengthy processing times for civil penalty cases is growing worse; (4) for example, according to FinCEN's data for cases closed in calendar years 1985 through 1991, the average processing time to close a case was 1.77 years, and the most lengthy time was 6.44 years; (5) in comparison, FinCEN's data for calendar years 1992 through 1997 indicate an average processing time of 3.02 years, and the most lengthy time was 10.14 years; (6) for cases closed in the 2 most recent years, 1996 and 1997, the average processing times were 3.57 years and 4.23 years, respectively; (7) lengthy processing can negatively affect the public's perception of the government's efforts to enforce the BSA, thereby lessening the credibility and deterrent effects of the act's provisions; (8) another result is that the 6-year statute of limitations for BSA civil penalties could expire; (9) according to FinCEN's data, for the period January 1, 1992, through March 27, 1998, a
total of 16 cases had one or more BSA violations that could not be pursued because the statute of limitations had expired; (10) insufficient management attention is a significant cause of the lengthy processing times for civil penalty cases; (11) FinCEN officials told GAO, for example, that the agency has never set timeliness goals for processing civil penalty cases; (12) FinCEN has issued neither a notice of proposed rulemaking nor a final regulation to delegate civil penalty assessment authority to the banking regulatory agencies; (13) FinCEN officials told GAO they have been working with the federal banking regulatory agencies for some time to devise an appropriate plan for delegating civil penalty assessment authority, but some issues still required resolution; (14) FinCEN's current strategic plan indicates that such delegation may not occur before 2002; and (15) for several more years, FinCEN could still be responsible for processing civil penalty referrals.
Most states and counties provide some child welfare services directly and provide others through contracts with private agencies, where caseworkers provide residential treatment and family support services as well as reunification and adoption services. The role and level of assistance that private child welfare agencies provide varies by state; in Illinois, for example, approximately 80 percent of child welfare services are reported to be provided through the private sector. Although public and private child welfare agencies face different financial constraints and use different personnel guidelines, national survey data confirm that both state and private child welfare agencies are experiencing similar challenges recruiting and retaining qualified caseworkers. For instance, turnover of child welfare staff—which affects both recruitment and retention efforts—has been estimated at between 30 percent and 40 percent annually nationwide, with the average tenure for child welfare workers being less than 2 years. Evidence from a national child welfare workforce study indicates that fewer than 15 percent of child welfare agencies require caseworkers to hold either bachelor’s or master’s degrees in social work, despite several studies finding that Bachelor’s of Social Work (BSW) and Master’s of Social Work (MSW) degrees correlate with higher job performance and lower turnover rates among caseworkers. Further evidence suggests that the majority of credentialed social workers are not employed in child or family service professions; instead, they choose professions in mental health, substance abuse prevention, rehabilitation, and gerontology. 
Nevertheless, child welfare caseworkers, assisted by their supervisors, are at the core of the child welfare system, investigating reports of abuse and neglect; coordinating substance abuse, mental health, or supplemental services to keep families intact and prevent the need for foster care; and arranging permanent or adoptive placements when children must be removed from their homes. In some agencies, caseworkers perform multiple functions from intake to placement on any given case; in others, they are specialized in areas such as investigations, reunification/family preservation, and adoptions. The primary role of supervisors is to help caseworkers perform these functions, thereby meeting the needs of families and carrying out the agency’s mission. Some functions of the child welfare supervisor include assigning cases, monitoring caseworkers’ progress in achieving desired outcomes, providing feedback to caseworkers in order to help develop their skills, supporting the emotional needs of caseworkers, analyzing and addressing problems, and making decisions about cases. In addition, given the challenges agencies face in recruiting and retaining child welfare workers, some supervisors provide direct assistance to caseworkers by taking on some of their cases. The federal government’s primary connection to the child welfare workforce has been through its funding of child welfare training programs as they relate to the provision of child welfare services. ACF at HHS is responsible for the administration and oversight of the approximately $7 billion in federal funding allocated to states for child welfare services. As part of this allocation, ACF provides matching funds for the training and development of child welfare caseworkers through Title IV-E of the Social Security Act. 
Title IV-E authorizes partial federal reimbursement— 75 percent—of states’ training funds to implement training programs for current child welfare staff and to enhance the child welfare curriculum of undergraduate and graduate social work programs to better educate and prepare potential caseworkers. This funding may also be used for curriculum development, materials and books, support for current workers to obtain a social work degree, and incentives to induce entry to the child welfare field. During fiscal year 2001, 49 states received $276 million in Title IV-E training reimbursements. These reimbursements ranged from a low of approximately $1,400 in Wyoming to a high of more than $59 million in California, with the median reimbursement approximating $3.1 million. In addition, ACF’s Children’s Bureau manages six discretionary grant programs through which it funds various activities related to improvements in the child welfare system. Each of these programs receives a separate annual appropriation from the Congress. One of these programs—the Child Welfare Training Program, authorized by Section 426 of Title IV of the Social Security Act—awards grants to public and private nonprofit institutions of higher learning to develop and improve the education, training, and resources available for child welfare service providers. This is the only program of the six with a specific emphasis on staff training; however, in fiscal year 2002, it received the second smallest share—9 percent—of the Children’s Bureau’s total discretionary funds (see fig. 1). In 2000, ACF began a new federal review system to monitor states’ compliance with federal child welfare laws. Under this system, ACF conducts CFSRs, assessing states’ performance in achieving the goals of safety, permanency, and child and family well-being—three goals emphasized in ASFA. 
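The 75 percent Title IV-E training match described above fixes the split between the federal reimbursement and the state share of qualifying training costs. The arithmetic can be sketched as follows; the $4 million figure is hypothetical, not drawn from any state's actual claim.

```python
TITLE_IVE_TRAINING_MATCH = 0.75  # federal share of qualifying training costs

def shares(total_training_cost: float) -> tuple[float, float]:
    # Returns (federal reimbursement, state share) under the 75 percent match.
    federal = total_training_cost * TITLE_IVE_TRAINING_MATCH
    return federal, total_training_cost - federal

# A hypothetical $4 million qualifying training program:
federal, state = shares(4_000_000)
print(f"federal: ${federal:,.0f}; state: ${state:,.0f}")  # federal: $3,000,000; state: $1,000,000
```

By the same arithmetic, the $276 million in fiscal year 2001 reimbursements corresponds to roughly $368 million in total qualifying state training expenditures.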
The CFSR process involves a state self-assessment and an on-site review by a joint team of federal and state officials to assess states’ performance on assessment measures such as timely investigations of maltreatment and caseworker visits with families. States that have not met the standards are required to develop a program improvement plan (PIP) and can face the withholding of federal funds should they fail to develop a plan or fail to take the specified corrective actions. As of December 1, 2002, ACF had completed and documented its reviews for 27 states. In addition to these reviews, ACF provides assistance to states via its 10 resource centers, all of which have different areas of expertise, such as organizational improvement, legal and judicial guidance, and child welfare information technology. The primary goal of these centers is to help states implement federal legislation intended to ensure the safety, well-being, and permanency of children who enter the child welfare system, to support statutorily mandated programs, and to provide services to discretionary grant recipients. These centers conduct needs assessments, sponsor national conference calls with states, collaborate with other resource centers and agencies, and provide on-site technical assistance and training to states. States may request specific assistance from the centers; however, ACF sets the centers’ areas of focus and priorities, and no one center focuses specifically on recruitment and retention issues at this time. Figure 2 shows the major channels through which federal dollars can be used for staff development. Members of the current and previous Congress have introduced proposals to expand federal funding to combat the recruitment and retention challenges that child welfare agencies face. As of March 26, 2003, the Congress was considering H.R. 14 and S. 
342, each named the “Keeping Children and Families Safe Act of 2003,” which contain provisions to improve the training of supervisory and nonsupervisory workers; improve public education relating to the role and responsibilities of the child protective system; and provide procedures for improving the training, retention, and supervision of caseworkers. The Congress is also currently considering S. 409 and H.R. 734, bills that would provide federal loan forgiveness to social workers who work for child protective agencies and have obtained their bachelor’s or master’s degrees in social work. As a tool to increase retention, both of these bills tie education loan repayment to tenure, such that the longer the caseworker remains with the agency, the greater the share of the loan that is repaid. These bills would apply to caseworkers in public and private child welfare agencies operating under contract with the state. Child welfare agencies face a number of challenges recruiting and retaining workers and supervisors. Public and private agency officials in all four of the states we visited struggled to provide salaries competitive with those in comparable fields, such as teaching. According to these officials, they lose both current workers and potential hires to these fields, which pay higher wages and offer safer and more predictable work environments. National salary data, though somewhat broad in how they define certain occupations, confirm that child and family caseworkers earn less than educators. For example, one county official in Texas said that teachers now earn starting salaries of about $37,000 while entry-level caseworkers earn about $28,000 annually, a difference of about 32 percent. 
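The 32 percent figure cited above is the salary gap expressed relative to the caseworker salary. A quick check of that arithmetic, using the two approximate Texas figures quoted above:

```python
def pct_gap(higher: float, lower: float) -> float:
    # Percentage by which `higher` exceeds `lower`, relative to `lower`.
    return (higher - lower) / lower * 100

teacher, caseworker = 37_000, 28_000  # approximate Texas starting salaries cited above
print(f"{pct_gap(teacher, caseworker):.0f}%")  # 32%
```

Note that the same $9,000 gap would read as about 24 percent if expressed relative to the teacher salary instead.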
Caseworkers we interviewed in each state also cited administrative burdens, such as increased paperwork requirements for each child in a case; a lack of supervisory support; and insufficient time to participate in training as issues impacting both their ability to work effectively and their decision to stay in the child welfare profession. These issues were mentioned by both public and private agency staff in all four states, where some caseworkers handled double the number of cases recommended by independent child welfare organizations. Former child welfare workers also identified these issues in exit interview documents we reviewed. In addition to retirement and other personal reasons staff chose to leave their positions, low salaries and high caseloads were among the factors affecting child welfare workers’ decisions to sever their employment. Public and private agencies we visited in all four states struggled to provide salaries competitive with those in comparable occupations and encountered difficulty retaining staff due to salary gaps within the profession of child welfare. According to our analysis of 585 exit interviews completed by staff who severed their employment, 81 cited low pay as one of their reasons for leaving. In addition, according to agency officials in all four states, they consistently lose both current workers and potential hires to higher-paying professions, such as teaching. The Bureau of Labor Statistics’ national wages survey reports that elementary and middle school teachers earn, on average, about $42,000 annually while social workers earn about $33,000. Furthermore, one California private agency reported that foster care caseworkers with MSWs who worked in group residential care facilities, which provide structured living arrangements and treatment services for children with complex needs, earned from $5,000 to $30,000 less than school counselors, nurses, and medical and public health social workers. 
Other states also report significant wage disparities within the child welfare profession. One study in South Carolina found that salaries for public agency caseworkers were almost double those of direct care workers in private agency residential programs. Additionally, according to labor union representatives in Illinois, public agency caseworkers there earn considerably more than staff in private child welfare agencies, and union officials at the national level attribute this wage gap to their lobbying efforts. In addition, low salaries—because they often contribute to limited applicant pools—can make it particularly difficult for agencies to recruit child welfare staff in certain geographical areas and to serve bilingual clients. For example, a New York State study of turnover among caseworkers from January to December 2001 showed that small counties near cities, in particular, have more difficulty recruiting staff because of higher salaries in surrounding areas. Additionally, in Texas, for example, officials said that counties in rural areas with larger Spanish-speaking and Native American populations do not pay adequate salaries to successfully recruit qualified bilingual staff or staff who are sensitive to local cultures. State officials in Illinois and California echoed these concerns. Furthermore, according to public agency caseworkers in Texas, their salaries do not reflect the risks to personal safety they face as part of their work. These caseworkers told us that given the safety risks they are exposed to daily, they should be given hazardous duty pay similar to workers in other high-risk professions. According to a national study by the American Federation of State, County, and Municipal Employees (AFSCME), a union representing primarily government employees including child welfare caseworkers throughout the country, caseworkers routinely deal with high levels of risk.
Specifically, AFSCME researchers found that more than 70 percent of front-line caseworkers had been victims of violence or threats of violence in the line of duty. In addition, in a peer exit interview process conducted in one state we visited, 90 percent of its child protective services employees reported that they had experienced verbal threats; 30 percent experienced physical attacks; and 13 percent were threatened with weapons. Although many of the caseworkers and supervisors we interviewed in each state told us they were motivated by their desire to help people, protect children, work with families, and potentially save lives, they also told us that workplace issues such as high caseloads, administrative burdens, limited supervision, and insufficient time to participate in training reduce the appeal of child welfare work, making it difficult for staff to stay in their positions. In each of the four states we visited, the agency’s inability to retain staff has contributed to existing unmanageable caseloads. CWLA suggests a caseload ratio of 12 to 15 children per caseworker, and COA suggests that caseloads not exceed 18 children per caseworker. However, in its May 2001 report, the American Public Human Services Association (APHSA) found that caseloads for individual child welfare workers ranged from 10 to 110 children, with workers handling an average of about 24 to 31 children each (see fig. 3). Managers we interviewed in California confirmed this, stating that caseworkers often handle double the recommended number of cases. Furthermore, caseworkers and supervisors we interviewed in the four states we visited told us that heavy workloads encourage workers to leave for other careers that they perceive as requiring less time and energy. For instance, caseworkers in Texas told us that former co-workers left the field to go into teaching, in part, because of the more appealing work schedule, including seemingly shorter hours and holidays and summers off.
Also, caseworkers in all states we visited emphasized concerns about the increasing complexity of cases—more cases involve drug and alcohol abuse and special needs children, in particular. In the exit interview documents we reviewed, 86 out of 585 child welfare workers identified high caseloads as a factor influencing their decision to leave. One former private agency caseworker in Delaware reported in an exit interview that, although caseloads were manageable, the complexity of each case was a problem. In addition, one former county worker in California said that cases are becoming increasingly difficult, and caseworkers are no longer able to do “social work.” This caseworker also said that the amount of work and stress is endless and limits the amount of time she has to perform her job well. Furthermore, caseworkers and supervisors in the four states we visited told us that overwhelming administrative burdens, such as paperwork, take up a large portion of their time, with some estimating between 50 percent and 80 percent. Some also said that these administrative burdens were factors influencing their decisions to seek other types of employment. According to two labor union representatives in California, caseworkers often have to work overtime to complete their paperwork, but instead of being compensated in salary for their overtime, they are given days off. The representatives said, however, that many caseworkers could not afford to take time off because paperwork continues to mount in their absence. Caseworkers in Illinois, for example, told us that they are required to complete more than 150 forms per child in their caseload. Such requirements are multiplied as caseloads increase. One study of the child welfare system reported that part of the administrative burden child welfare workers face also stems from the time they must spend in court as a result of requirements of ASFA. 
The authors said that child welfare workers frequently mentioned that the earlier and more frequent court hearings that ASFA requires mean additional responsibilities for them. Furthermore, in exit interview documentation we reviewed, workers expressed frustration with these burdens, with some saying that they spent insufficient amounts of time with families due to paperwork, in particular, and that more clerical staff is needed to assist with documentation. One caseworker in a California county indicated that more than 80 percent of her job was administrative and that it was impossible to meet all administrative requirements and do a quality job at the same time. Officials and caseworkers in all of the states we visited also expressed concerns about the quality of supervision, with most indicating that supervisory support either motivated caseworkers to stay despite the stress and frustration of the job or that lack of supervisory support was a critical factor in their decision to leave. Although challenging, two critical functions of child welfare supervisors are to recognize and respond to the needs and concerns of caseworkers and to provide them with direction and guidance. However, caseworkers we visited said that their supervisors are often too busy to provide the level of supervision needed. In Kentucky, workers told us that the inaccessibility of their supervisors negatively impacted their effectiveness and morale. Furthermore, one Texas state official told us that because of high turnover, caseworkers with only 3 years of experience are commonly promoted to supervisory positions. According to tenured supervisors there, this advanced promotion track has caused additional problems. Some newly promoted supervisors have requested demotions because they feel unprepared for the job requirements, and the caseworkers they supervise have complained of poor management and insufficient support. 
Our analysis of exit interview documents revealed that inadequate supervision was not among the top five reasons caseworkers gave for leaving, but some caseworkers (about 7 percent) cited it as an area of concern. One former county caseworker in Pennsylvania, who had been with the agency for 3 years, reported that her supervisor lacked both leadership qualities and experience. Additionally, one private agency caseworker in Wisconsin, who had left the agency after just 6 months, reported in her exit interview that mentors were good when they were available, but they were often unavailable due to work demands. She also reported that mentorship becomes even more difficult when a group of new caseworkers completes training at the same time, suggesting a lack of tenured staff interested or available to provide such on-the-job guidance. Furthermore, a former caseworker in Arizona reported that communications with her supervisor were mainly through electronic mail—seldom in person. Finally, a former private agency caseworker in Maine said that most interactions with her supervisor seemed punitive rather than educational or supportive in nature. Agency and supervisory support can mitigate the stress of the job and the workload, according to some studies. For example, one California county’s workforce analysis stated that competent and supportive supervision was critical to reducing staff turnover. Another California study—in a county where most caseworkers indicated that they were satisfied with their jobs—reported that these caseworkers rated their relationship with supervisors as one of the most satisfying factors of their work, giving supervisors very high ratings for their effectiveness, personal skills, and ability to help workers collaborate. In addition to their concerns about supervision, caseworkers and supervisors in all four states consistently told us that insufficient training poses a recruitment and retention challenge to their agencies. 
Specifically, they told us that training opportunities were often inadequate to ensure a smooth transition for new recruits into the agency. Despite the fact that public agencies in all four states had both minimum requirements for training new hires and ongoing training for senior workers, some caseworkers said that basic training does not provide new staff with the skills they need to do their jobs. Additionally, they told us that with high caseloads and work priorities, neither supervisors nor tenured staff are able to conduct on-the-job training to compensate. In one urban Texas region, for example, caseworkers told us that new hires are typically assigned between 40 and 60 cases within their first 3 months on the job. According to caseworkers there, high caseloads and the limited time new hires spend in training are often responsible for caseworker turnover. Furthermore, by their supervisors’ estimation, about half of new trainees leave their jobs before completing 1 year. According to these supervisors, many leave, in part, because they are not sufficiently trained and supported to do their jobs. Participation in ongoing training for staff at all levels also appears problematic—caseworkers in each state told us either that available training did not meet their needs or that they did not have time to participate in classes. For example, in Illinois, caseworkers said training was often too time-consuming and irrelevant. They added that, given the administrative burdens of paperwork, they most need training on paperwork management. Furthermore, university Title IV-E program officials in Kentucky said that Title IV-E funds, which support caseworker training and development, cannot be used to provide courses specifically on substance abuse or mental health training, which they noted would be particularly relevant to service delivery. 
Additionally, caseworkers in all states we visited said that, when training was available, high caseloads and work priorities hindered their attendance. In Kentucky, for example, caseworkers told us that, unless training is required, they do not attend because casework accumulates in their absence, undermining the value of any training received. In addition, caseworkers in California said that one program designed to allow part-time work while they pursue an MSW is not practical because caseloads are not reduced and performance expectations do not change. Challenges in training child welfare workers also exist for public agencies that contract with private agencies to provide services. The federal government reimburses states 75 percent for training public agency staff and 50 percent for training private agency employees. In Illinois, where about 80 percent of child welfare services are provided under contract with private agencies, training reimbursement has become a major issue for workforce development. One program director said that many workers have left private child welfare agencies in Illinois because they did not believe that existing training programs adequately prepared them to do their jobs. However, Illinois recently took steps toward addressing these issues by pursuing a waiver from HHS to obtain additional reimbursement for training expenses. According to HHS officials, Illinois is the only state, to date, that has requested and received this spending authority. From the perspective of Illinois officials, however, states that have opted to privatize child welfare services should not be penalized or compelled to apply for a waiver in order to ensure that all service providers are adequately trained. Caseworkers we interviewed in all four states and our analysis of HHS’s CFSRs indicate that recruitment and retention challenges affect children’s safety and permanency by producing staffing shortages that increase the workloads of remaining staff.
As a result, they have less time to establish relationships with children and their families, conduct frequent and meaningful home visits in order to assess children’s safety, and make thoughtful and well-supported decisions regarding safe and stable permanent placements. Our analysis of the 27 available CFSRs corroborates caseworkers’ experiences showing that staff shortages, high caseloads, and worker turnover were factors impeding progress toward the achievement of federal safety and permanency outcomes. Although HHS officials told us that they plan to examine these reviews to better understand the relationship between recruitment and retention and safety and permanency outcomes across the states, they have not yet completed this effort. According to the caseworkers we interviewed in each of the four states, staffing shortages and high caseloads disrupt case management by limiting their ability to establish and maintain relationships with children and families. They told us that gathering information to develop and manage a child’s case requires trust between the child and the caseworker. Due to turnover, this trust is disrupted, making it more difficult for caseworkers who assume these cases to elicit from the child the type of information necessary to ensure appropriate care. For example, when staff change, caseworkers may have to reestablish information to update the case record, frustrating all parties involved. Caseworkers noted that families become hesitant to work with unfamiliar caseworkers, making it difficult to learn the history of the case. The negative effects of turnover can be particularly pronounced in group residential care facilities. According to several residential care caseworkers in California and Illinois, worker turnover compounds children’s feelings of neglect and often results in behavior changes that affect their therapeutic treatment plans. 
These workers said that children channel their feelings of abandonment towards remaining staff, become resistant to therapy, and act violently and aggressively towards other children in the residential facility. In every state we visited, caseworkers said that staffing shortages and high caseloads have had detrimental effects on their abilities to make well-supported and timely decisions regarding children’s safety. Many said that high caseloads require them to limit the number and quality of the home visits they conduct, forcing them to focus only on the most serious circumstances of abuse and neglect. One caseworker in Texas noted that when she does make a home visit, the visit is quick and does not enable her to identify subtle or potential risks to the child’s well-being. Other caseworkers in all four states said that when they assume responsibility for cases as a result of worker turnover, their own caseloads increase and their ability to ensure the safety of the children whose cases they assume is limited. For example, a Texas caseworker told us that, when a former colleague left the agency, he was assigned a case in which the initial investigation had not been done. According to the caseworker, because his own caseload was high before assuming responsibility for the new case, the investigation of the abuse allegation and home visit were delayed by 3 months. As a result of the delay, the claim could no longer be substantiated—the evidence of alleged abuse had healed, no one could corroborate the claim, and the case was closed. By his estimation, if the case initially had been handled more quickly, or if high caseloads were not driving attrition, caseworkers might be better able to identify, mitigate, and/or prevent future situations that could possibly jeopardize children’s safety. Additionally, all of the caseworkers we interviewed told us that transitioning cases to remaining staff takes time and can result in delays or changes to permanency decisions.
Caseworkers in Kentucky noted that this is particularly true when they assume responsibility for a case with inadequate documentation. Given their high caseloads and ASFA’s requirements to file for termination of parental rights (TPR) if the child has been in care 15 of the last 22 months, caseworkers have little time to supplement a child’s file with additional investigations and site visits. As a result, they sometimes make permanency decisions without thoroughly evaluating the adequacy and appropriateness of available options. According to private agency officials in Illinois, this type of unsupported decision making is believed to result in placement disruptions, foster care re-entry, or continued abuse and neglect. In addition, supervisors in Texas told us that caseworkers often determine that filing a TPR under the 15-of-22-month provision is not in the best interests of the child when sufficient evidence is not available to support the TPR. In doing so, the caseworkers are able to continue to conduct their casework. Our examination of the 27 completed CFSRs corroborates caseworkers’ statements about the impact of recruitment and retention challenges on children’s safety, permanency, and well-being. Although identifying workforce deficiencies is not an objective of the CFSR process, in all 27 CFSRs we analyzed, HHS explicitly cited workforce deficiencies—high caseloads, training deficiencies, and staffing shortages—that affected the attainment of at least one assessment measure. While the number of affected assessment measures varied by state, we found that HHS cited these factors for an average of nine assessment measures per state. Furthermore, more than half of the 27 states exceeded this average. For example, Georgia’s and Oregon’s CFSRs showed the greatest number of citations related to workforce deficiencies, with high caseloads, training deficiencies, and staffing shortages affecting the attainment of 14 and 16 assessment measures, respectively.
Additionally, several states’ CFSRs present useful examples of how high caseloads, limited training, and staffing shortages affect the outcomes for children and families in care. For example, in Georgia, reviewers found that case managers’ caseloads were unreasonably high, limiting their ability to conduct meaningful and frequent visits with families and carry out their responsibilities. Additionally, in New Mexico’s CFSR, reviewers cited staff turnover and vacancies as affecting workers’ responsiveness to cases and decreasing their ability to help children achieve permanency. Finally, the District of Columbia’s CFSR describes heavy workloads, high staff turnover, and a climate in which supervisors often call new workers out of training to handle ongoing caseload activities. Table 1 shows the assessment measures affected by the workforce deficiencies in five or more states. According to officials at HHS, few states have consulted the national resource centers for recruitment- and retention-related guidance, and HHS has not yet made these issues a priority in its technical assistance efforts. Although one center is considering studying the impact of recruitment and retention on federal safety outcomes, an action plan is not yet in place. Additionally, although HHS officials who participated in the CFSR process acknowledge that high caseloads and worker turnover can pose barriers to conformity with federal standards, HHS has not yet analyzed this relationship and does not require states to use their PIPs to address existing recruitment and retention challenges. While HHS has used CFSRs to identify best practices concerning safety and permanency planning, officials said the focus on states’ workforce deficiencies and their impact on safety and permanency outcomes has been limited. HHS attributed this limited focus to the absence of federal standards regarding staffing and case management.
Public and private agencies have implemented a variety of workforce practices to address recruitment and retention challenges, but few of these initiatives have been fully evaluated. University partnerships to train current workers or prepare social work students for positions in the child welfare profession are widespread, and two of the four states we visited—Kentucky and California—have demonstrated several benefits of these programs related to recruitment and retention. Additionally, officials and caseworkers in Kentucky and Illinois told us that COA’s standards of lower caseloads, reduced supervisor-to-staff ratios, and increased emphasis on professional credentials have improved their attractiveness to applicants and enhanced worker morale and performance—two factors they noted were critical to retention. Furthermore, improvements to supervision, such as leadership development or mentoring programs, may help alleviate worker stress while other practices, such as the use of competency-based interviews and realistic job previews, also appear to improve agencies’ abilities to hire staff who are better prepared for the job’s requirements. Available evidence suggests that more than 40 state agencies have formed child welfare training partnerships—collaborations between schools of social work and public child welfare agencies—to provide stipends to participating students through use of federal Title IV-E dollars and state contributions. These programs are designed to prepare social work students for careers in the child welfare profession and develop the skills of current workers. The programs require that students receiving stipends for the study of child welfare commit to employment with the state or county public child welfare agency for a specified period of time. The length of the contractual employment obligation—usually 1 to 2 years—and the curriculum content each program offers differ by state and sometimes by university.
While few in number, available studies on the impact of Title IV-E training partnerships suggest that they improve worker retention. One study tracked four cohorts of students who participated in a training partnership and found that overall, 93 percent continued to be employed in the child welfare profession—and 52 percent remained with public agencies—well beyond the minimum required by their employment obligation. Furthermore, two of the states we visited, Kentucky and California, conducted similar analyses of employee graduates of Title IV-E programs, each finding that over 80 percent of participants remained with the state agencies after their initial work obligations concluded (see table 2). Kentucky state officials attribute these retention rates, in part, to the intensive coursework, formal internships, and rigorous training included in the curriculum of these training partnerships. Evaluations in Kentucky and California also suggest that training partnerships improved worker competence. In both states, evaluations found that staff hired through specially designed IV-E child welfare programs performed better on the job and applied their training more deftly than employees hired through other means. In their evaluation of Kentucky’s training partnership program, researchers tested all new hires—those who had completed the program and those who had not—after the agency’s core competency training. Controlling for undergraduate grade point averages, the study found that those who completed the training scored better on the agency’s test of core competencies. Additionally, Kentucky supervisors, when surveyed, reported that they considered certification students to be better prepared for their job than other new employees. The California study also compared training partnership participants with nonparticipants and found similar results.
Those who participated in training partnerships scored higher on a test of child welfare knowledge and reported greater competency in their work and a more realistic view of child welfare work than those who had not participated. These studies and our discussion with caseworkers in all four states suggest that while training partnerships may increase workers’ skill levels, caseworkers may still feel unprepared for the realities of child welfare practice. The California study cited earlier found that IV-E graduates did not have higher levels of job satisfaction or lower levels of stress than their non-IV-E counterparts, and caseworkers who graduated from the Kentucky certification program told us that even with the training, they still felt unprepared to manage complex cases and were constantly frustrated with the burdens of paperwork documentation. Systemic improvements in managing child welfare, such as accreditation and the enhancement of supervisor skills, help alleviate worker stress by improving the working environment. According to state officials and CWLA staff, accreditation facilitates high-quality service delivery, in part, because it requires reasonable caseloads and reduces the number of staff supervisors must oversee. Additionally, caseworkers and their managers told us that supervisory training that focuses on leadership skills and case management practices improves overall communication and aids in staff decision making. Since 1977, the Council on Accreditation for Children and Family Services has accredited public and private child welfare agencies that comply with organizational, management, and service standards of child, family, and behavioral healthcare services. 
Only two states—Illinois and Kentucky—have fully accredited child welfare systems, and caseworkers in Illinois and Kentucky told us that adhering to these standards—in particular, those related to caseloads and supervision—has improved their attractiveness to applicants and enhanced worker morale and performance, two factors they noted were critical to retention. COA’s specific standards related to maximum caseload size, supervisor-to-staff ratios, and professional credentials for caseworkers and supervisors are shown in appendix II. According to state officials in both Illinois and Kentucky, accreditation has improved retention and helped their agencies better focus on children’s outcomes. Illinois’ Department of Children and Family Services received its accreditation in June 2000. Since that time, all private agencies that contract with the state agency are reported to have also received accreditation. According to the state’s child welfare director, the pursuit of accreditation stemmed from a court order mandating smaller caseloads for staff and the fact that the agency was confronting receivership and facing increased media scrutiny. According to several Illinois supervisors, accreditation changed the operations of the agency—they now operate with reduced caseloads, improved internal communication, and increased public confidence in the system. Furthermore, to prepare for reaccreditation, staff engage in a routine practice called “peer review” to determine how their caseload management contributes to the state’s safety and permanency outcomes measures. According to one Illinois supervisor, preparing for these peer reviews has united staff in a common goal and increased their attentiveness to service delivery.
Kentucky’s Cabinet for Families and Children became accredited in October 2002, and state officials there said that accreditation has helped the agency professionalize child welfare staff by emphasizing appropriate educational backgrounds, improving training, and building pride within the organization. These officials also said that accreditation has strengthened recruitment and improved retention because the agency is focused on hiring qualified people who know what to expect on the job. According to Kentucky supervisors and staff, accreditation was also the driving force behind the creation of the agency’s new MSW stipend program, its push towards continuous service quality improvement for children and families, and higher expectations for staff performance. To obtain these benefits, accreditation requires sustained financial and organizational commitment. Even before applying, agencies devote significant dollars to make their services and practices compliant with COA eligibility standards. This process can entail reforming personnel policies, hiring more staff, or upgrading communication and data systems. Furthermore, the costs associated with 4-year accreditation can range from $5,700 to more than $500,000, depending on an agency’s annual budget. Once accredited, filling vacancies to maintain rigorous caseload standards, for example, becomes a constant and expensive demand on agencies’ resources. According to an HHS Inspector General report on the topic, while many agencies that receive accreditation may be performing well already, accreditation status does not guarantee high-quality service. Caseworkers in Illinois and Kentucky also mentioned this, telling us that they continue to cut corners by limiting home visits or falling behind on their documentation in order to manage both the volume and the complexity of their caseloads.
Furthermore, some agencies’ staffing shortages are so severe that implementing COA’s educational requirements might further restrict the pool of qualified applicants. In some cases, personnel standards, such as minimum degree requirements, may conflict with states’ merit systems, particularly those that govern personnel policies and procedures. Unlike Illinois and Kentucky, which were able to revise their position classifications, other states may be unable or unwilling to comply with this standard. According to a state official in Texas, the state’s child welfare agency has no plans to pursue accreditation because caseloads—though recently reduced—are still well above COA’s standard, and the agency is currently struggling with staff turnover and high vacancy rates. States have taken a number of approaches to enhance staff supervision. In Illinois, all supervisors are required to have an MSW, not only because COA requires it, but also because state officials believe the degree improves managers’ competencies and knowledge. Kentucky is also moving toward requiring MSWs of supervisors for the same reasons. Currently, Kentucky prefers that caseworkers have a minimum of 5 years’ experience before they can be promoted to supervisory positions. Kentucky also has a supervisory development training series that includes topics such as conflict resolution and supervisory skill mastery. Similarly, Texas offers tenured managers courses in decision making, program administration, and leadership. By late 2003, the agency plans to have these managers serving as mentors and leadership coaches for its new supervisors. Kentucky has also taken steps to enhance the mentoring of new caseworkers.
A pilot program—designed for new hires who have not participated in the undergraduate IV-E funded child welfare certification program—affords new caseworkers, for their first 3 months on the job, the opportunity to observe and practice newly acquired skills under the tutelage of tenured employees selected for their superior performance in the agency. While an initial assessment of the program indicated that employees’ confidence in their skills improved, additional refinements are under way and must be completed before the program is implemented statewide. To avoid hiring decisions that may later result in turnover or poor performance, some agencies have begun to develop hiring competencies, use more realistic portrayals of an agency’s mission, and offer recruitment bonuses. While some evidence exists that these practices improve recruitment and retention, few evaluations of their success have been conducted. Many states have created lists of desired worker competencies to evaluate the skills of potential hires and match their expectations with agency needs. The objective of these tools is to select candidates who may be satisfied with and successful in the agency once employed. Although Illinois requires certain academic credentials of all new hires, the state also uses an applicant screening tool to assess the education, writing ability, verbal ability, cultural sensitivity, and ethics and judgment of candidates. The screening requires candidates to complete several verbal or written vignettes that represent realistic situations a child welfare investigator or caseworker might encounter. Candidates are graded on how they resolve situations as well as on technical skills, such as writing and verbal ability.
Additionally, recruiters in other states, such as Colorado, Maine, Nebraska, and Wisconsin, require candidates to demonstrate the required competencies in oral and written communication and explain how their interests, strengths, and academic credentials or experiences fit with child welfare work. Furthermore, Delaware’s child welfare agency and one county in Texas are attempting to maintain new hire pools—reserves of newly hired and trained caseworkers—in order to fill vacancies quickly with competent and well-prepared staff. Agencies have also begun to use “realistic job previews”—videos that portray caseworkers confronting hostile families, working with the courts, and learning agency practices and protocols. Nebraska’s child welfare agency developed a 25-minute realistic job preview video, which is required viewing before any child welfare applicant can even schedule an interview with agency officials. This video—similar to ones used in some parts of Texas and California—describes the requirements of maintaining accurate records and tracking children and families’ progress. The video also portrays the camaraderie caseworkers and supervisors may share and documents the emotions caseworkers felt when actions on their cases were either taken or delayed. Furthermore, when piloting its use, researchers in Nebraska found that the realistic job preview prompted ill-suited applicants to self-select out of job competition, allowing the agency to focus its recruitment efforts on the most eager and informed job candidates. Another recruitment and retention practice that appears to help child welfare agencies hire competent staff has been the use of hiring or signing bonuses.
Although some child welfare agencies choose instead to work towards more permanent increases in annual compensation packages, child welfare officials in Riverside County, California, who have implemented this practice perceive it as a necessary tool to fill their growing number of vacancies. Furthermore, fields comparable to child welfare, such as nursing—a profession in which an estimated 120,000 positions went unfilled last year—and teaching, have used hiring bonuses in an attempt to reduce their labor shortages. Last year, according to one study, 19 states and the District of Columbia offered incentive programs, such as signing bonuses, to relieve teaching shortages. In Riverside County, the social services department began offering a hiring bonus in June 2000. New hires for one difficult-to-fill caseworker position, which requires an MSW, are currently offered $500 upon hiring, $500 after 6 months, and another $1,000 after 1 year of service. An additional $2,000 is granted annually to these hires until they reach their fifth year of employment with the agency. Little evidence exists across occupations to determine whether or not incentive programs, such as bonuses, actually work to recruit and retain employees. In Riverside County, human resource managers said that they credit the monetary incentive with improving their ability to hire more qualified workers, reduce turnover, and improve service to clients. The county has not determined, however, what percentage of those hired under the bonus plan have remained with the agency after 2 years on the job. Furthermore, Riverside has not done any studies to isolate the impact of the bonus on employees’ decisions to stay. Available evidence suggests that public and private child welfare agencies are experiencing difficulty hiring, training, and retaining their workforces. 
The absence of a stable, skilled, and attentive workforce threatens these agencies’ ability to provide services for the more than 800,000 children estimated to spend some time in foster care each year. For example, when staff shortages lead to additional casework that delays decision-making, states have taken advantage of the ASFA exemptions to the 15-of-22-month provision intended to move children more quickly into permanent homes. While interviews with child welfare workers in four states and our examination of CFSRs indicate that workforce issues impair agencies’ abilities to meet children’s needs, several workforce practices do appear to improve recruitment and retention. HHS’s role in identifying and addressing the challenges agencies face, however, has been limited. For example, HHS has not yet prioritized its research agenda to identify and/or assess promising workforce practices. Additionally, it has not provided targeted assistance to states to ensure that their PIPs adequately address the caseload, training, and staffing issues cited in the CFSR process. Engaging in such activities could enhance states’ capacities to improve their performance on safety and permanency assessment measures, resulting in improved outcomes for children. Because of the reported impact staffing shortages and high caseloads have on the attainment of federal outcome measures, we recommend that the Secretary of HHS take actions that may help child welfare agencies address the recruitment and retention challenges they face. Such efforts may include HHS (1) using its annual discretionary grant program to promote targeted research on the effectiveness of perceived promising practices and/or (2) issuing guidance or providing technical assistance to encourage states to use their program improvement plans to address the caseload, training, and staffing issues cited in the CFSR process. We obtained comments on a draft of this report from HHS’s Administration for Children and Families.
These comments are reproduced in appendix III. ACF also provided technical clarifications, which we incorporated when appropriate. ACF generally agreed with our findings and said that our report highlights many of the concerns that the department identified in its analysis of the 32 Child and Family Services Reviews completed to date. Specifically, ACF noted that a direct relationship was found between the consistency and quality of caseworker visits with children and families and the achievement of case outcomes evaluated in the reviews. ACF also confirmed that high caseloads are a major factor in staff turnover for those states in which a review was completed. ACF also concurred with our recommendation, saying that it has begun to explore the effectiveness of child welfare training programs, with an emphasis on lessons learned and best practices. However, ACF stressed that it has no authority to require states to address caseload issues in their program improvement plans or to enforce any caseload standard. Further, although ACF agreed that high caseloads also impact the ability of child welfare agencies to help families achieve positive outcomes, it said that the federal government has limited resources to assist states in the area of staff recruitment and retention and noted that technical assistance offered by the 10 resource centers is focused specifically on those areas, such as permanency timeframes, where federal legislative or regulatory requirements exist that states must achieve. We believe that ACF’s stated actions represent a first step and, as we recommended, that it should take additional actions to help child welfare agencies address other facets of their recruitment and retention challenges. We also provided a copy of our draft report to child welfare officials in the four states we visited—California, Illinois, Kentucky, and Texas.
Each of these states generally agreed with our findings and provided various technical comments, which we also incorporated when appropriate. We are sending copies of this report to the Secretary of Health and Human Services, state child welfare directors, and other interested parties. We will make copies available to others on request. If you or your staff have any questions or wish to discuss this material further, please call me at (202) 512-8403 or Diana Pietrowiak at (202) 512-6239. Key contributors to this report are listed in appendix IV. This report is available at no charge on GAO’s Web site at http://www.gao.gov.

In order to characterize the reasons for employee turnover, we engaged in the first known national attempt to obtain and classify exit interview documents from former child welfare caseworkers and supervisors. To begin this analysis, we designed a survey to learn (1) how many agencies were conducting and documenting exit interviews with staff who severed their employment and (2) if these agencies would be willing to share these documents with us. We distributed the survey to the directors of all 40 state-administered child welfare agencies (including the District of Columbia) and to a state-stratified sample of directors from 444 county child welfare agencies in each of 10 county-administered states. In addition, we sent our survey to a random sample of 281 private child welfare agencies from a universe of 945 with Child Welfare League of America (CWLA) membership. Responses to this survey indicated that 18 states, 39 counties, and 51 private agencies were conducting, documenting, and willing to share the exit interviews of staff who severed their employment between January 1 and May 31, 2002. After follow-up, we obtained and analyzed a total of 585 exit interview documents from 17 states, 40 counties, and 19 private child welfare agencies across the country.
In addition, we received and reviewed summary reports—in lieu of or to supplement actual exit interview documents—from 5 states and 7 counties. Because of the low number of responses, we were unable to generalize the results of our analysis beyond the data actually received. In addition to the exit interview analysis, we conducted interviews with about 50 child welfare practitioners and researchers to determine which states were experiencing recruitment and retention challenges and how these were being addressed. We obtained and reviewed relevant literature and selected four states in which to conduct comprehensive site visits—California, Illinois, Kentucky, and Texas. We chose these states in part due to their geographic diversity, the variation in their caseload sizes, and their abilities to provide both urban and rural perspectives on the issues. These states also varied in terms of two important characteristics of child welfare programs—county versus state administration and reliance on private agencies for the delivery of services. In each state, we interviewed management, current caseworkers, and supervisors at various private and public agencies; obtained and reviewed relevant agency documents and data on vacancy, turnover, salary, and caseload rates; and talked with appropriate child welfare associations, advocacy groups, and researchers. To determine the extent to which recruitment and retention challenges affect children’s safety, permanency, and well-being, we analyzed the 27 Child and Family Services Reviews (CFSRs) that the Department of Health and Human Services (HHS) had completed and released to us by December 1, 2002. Specifically, we conducted a content analysis, noting each instance in which HHS explicitly cited high caseloads, insufficient training, and staffing shortages as affecting the attainment of all 45 CFSR assessment measures.
In addition to the CFSR analysis, we obtained evidence on the link between recruitment and retention challenges and outcomes from conversations with caseworkers and managers during our site visits and from available research on the topic obtained through consultation with researchers and practitioners. To determine the workforce practices public and private agencies have implemented to confront recruitment and retention challenges, we relied on site visits to the four states, interviews with experts and researchers, and relevant studies that highlighted those strategies with promise. We were not able to conclusively determine whether such strategies were or will be successful, because most agencies did not conduct research that could isolate the effect of the practices we investigated. We conducted our work between March 2002 and January 2003 in accordance with generally accepted government auditing standards.

Appendix II: Selected Council on Accreditation for Children and Family Services Standards

Standard: At a minimum, personnel assigned to the child protective service have (a) a master’s degree in social work or a comparable human service field from an accredited institution and 2 years of direct practice experience or (b) a bachelor’s degree in social work or a comparable human service field and supervision by a person with a master’s degree in social work or a comparable human service field who has 2 years of experience in the delivery of child protective services.

Direct service personnel are qualified according to the following criteria: (a) previous experience in providing adoption services or family and children services, (b) a bachelor’s degree from an accredited program of social work education, or (c) a bachelor’s degree in another human service field. COA Interpretation (S14.10.02): Recently hired direct service providers who do not have prior experience in adoption receive 10 or more hours of in-service adoption training per year.
Family foster care and kinship care workers have (a) an advanced degree from an accredited program of social work education or a comparable human service field or (b) a bachelor’s degree in social work or a related human service field, with supervision by a person with an advanced degree in social work or a comparable human service field who has at least 2 years’ experience in services to families and children.

The kinship care service is staffed according to the following: (a) kinship care workers have a bachelor’s degree in social work or another related human service field and (b) supervisors possess an advanced degree from an accredited program of social work education or another comparable human service field and have experience working with families and children.

Residential counselors and/or child care workers have (a) a bachelor’s degree (if a few extensively experienced and highly trained persons lack a bachelor’s degree and/or are in the process of obtaining the degree, their training and experience is thoroughly documented); (b) the personal characteristics and experience to provide appropriate care to residents, win their respect, guide them in their development, manage a home effectively, and participate in the overall treatment program; (c) the temperament to work with and care for children, youth, or adults with special needs, as appropriate; and (d) basic skills in first aid and the identification of medical needs.

Direct service providers/practitioners are qualified by (a) an advanced degree in social work or a comparable human service field from an accredited institution and at least 2 years’ experience in family and children’s services and/or (b) a bachelor’s degree in social work or another human service field from an accredited institution and at least 3 years’ post-degree experience in family and children’s services. COA Interpretation (S20.7.02): It is common for an interdisciplinary team to work collaboratively with families.
This team may be composed of individuals from the following fields: social work, mental health, special education, health (including nursing and public health), and juvenile justice. Examples of acceptable exceptions, if they represent a small percentage of the whole, include a BSW with only 2 years of post-degree experience or an MSW with experience in another area of practice not directly applicable to family-centered services.

Standard: Under no circumstances does a child protective worker’s caseload exceed (a) 15 cases at one time that involve intensive intervention or investigation; (b) 30 cases at one time that involve case coordination, continuing services, or follow-up; and/or (c) a proportionate mix of the above. COA Interpretation (S10.7.07): A child protective service case is defined as a child, unless a family assessment model or equivalent is used. In this situation, the organization must provide average caseload sizes under categories (a) and (b) and a rationale.

The organization structures its services so that adoption caseloads (a) do not exceed 25 families per worker when counseling birth families, preparing and assessing adoptive applicants for infant placements, and supporting these families following placement; (b) do not exceed 12 children per worker when preparing children for adoption who are older or who have special needs; (c) do not exceed 15 families per worker when preparing and assessing adoptive applicants for the placement of children who are older or have special needs and providing support to these families following placement; and (d) are adjusted for case complexity, travel, and nondirect service time.

Caseloads for family foster and kinship workers do not exceed 18 children, and workers are able to perform their functions within these guidelines. Treatment foster care workers have caseloads of no more than 8 treatment foster care children. Kinship care caseload sizes do not exceed 12-15 families per worker.
COA note: Reviewers may vary caseload limits set by rating indicators if the organization can demonstrate that (1) its workers do not have responsibility for a major, routine component of case work (i.e., planning) and (2) a time study has been done to adequately justify the organization’s caseload limits. Caseloads for direct care personnel do not exceed 12 residents.

For family-centered casework programs, caseloads are generally limited to 12 or fewer cases per direct service provider and are adjusted downward according to (a) internal organizational procedures governing caseload size that address the relationship between target population needs, duration and intensity of service, the number of service hours needed based on the issues presented, and the personnel model chosen by the organization; (b) the size of teams, if the service is team-delivered; (c) the need for extra attention in high-risk families; and (d) the need for balance between families at beginning stages of work, families moving toward termination, and families presenting different levels of need.

For intensive family preservation programs, the organization limits caseloads to approximately 2 to 6 families per direct service provider or team and, within that range, caseloads are adjusted according to (a) internal organization procedures governing caseload size that address the relationship between target population needs, duration and intensity of service, the number of service hours needed based on the issues presented, and the personnel model chosen by the organization; (b) the need for extra attention in cases where there is active suicidal, homicidal, or assault behavior, failure-to-thrive or severe neglect, or increased degree of risk of harm to children, families, or the community; and (c) the need for balance between families at the beginning stages of work, families moving toward termination, and families presenting different levels of need.
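The child protective caseload ceiling above allows 15 intensive cases, 30 continuing cases, or “a proportionate mix of the above.” One way to check a mixed caseload is to treat each case type as a fraction of its ceiling; the sketch below is illustrative only, and both the function name and the sum-of-fractions reading of “proportionate mix” are assumptions, not COA’s own published formula.

```python
def within_coa_cps_limit(intensive_cases, continuing_cases,
                         max_intensive=15, max_continuing=30):
    """Check a child protective caseload against the COA ceilings.

    "Proportionate mix" is interpreted here as the two weighted
    fractions summing to no more than 1 -- an assumption made for
    illustration, not COA's own formula.
    """
    load = intensive_cases / max_intensive + continuing_cases / max_continuing
    return load <= 1

print(within_coa_cps_limit(15, 0))   # at the intensive-only ceiling: True
print(within_coa_cps_limit(12, 10))  # 12/15 + 10/30 exceeds 1: False
```

Under this reading, 10 intensive and 10 continuing cases (two-thirds plus one-third of the respective ceilings) sit exactly at the limit.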
Standard: A child protective service supervisor is responsible for supervising no more than (a) seven workers who are experienced and professionally trained and/or (b) five workers who have less professional education and experience. The maximum supervisor to caseworker ratio is 1:5. The standards for supervisory workloads are: (a) one full-time equivalent supervisor for each of five to eight practitioners or teams and (b) appropriately modified for total number of families represented, experience levels of practitioners, geographic distances, size of teams, and other relevant factors.

In addition to those named above, Gwendolyn Adelekun, Nancy Cosentino, and Nila Garces made key contributions to this report. Barbara Alsip, Avrum Ashery, Patrick DiBattista, Catherine Hurley, and Luann Moy also provided key technical assistance.

Albers, Eric C., et al. “Children in Foster Care: Possible Factors Affecting Permanency Planning.” Child and Adolescent Social Work Journal, Vol. 10, No. 4, August 1993.
Alliance for Children and Families (Alliance), American Public Human Services Association (APHSA), and Child Welfare League of America (CWLA). The Child Welfare Workforce Challenge: Results from a Preliminary Study. Presented at Finding Better Ways, Dallas, Texas. May 2001.
Alwon, Floyd J. and Andrew L. Reitz. “Empty Chairs.” Children’s Voice. Child Welfare League of America. November 2000.
American Federation of State, County, Municipal Employees. Double Jeopardy: Caseworkers at Risk Helping At-Risk Kids: A Report on the Working Conditions Facing Child Welfare Workers. 1998.
Barbee, A.P. “Creating a Chain of Evidence for the Effectiveness of Kentucky’s Training System.” For CFSR. March 2003.
Barbee, A.P., et al. “The Importance of Training Reinforcement in Child Welfare: Kentucky’s Field Training Specialist Model.” Child Welfare (forthcoming).
Barth, Michael C. and Yvon Pho. “The Labor Market for Social Workers: A First Look.” Prepared for the John A. Hartford Foundation, Inc.
February 2001.
Bernotavicz, Freda and Amy Locke Wischmann. “Hiring Child Welfare Caseworkers: Using a Competency-Based Approach.” Public Personnel Management. International Personnel Management Association. Spring 2000.
California Alliance of Child and Family Services. Comparison of Foster Care Funding for the Wages of Child Care Workers and Social Workers in Group Homes with Wages in Other Occupations. July 1, 2001.
Child Welfare League of America, Research to Practice Initiative. Annotated Bibliography - Child Welfare Workforce. June 2002.
Child Welfare League of America. Standards of Excellence for Family Foster Care Services. 1995.
Child Welfare League of America. Standards of Excellence for Kinship Care Services. 2000.
Child Welfare League of America. “Minimum Education Required by State Child Welfare Agencies, Percent, By Degree Type, 1998.” State Child Welfare Agency Survey. 1999.
Cicero-Reese, Bessie and Phyllis N. Clark. “Research Findings Suggest Why Child Welfare Workers Stay on Job.” Partnerships for Child Welfare News Letter. Vol. 5, No. 5. February 1998.
Council on Accreditation. Standards and Self-Study Manual. 7th Edition, 2001.
Cyphers, Gary. Report from the Child Welfare Workforce Survey: State and County Data and Findings. American Public Human Services Association. May 2001.
Dhooper, S.S., D.D. Royse, and L.C. Wolfe. “Does Social Work Education Make a Difference?” Social Work, Vol. 35, No. 1. 1990.
Dickinson, Nancy S., and Robin Perry. Do MSW Graduates Stay in Public Child Welfare? Factors Influencing the Burnout and Retention Rates of Specially Educated Child Welfare Workers. The California Social Work Education Center. University of California at Berkeley, August 1998.
Doelling, Carol Nesslein, and Barbara Matz. Social Work Career Development Group. Job Market Report on 2000 MSW Graduates. George Warren Brown School of Social Work. Washington University, St. Louis, MO. N.p., n.d.
Doelling, Carol Nesslein, and Karen Joseph Robards.
Excerpts from 1996-2000 Alumni Survey Self-Study Report. George Warren Brown School of Social Work. Washington University in St. Louis, St. Louis, MO. August 2001.
Fox, S., D. Burnham, A.P. Barbee, and P. Yankeelov. “Public School to Work: Social Work that is! Maximizing Agency/University Partnerships in Preparing Child Welfare Workers.” Training and Development in Human Services, I. 2000.
Fox, S., V. Miller, and A.P. Barbee. “Finding and Keeping Child Welfare Workers: Effective Use of Title IV-E Training Funds.” Journal of Human Behavior in the Social Environment (forthcoming).
Gansle, Kristin and Bert Ellett. “Louisiana Title IV-E Program Begins Evaluation Process.” Partnerships for Child Welfare, Vol. 5, No. 5. February 1998.
Graef, Michelle I. and Erick L. Hill. “Costing Child Protective Services Staff Turnover.” Child Welfare. Sept./Oct. 2000.
Jones, Loring P. and Amy Okamura. “Reprofessionalizing Child Welfare Services: An Evaluation of Title IV-E Training.” Research on Social Work Practice. September 2000.
Malm, Karin, et al. Running to Keep in Place: The Continuing Evolution of Our Nation’s Child Welfare System. Urban Institute, Occasional Paper Number 54. October 2001.
Meyer, Lori. “State Incentive Programs for Recruiting Teachers: Are They Effective in Reducing Shortages?” Issues in Brief, National Association of State Boards of Education. October 2002.
The Network for Excellence in Human Services. Workforce Analysis for Riverside County Department of Public Social Services. October 2001.
The Network for Excellence in Human Services. Workforce Analysis for Imperial County Department of Social Services. March 2001.
New York State Office of Children and Family Services, Bureau of Training. 2001 Caseworker Turnover Survey. May 2002.
Pasztor, Eileen Mayers, et al. Demand for Social Workers in California. California State University, Long Beach. April 2002.
Robin, S. and C.D. Hollister.
“Career Paths and Contributions of Four Cohorts of IV-E Funded MSW Child Welfare Graduates.” Journal of Health and Social Policy, Vol. 15, No. 3/4. 2002.
Scannapieco, Maria and Kelli Connell-Carrick. “Do Collaborations with Schools of Social Work Make a Difference for the Field of Child Welfare? Practice, Retention, and Curriculum.” Journal of Human Behavior in the Social Environment. 2003.
South Carolina Association of Children’s Homes and Family Services. Comparative Study of Salaries and Benefits of Direct Care Workers in Member Agencies and Selected South Carolina State Government Positions. Lexington, S.C.: January 2000.
U.S. Department of Health and Human Services, Administration for Children and Families, Administration for Children, Youth, and Families, Children’s Bureau. Changing Paradigms of Child Welfare Practice: Responding to Opportunities and Challenges. 1999 Child Welfare Training Symposium. June 1999.
U.S. Department of Health and Human Services, Administration for Children and Families, Administration for Children, Youth, and Families, Commissioner’s Office of Research and Evaluation, and the Children’s Bureau. National Survey of Child and Adolescent Well-Being (NSCAW). State Child Welfare Agency Survey: Report. June 2001.
U.S. Department of Health and Human Services, Office of Inspector General, Office of Evaluation and Investigations. Accreditation of Public Child Welfare Agencies. March 1994. OEI-04-94-00010.
U.S. Department of Labor, Bureau of Labor Statistics. 2000 National Occupational Employment and Wage Estimates.
Zlotnik, Joan Levy. “Enhancing Child Welfare Service Delivery: Promoting Agency-Social Work Education Partnerships.” Policy and Practice, Vol. 59, No. 1. 2001.
Zlotnik, Joan Levy. “Selected Resources on the Efficacy of Social Work for Public Child Welfare Practice.” Council on Social Work Education, June 11, 1999.
Foster Care: Recent Legislation Helps States Focus on Finding Permanent Homes for Children, but Long-Standing Barriers Remain. GAO-02-585. Washington, D.C.: June 28, 2002.
District of Columbia Child Welfare: Long-Term Challenges to Ensuring Children’s Well-Being. GAO-01-191. Washington, D.C.: December 29, 2000.
Child Welfare: New Financing and Service Strategies Hold Promise, but Effects Unknown. GAO/T-HEHS-00-158. Washington, D.C.: July 20, 2000.
Foster Care: States’ Early Experiences Implementing the Adoption and Safe Families Act. GAO/HEHS-00-1. Washington, D.C.: December 22, 1999.
Foster Care: HHS Could Better Facilitate the Interjurisdictional Adoption Process. GAO/HEHS-00-12. Washington, D.C.: November 19, 1999.
Foster Care: Effectiveness of Independent Living Services Unknown. GAO/HEHS-00-13. Washington, D.C.: November 10, 1999.
Foster Care: Kinship Care Quality and Permanency Issues. GAO/HEHS-99-32. Washington, D.C.: May 6, 1999.
Juvenile Courts: Reforms Aim to Better Serve Maltreated Children. GAO/HEHS-99-13. Washington, D.C.: January 11, 1999.
Child Welfare: Early Experiences Implementing a Managed Care Approach. GAO/HEHS-99-8. Washington, D.C.: October 21, 1998.
Foster Care: Agencies Face Challenges Securing Stable Homes for Children of Substance Abusers. GAO/HEHS-98-182. Washington, D.C.: September 30, 1998.
Child Protective Services: Complex Challenges Require New Strategies. GAO/HEHS-97-115. Washington, D.C.: July 21, 1997.
Foster Care: State Efforts to Improve the Permanency Planning Process Show Some Promise. GAO/HEHS-97-73. Washington, D.C.: May 7, 1997.
Foster Care: State Efforts to Expedite Permanency Hearings and Placement Decisions. GAO/T-HEHS-97-76. Washington, D.C.: February 27, 1997.
Child Welfare: States’ Progress in Implementing Family Preservation and Support Activities. GAO/HEHS-97-34. Washington, D.C.: February 18, 1997.
Permanency Hearings for Foster Children. GAO/HEHS-97-55R. Washington, D.C.: January 30, 1997.
Child Welfare: Complex Needs Strain Capacity to Provide Services. GAO/HEHS-95-208. Washington, D.C.: September 26, 1995.
Child Welfare: Opportunities to Further Enhance Family Preservation and Support Activities. GAO/HEHS-95-112. Washington, D.C.: June 15, 1995.
Foster Care: Health Needs of Many Young Children Are Unknown and Unmet. GAO/HEHS-95-114. Washington, D.C.: May 26, 1995.
CERCLA was passed in late 1980, in the wake of the discovery of toxic waste sites such as Love Canal, and it created a mechanism for responding to existing contamination. CERCLA established a trust fund from which EPA receives annual appropriations for Superfund program activities. The Superfund trust fund has received revenue from four major sources: taxes on crude oil and certain chemicals, as well as an environmental tax assessed on corporations based on their taxable income; appropriations from the general fund; fines, penalties, and recoveries from responsible parties; and interest accrued on the balance of the fund. In the program’s early years, dedicated taxes provided the majority of revenue to the Superfund trust fund. However, in 1995, the authority for these taxes expired and has not been reinstated. Since 2001, appropriations from the general fund have constituted the largest source of revenue for the trust fund. After the expiration of the tax authority, the trust fund balance reached its peak of $5.0 billion at the start of fiscal year 1997; in 1998, the balance began decreasing. Figure 1 shows changes in the balance of the Superfund trust fund from fiscal years 1981 through 2009. At the start of fiscal year 2009, the trust fund had a balance of $137 million. EPA’s Superfund program receives annual appropriations from the trust fund, which is in turn supported by payments from the general fund. Since fiscal year 1981, the annual appropriation to EPA’s Superfund program has averaged approximately $1.2 billion in non-inflation-adjusted (nominal) dollars. Since fiscal year 1998, however, congressional appropriations have generally declined when adjusted for inflation. Figure 2 shows appropriation levels in nominal and constant 2009 dollars since fiscal year 1981.
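The inflation adjustment behind figure 2 is a simple rescaling of each year's nominal appropriation by a price index. The sketch below illustrates the arithmetic with an invented 1981 deflator value; the report does not state which index it used, so treat the numbers as placeholders.

```python
# Convert a nominal appropriation to constant 2009 dollars using a price
# deflator. The 1981 deflator value below is an illustrative assumption,
# not the actual index underlying the report's figure 2.
DEFLATOR_2009 = 1.000   # base year index
DEFLATOR_1981 = 0.465   # hypothetical 1981 index relative to 2009

def to_constant_2009(nominal_dollars, year_deflator):
    """Scale a nominal amount to constant 2009 dollars."""
    return nominal_dollars * (DEFLATOR_2009 / year_deflator)

# A $1.2 billion appropriation in 1981 would be worth considerably more
# when expressed in 2009 dollars:
print(round(to_constant_2009(1.2e9, DEFLATOR_1981) / 1e9, 2))  # 2.58
```

The same appropriation therefore shrinks in real terms as the deflator rises over time, which is why the report notes a decline in inflation-adjusted appropriations even when nominal levels held roughly steady.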
The Superfund cleanup process begins with the discovery of a potentially hazardous site or the notification to EPA of possible releases of hazardous substances that may threaten human health or the environment. Citizens, state agencies, EPA regional officials, and others may alert EPA to such threats. EPA regional offices use a screening system, called the HRS, to numerically assess the potential of sites to pose a threat to human health and the environment. The HRS scores sites on four possible pathways of exposure: groundwater, surface water, soil, and air. Those sites with sufficiently high scores are eligible for proposal to the NPL. EPA regions submit sites to EPA headquarters for possible listing on the NPL on the basis of a variety of factors, including the availability of alternative state or federal programs that may be used to clean up the site. EPA has considered the NPL the “tool of last resort”; thus, EPA has looked to alternative EPA and individual state programs for hazardous waste cleanup before listing a site on the NPL. However, according to EPA headquarters officials, EPA’s use of the NPL as a tool of last resort has recently changed, and EPA now views the NPL as one of a number of cleanup options and uses whichever option is most appropriate for site cleanup. In addition, EPA officials noted that, as a matter of policy, EPA seeks concurrence from the governor or environmental agency head of the state in which the site is located before listing the site. Sites that EPA would like to list on the NPL are proposed for listing in the Federal Register. After a period of public comment, EPA reviews the comments and decides whether to formally list the sites as “final” on the NPL. Once EPA lists a site, it initiates a process to investigate the extent of the contamination, decide on the actions that will be taken to address contamination, and implement those actions. This process can take many years—or even decades.
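The way the four pathway scores combine into a single site score can be illustrated with a short sketch. The root-mean-square formula and the 28.5 listing cutoff reflect the published Hazard Ranking System regulation, but the pathway scores fed in below are invented inputs; the real HRS derives each pathway score from many underlying likelihood, waste, and target factors.

```python
import math

NPL_CUTOFF = 28.5  # the HRS score at or above which a site is NPL-eligible

def hrs_site_score(groundwater, surface_water, soil, air):
    """Combine the four pathway scores (each 0-100) into an overall site
    score using the HRS root-mean-square formula."""
    pathways = [groundwater, surface_water, soil, air]
    return math.sqrt(sum(s ** 2 for s in pathways) / 4)

# A severe threat along a single pathway can, by itself, exceed the cutoff:
score = hrs_site_score(groundwater=100, surface_water=0, soil=0, air=0)
print(round(score, 1), score >= NPL_CUTOFF)  # 50.0 True
```

Because the pathways are squared before averaging, one high-scoring pathway dominates the result, which is consistent with listing sites that pose a serious threat through even a single route of exposure.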
Figure 3 outlines the process EPA typically follows, from listing a site on the NPL through deleting it from the NPL. Specifically, after a site is listed, EPA or a responsible party will begin the remedial process by conducting a two-part study of the site: (1) a remedial investigation to characterize site conditions and assess the risks to human health and the environment, among other actions, and (2) a feasibility study to evaluate various options to address the problems identified through the remedial investigation. The culmination of these studies is a ROD, which identifies EPA’s selected remedy for addressing the site’s contamination. Cleanup at a site is often divided into smaller units (operable units) by geography, pathways of contamination, or type of remedy. A ROD typically lays out the remedy for one operable unit at a site, and it contains the cost estimate for implementing the remedy. According to EPA guidance, EPA develops the cost estimate in the ROD to be within an accuracy range of minus 30 to plus 50 percent of the actual costs. EPA may develop earlier estimates of construction costs, but as the site moves from the study phase into the remedial action phase, the level of project definition increases, thus allowing for a more accurate cost estimate. EPA may develop more refined cost estimates after the ROD. Because more information is available during remedial design and remedial action, the accuracy of these estimates is expected to be greater than the accuracy of the ROD estimates. According to GAO’s cost estimating and assessment guide, every cost estimate is uncertain because of assumptions that must be made about future projections, and cost estimates tend to become more certain as actual costs begin to replace earlier estimates. The selected remedy is then designed during remedial design and implemented with remedial actions, when actual cleanup of the site begins. 
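One way to read the ROD accuracy range described above: if the estimate is expected to fall within minus 30 to plus 50 percent of actual costs, then a given ROD estimate implies a band of actual costs consistent with it. The sketch below inverts the range for a hypothetical $30 million estimate; both the dollar figure and this inversion of the range are illustrative, not part of EPA guidance.

```python
def actual_cost_band(rod_estimate):
    """Invert the ROD accuracy range (estimate within -30%/+50% of actual
    costs) to get the band of actual costs consistent with an estimate.
    An estimate at +50% of actual implies actual = estimate / 1.5; an
    estimate at -30% implies actual = estimate / 0.7."""
    return (rod_estimate / 1.5, rod_estimate / 0.7)

# Hypothetical $30 million ROD estimate:
low, high = actual_cost_band(30e6)
print(round(low / 1e6, 1), round(high / 1e6, 1))  # 20.0 42.9
```

The asymmetry matters: a $30 million ROD estimate is consistent with actual costs anywhere from roughly $20 million to almost $43 million, which helps explain why later, more refined estimates routinely diverge from the ROD figure.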
When all physical construction at a site is complete, all immediate threats have been addressed, and all long-term threats are under control, EPA generally considers the site to be construction complete. Most sites then enter the operation and maintenance phase, in which the responsible party or the state maintains the remedy, and EPA ensures that the remedy continues to protect human health and the environment. However, for certain remedial actions, additional work at a site may be required after construction is completed, such as continuing groundwater restoration efforts or monitoring the site to ensure that the remedy remains protective. For EPA-lead remedial actions that have a groundwater or surface water restoration component, EPA funds the necessary activities—known as long-term response actions—for up to 10 years before turning over these responsibilities to the state. Eventually, when EPA and the state determine that no further site response is needed, EPA may delete the site from the NPL. Although most sites progress through the cleanup process in roughly the same way, EPA may take different approaches based on site-specific conditions. In fiscal year 2009, EPA received about $1.29 billion for the Superfund program, of which $605 million was for the remedial program. Of this amount, EPA allocated $125 million for preconstruction activities—remedial investigation, feasibility study, and remedial design activities—as well as other nonconstruction activities, including conducting prelisting activities through cooperative agreements with states, oversight of all responsible party-lead activities, and providing general support and management. In addition, EPA allocated $267 million for remedial actions.
EPA allocated the remaining $213 million for headquarters and regional personnel to implement and oversee the overall program; for site management; and for providing technical and analytical support for all non-NPL sites as well as proposed, final, and deleted NPL sites. In addition to remedial actions, the Superfund program conducts removal actions at both NPL and non-NPL sites that are usually short-term cleanups for sites that pose immediate threats to human health or the environment. Examples of removal actions include excavating contaminated soil, erecting a security fence, or taking abandoned drums to a proper disposal facility to prevent the release of hazardous substances into the environment. CERCLA limits EPA removal actions paid for with trust fund money to actions lasting 12 months or less and costing $2 million or less, although these limits can be exceeded if EPA determines that conditions for such an exemption are met. To document and communicate environmental progress toward cleaning up Superfund sites, EPA adopted a human exposure indicator in fiscal year 2002. The indicator was applied to Superfund to communicate progress made in protecting human health through site cleanup activities. In addition, EPA uses the indicator in its annual Government Performance and Results Act reporting. Specifically, on an annual basis, EPA reports the number of Superfund sites at which human exposure was controlled during the most recent fiscal year. EPA identifies a site as having unacceptable human exposure when data indicate that the level of contamination and the frequency or duration of human exposure associated with certain pathways—or routes of exposure—at the site present unacceptable risks to humans. EPA assesses human exposure on a site-wide basis; therefore, if any part of a Superfund site has unacceptable human exposure, EPA classifies the whole site as such. 
If sufficient and reliable information is not yet available to determine whether a site has unacceptable human exposure, the site is classified as having insufficient data to determine whether there is unacceptable human exposure, or “unknown.” Threats to human health and the environment may be present in the four pathways scored on the HRS—groundwater, surface water, soil, and (outdoor) air; however, contaminants may also migrate from groundwater or soil and seep into the air of homes or commercial buildings. This movement of contaminants—typically from petroleum or chlorinated solvents—to indoor air is known as vapor intrusion and has been the subject of increasing research and scientific discussion since the 1980s. Intrusion of contaminated gases into indoor air may lead to fire; explosion; and acute, intermediate, and chronic health effects. Though EPA conducts investigations of vapor intrusion for some sites on the NPL, the HRS does not include a separate pathway for scoring vapor intrusion threats. At over 60 percent of the 75 nonfederal NPL sites with unacceptable human exposure, all or more than half of the work remains to complete remedial construction, as is the case with over 60 percent of the 164 nonfederal NPL sites with unknown human exposure, according to EPA regional officials’ responses to our survey. Moreover, while EPA has expended a total of $3 billion on the 75 sites with unacceptable exposure, EPA headquarters and regional officials told us that some of these sites have not received sufficient funding for cleanup to proceed in the most efficient manner. At 49 of the 75 nonfederal NPL sites that EPA has identified as having unacceptable human exposure, all or more than half of the work remains to complete remedial construction, according to EPA regional officials’ responses to our survey (see fig. 4). 
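The site-wide classification rule for the human exposure indicator can be sketched as a simple precedence check. The rule that a known unacceptable pathway outranks missing data is our reading of the text, and the pathway statuses below are invented examples.

```python
# Minimal sketch of EPA's site-wide human exposure classification:
# any pathway with unacceptable exposure taints the whole site; otherwise,
# any pathway lacking sufficient data makes the site "unknown"; only when
# every pathway is controlled is the site's exposure "under control".
def classify_site(pathways):
    """Each element of `pathways` is 'unacceptable', 'unknown', or
    'controlled'; returns the site-wide status."""
    if any(p == "unacceptable" for p in pathways):
        return "unacceptable"   # one bad pathway classifies the whole site
    if any(p == "unknown" for p in pathways):
        return "unknown"        # insufficient data somewhere on the site
    return "under control"

print(classify_site(["controlled", "unacceptable", "unknown"]))  # unacceptable
print(classify_site(["controlled", "unknown"]))                  # unknown
```

This site-wide aggregation is why a single uninvestigated pathway, such as vapor intrusion, can keep an otherwise construction-complete site in the "unknown" category.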
At each of the 15 sites where none of the remedial construction work has been completed, EPA or a responsible party has conducted at least one interim cleanup action, such as a removal, and has initiated or completed a remedial investigation; however, all of the construction work remains, and EPA has determined that human exposure risks continue at these sites. In addition, at the remaining 60 sites where some construction actions have been taken, EPA has determined that human exposure risks have not yet been controlled. For example, at the Lava Cap Mine site in California, EPA has eliminated the exposure to mine tailings—finely ground waste created in the ore extraction process—in the mine area by capping it; however, recreational users of the area downstream of the mine can be exposed to mine tailings in that area, potentially leading to incidental ingestion of arsenic in soil, inhalation of contaminated airborne particulates, or skin contact with contaminated sediments along the shoreline. According to EPA Region 9 officials, EPA is currently investigating methods to stabilize and cover these mine tailings to eliminate the risk of human exposure. According to EPA regional officials’ responses to our survey, EPA has plans to control human exposure at all of the 75 sites with unacceptable human exposure; however, our survey results also show that EPA regional officials expect 41 of the sites to continue to have unacceptable exposure until fiscal year 2015 or later. According to an EPA headquarters official responsible for overseeing the human exposure indicator, some sites will continue to pose unacceptable human exposure for a long time because of the type of contamination and cleanup required. For example, it may take several years for the risk of human exposure to be eliminated at the Sheboygan Harbor & River site in Wisconsin—which was listed on the NPL in fiscal year 1986—because of high PCB levels in fish. 
The site currently poses a risk of human ingestion of PCB and heavy metals, including arsenic, chromium, copper, lead, and zinc, in contaminated fish, which can cause health problems including cancer, liver disease, and problems with the immune and endocrine systems. There is a fish advisory in place, signs are posted in the area warning against fish consumption, and, for the last several years, there has been ongoing removal of sediment and soil contaminated with PCB and heavy metals. However, according to EPA, exposure to PCB may continue even after a significant amount of PCB is removed from the river, because it takes several years for PCB levels in fish to decline, and people continue to consume fish from the area. According to EPA headquarters officials, approximately one-third of the sites with unacceptable human exposure have been identified as such because of ongoing consumption of contaminated fish despite actions having been taken to prevent exposure. Appendix III contains a detailed description of the risks present at the 75 sites. Like the sites with unacceptable human exposure, over 60 percent, or 105, of the 164 sites with unknown human exposure have all or most of the work to complete remedial construction remaining, according to EPA regional officials’ responses to our survey (see fig. 5). The majority of the 83 sites with unknown human exposure that have all of the work remaining to complete construction are in the remedial investigation phase, which is when EPA usually determines a site’s human exposure status, according to EPA guidance. EPA may also designate a site as having unknown human exposure during the construction phase of work, or after a site has met the construction complete milestone, if new information suggests that there may be risk at the site, or if an investigation is under way to assess a potential exposure pathway not previously analyzed, such as vapor intrusion. 
For example, the Waite Park Wells site in Minnesota reached construction complete status in 1999 but, during a review of the continuing effectiveness of the remedy performed in 2005, EPA found potential exposure from vapor intrusion to businesses from trichloroethylene (TCE) in groundwater. EPA Region 5 officials told us that EPA designated this site as having an unknown risk of human exposure until it evaluates a vapor intrusion assessment conducted by responsible parties. EPA expects to determine whether there is unacceptable human exposure at most of the 164 sites by fiscal year 2012. According to EPA regional officials’ responses to our survey, human exposure risks at the 164 sites may be posed by a variety of contaminants in various media, including soil, sediment, and fish. Beginning around 2003, EPA regions began performing investigations for vapor intrusion, which they saw as an emerging problem, according to EPA officials. Currently, according to EPA regional officials’ responses to our survey, 60 of the 164 sites may pose risks because of vapor intrusion. At the Lusher Street Groundwater Contamination site in Indiana, for example, EPA has not yet evaluated the vapor intrusion pathway, but officials said that they know the site could pose a vapor intrusion risk to human health because a contaminated groundwater plume is present in a mixed residential and industrial area. From the inception of the Superfund program through the end of fiscal year 2009, EPA expended a total of $3 billion in constant 2009 dollars on the 75 sites with unacceptable exposure; however, in managing limited resources, EPA regional officials noted that some of these sites did not receive funding to clean up the sites in the most time and cost efficient manner.
According to EPA regional officials’ responses to our survey, the estimated cost of completing construction at 36 of the 75 sites with unacceptable exposure at which EPA is funding remedial actions will be about $3.9 billion. EPA regional officials said that they could not provide cost estimates for an additional 7 sites because the sites are too early in the cleanup phase. For the remaining 32 sites, these officials do not expect EPA to incur remedial construction costs: they expect responsible parties to fully fund the remedial actions at 26 sites, have identified 4 sites as construction complete, and note that EPA has already fully funded the remedial actions at 2 sites with Recovery Act funds. In addition, EPA expended $1.2 billion in constant 2009 dollars on the 164 sites where exposure is unknown. At 48 of the 164 sites with unknown human exposure, EPA regional officials estimated that the cost to complete construction will be about $601 million. These officials were not able to provide cost estimates for an additional 32 sites because the sites are too early in the cleanup phase. For the remaining 84 sites, these officials do not expect EPA to incur remedial construction costs because they expect responsible parties to fully fund the remedial actions at 52 sites and have identified 32 sites as construction complete. Even though EPA officials noted that EPA does not use the human exposure indicator to determine risk or to prioritize sites for cleanup, average annual per-site expenditures for sites with unacceptable exposure have been considerably higher than for sites with unknown exposure or for sites where EPA has determined that human exposure is under control. For example, in fiscal year 2009, the average per-site expenditure for sites with unacceptable human exposure was $3.0 million, compared with $0.5 million for sites with unknown exposure and $0.2 million for sites where EPA has determined that human exposure is under control.
Furthermore, this difference has been increasing over time, as shown in figure 6. One reason that average per-site expenditures are higher for sites with unacceptable human exposure than for other sites is that a larger percentage of these sites are megasites—sites with actual or expected total cleanup costs, including removal and remedial action costs, that are expected to amount to $50 million or more. While 47 percent of the sites with unacceptable human exposure are megasites, 13 percent of sites with unknown human exposure are megasites, and 8 percent of sites where human exposure is controlled are megasites. Despite the relatively high level of expenditures at sites with unacceptable human exposure, EPA regional and headquarters officials told us that construction has not been conducted in the most time and cost efficient manner at some of these sites because EPA had to balance annual resources among various program activities. For example, EPA officials told us that at the Bunker Hill Mining site in Idaho—where people can be exposed to metals in soil and sediments and where children’s blood lead levels have been found to be above Centers for Disease Control and Prevention levels of concern—the pace of the cleanup had to be slowed down because of preconstruction and remedial action funding limitations. The site received between $13 million and $19 million per year from fiscal years 2003 to 2009, when, according to an EPA regional official, it could have used $30 million per year to clean up the site and control human exposure in the most efficient manner. Similarly, at the Eureka Mills site in Utah, people who are in contact with soil and dust contaminated with lead from mining activities face human health risks. 
From 2003 to 2008, the site received $6.6 million to $10 million a year for construction, even though regional officials said that an additional $3 to $5 million per year would have allowed them to complete construction at the site 3 to 4 years earlier at a reduced overall cost. However, with the addition of $26.5 million for the Eureka Mills site in fiscal year 2009 from Recovery Act funding, officials said that they will be able to complete construction at least 1 year earlier than planned and control human exposure at the site. In response to our survey, EPA regional officials noted that they are using Recovery Act funding to partially or completely control the unacceptable human exposure at 20 NPL sites. However, despite EPA’s use of Recovery Act funds to control human exposure at these sites, EPA officials noted that EPA’s constrained funding had delayed the control of human exposure at some sites. EPA’s annual costs for conducting remedial construction at nonfederal NPL sites that are not yet construction complete from fiscal years 2010 through 2014—as estimated by EPA regional officials—exceed recent annual funding allocations for these activities. In addition, these estimates do not include costs for all remedial actions at all sites or costs for sites that have a responsible party who is currently funding remedial actions but may be unable to do so in the future. Furthermore, according to EPA officials, experience has shown that EPA’s actual costs are almost always higher than its cost estimates. EPA’s annual costs to conduct remedial construction in the most efficient manner at nonfederal NPL sites for fiscal years 2010 through 2014 may range from $335 million to $681 million, according to EPA regional officials’ estimates (see table 1). These estimates include EPA’s costs to conduct remedial actions at 142 of the 416 nonfederal sites that are not construction complete. 
Of the remaining 274 sites, EPA regional officials were unable to provide cost estimates for 57 sites, expect responsible parties to fully fund remedial actions at 206 sites, and do not expect to incur additional costs to complete construction at 11 sites because these sites are already fully funded. These annual cost estimates for remedial construction at these sites exceed past annual funding allocations for such actions. For example, EPA regional officials’ cost estimates for remedial construction for the next 2 years—fiscal years 2011 to 2012—are $253 million to $414 million greater than the $267 million in annual funding that EPA allocated for remedial actions in fiscal year 2009. From fiscal years 2000 to 2009, EPA allocated $220 million to $267 million in annual funding for remedial actions. According to EPA headquarters officials, however, funds from additional sources—such as prior year funds, settlements with responsible parties, and state cost share agreements—may also be available to fund remedial construction from year to year. While the amount of funding available through these sources may vary substantially from year to year, according to EPA headquarters officials, approximately $123 million to $199 million was available from additional sources for remedial actions in fiscal years 2007 to 2009. Our analysis indicates that, even if this level of funding were available in future years, it would not supplement EPA’s annual funding allocation enough to cover the estimated costs for conducting remedial construction in fiscal years 2011 and 2012. Therefore, despite funding from additional sources, EPA’s estimated costs to conduct remedial construction will exceed available funds if funding for remedial construction remains constant. EPA regional officials’ cost estimates are likely understated.
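The shortfall arithmetic behind EPA regional officials' estimates can be laid out explicitly. All dollar figures (in millions) come from the text; the "best case" framing, pairing the lowest estimated need with the highest historical level of additional funding, is ours.

```python
# Figures from the report, in $ millions:
ALLOCATION = 267                  # fiscal year 2009 remedial-action allocation
GAP_LOW, GAP_HIGH = 253, 414      # FY2011-2012 estimates exceed allocation by this much
EXTRA_LOW, EXTRA_HIGH = 123, 199  # historical funding from additional sources

need_low = ALLOCATION + GAP_LOW    # lowest estimated annual need
need_high = ALLOCATION + GAP_HIGH  # highest estimated annual need

# Even pairing the lowest need with the most generous additional funding,
# a shortfall remains if the base allocation stays constant:
best_case_shortfall = need_low - (ALLOCATION + EXTRA_HIGH)
print(need_low, need_high, best_case_shortfall)  # 520 681 54
```

Even this best case leaves a gap of over $50 million, and these officials' estimates exclude dozens of sites, so the actual gap is likely larger.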
These officials were not able to provide annual construction cost estimates for 57 of the 416 nonfederal sites that are not yet construction complete because they are in the early stages of the remedial process, and EPA does not yet know the extent of the contamination and/or has not chosen a cleanup remedy for them. For example, EPA Region 9 officials said that, as of October 2009, the feasibility study for the Alark Hard Chrome site in California was just beginning and that no cost estimates were available for possible remedies. For some additional sites, EPA regional officials were unable to provide cost estimates for construction at some of the operable units at the site. For example, EPA Region 3 officials were able to provide a partial cost estimate for the Crossley Farm site in Pennsylvania and noted that this estimate did not include additional remedial construction funding that will be necessary for operable units that have construction work remaining. Finally, EPA regional officials’ estimates did not include costs for conducting long-term response actions—such as operating groundwater treatment facilities—that are considered part of the remedial action or for performing 5-year site reviews, both of which EPA funds from its remedial action allocation and which would, therefore, increase the cost estimate for remedial actions. EPA’s estimates also do not include construction costs for sites that currently have a potentially responsible party that may be unable to fund the cleanup. EPA officials told us that EPA has identified one or more potentially responsible parties at 206 of the 416 nonfederal NPL sites that are not yet construction complete. However, officials also said that they were slightly or not at all confident that a responsible party would fund future remedial actions at 27 of these sites. 
For example, EPA officials explained that the responsible parties at one site in EPA Region 4 entered into bankruptcy and that EPA is not at all confident that the responsible parties will be able to fund future remedial actions. While in some cases funds from a settlement agreement may be available for site cleanup, in several instances, EPA officials reported that responsible parties may be financially unable to perform the remedy or fund future cleanup. Without responsible parties to fund remediation costs at these sites, EPA is likely to bear the costs of future remedial actions. EPA headquarters and regional officials also told us that EPA’s actual costs for construction are typically higher than its cost estimates because of a number of uncertainties they may encounter. Most importantly, according to EPA officials, the extent of contamination at a site may be greater than EPA expected when it developed the cost estimate, which can expand the scope of work and remedies needed and increase overall construction costs. For example, we recently reported that at the Federal Creosote Superfund site in New Jersey, the greater-than-expected quantities of contaminated material contributed to a $111 million increase in construction costs over EPA’s estimates. According to EPA officials, it is common for EPA to remove more soil than originally estimated at Superfund sites because of the uncertainty inherent in using soil samples to estimate the extent of underground contamination. Another factor that can increase construction costs is change in acceptable contaminant levels. For example, at the Arsenic Trioxide site in North Dakota, additional cleanup was necessary after the site had already been deleted from the NPL because EPA subsequently reduced the maximum contaminant level for arsenic in drinking water, which had the effect of changing the level at which the cleanup was considered protective of public health. 
In addition, according to an EPA official, the actual costs of goods and services—such as energy, construction materials, and labor—may increase above estimated prices, causing an increase in the actual construction cost. At the Escambia Woods site in Florida, for example, inclement weather, identification of additional contamination, and other unforeseen occurrences all contributed to increased cleanup costs of about $2.2 million. EPA officials noted that there may be some instances when construction costs are overestimated because, for example, there is less contamination at a site than previously thought or the prices of goods and services decrease; however, the officials commented that this is rare. Because of the many uncertainties in cost estimating, EPA officials told us that actual construction costs never equal the cost estimated in the ROD. According to EPA guidance, because of the inherent uncertainty in estimating the extent of site contamination from early investigation data, cost estimates prepared during the remedial investigation/feasibility phase are based on a conceptual rather than detailed idea of the remedial action under consideration. The guidance states that these estimates are, therefore, intended to provide sufficient information for EPA to compare alternatives on an “order of magnitude” basis, rather than an exact estimate of a particular remedy’s costs. According to EPA headquarters officials, these estimates could vary by 100 percent from the actual costs of implementing a remedy. As EPA’s estimates become more refined during the remedial design phase, estimates that vary from actual costs by 100 percent are not common; however, variation by 20 to 40 percent is common, according to EPA headquarters officials.
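The narrowing of estimate accuracy across phases can be illustrated with the variation figures EPA officials cited. The $10 million actual cost is invented, and the symmetric plus-or-minus band is a simplification (the ROD-stage range quoted earlier in the report is asymmetric).

```python
# Illustrative accuracy bands around a hypothetical $10 million actual cost,
# using the variation figures cited by EPA headquarters officials:
# up to 100 percent at the study phase, 20 to 40 percent at remedial design.
ACTUAL = 10.0  # $ millions, invented for illustration

def band(actual, variation):
    """Range of estimates within +/- `variation` (a fraction) of the actual
    cost, rounded to whole-dollar precision in millions."""
    return (round(actual * (1 - variation), 2), round(actual * (1 + variation), 2))

print(band(ACTUAL, 1.0))  # study-phase band: (0.0, 20.0)
print(band(ACTUAL, 0.4))  # design-phase band: (6.0, 14.0)
```

The point of the sketch is the order-of-magnitude contrast: an early estimate may be no better than a factor-of-two guess, while a design-phase estimate typically lands within a few million dollars of the final cost at this scale.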
The frequent occurrence of additional unexpected costs further increases the likelihood that EPA’s costs for remedial actions over the next several years will exceed recent funding levels for these activities, and EPA may be forced to choose between funding construction at some sites in the most efficient manner or funding construction at more sites less efficiently. EPA headquarters allocates funds to the regions for preconstruction activities—remedial investigations, feasibility studies, and remedial design activities—which the regions then distribute among sites. For remedial action funding, headquarters works with the regions to allocate funds to sites. According to EPA headquarters and regional officials, the funds for both types of activities have not been sufficient to clean up some sites in the most time and cost efficient manner. EPA headquarters determines the amount of resources that the Superfund program will allocate to the regions for preconstruction activities by using a model that distributes available funding based on a combination of historical allocations and a work-based scoring system that scores each region based on projects planned for the upcoming year. Regions then prioritize sites to receive funding for preconstruction activities primarily by considering the human exposure risks present at sites while, at the same time, attempting to provide some funding for all their sites to keep them progressing toward the construction phase, according to EPA regional officials. According to EPA’s Superfund Program Implementation Manual, at the initiation of the planning process, headquarters provides general projections of funding for preconstruction activities that will be available to the regions. On the basis of these projections, each region then develops a plan for allocating these funds to sites.
Before finalizing this plan, each region holds planning discussions with headquarters to discuss actions that can be accomplished during the year and alters its plans, as needed, based on refined projections of available funding from headquarters. To allocate funding for remedial actions, EPA headquarters, in consultation with the regions, determines funding priorities on a site-by-site basis. EPA's Superfund Program Implementation Manual states that sites with ongoing construction receive priority for funding over new construction work. Headquarters develops the initial plan for ongoing construction based on regional funding requests, projections of available funding, and discussions with regional officials. As part of these discussions, EPA headquarters and regional officials determine whether and how to incrementally fund remedial actions, according to EPA headquarters officials. According to an EPA headquarters official, headquarters' goal in allocating funds is to ensure that all sites with ongoing construction continue to progress toward construction completion while also funding some new construction projects. EPA officials explained that demobilizing and remobilizing equipment and infrastructure at a site once construction has begun is costly and an inefficient use of resources. Therefore, if EPA cannot fully fund ongoing construction at a site, the agency attempts to fund the site at a level that maintains at least a minimal level of construction to avoid demobilizing equipment and infrastructure. In addition, EPA headquarters works with the regions to adjust the amount of funding provided to sites throughout the year as cleanup circumstances change. For new construction, EPA's National Risk-Based Priority Panel—comprising EPA regional and headquarters program experts—evaluates the risk with respect to human health and the environment to establish funding priorities for all new construction projects in the remedial program.
To evaluate sites, the panel uses five criteria and associated weighting factors to compare projects. These criteria are the extent of risks to the exposed human population; contaminant stability; contaminant characteristics; threat to a significant environment; and program management considerations, such as state involvement and high-profile projects. Using the priority ranking process ensures that funding decisions for new projects are based on the use of common evaluation criteria that emphasize risk to human health and the environment. In addition to annual funding, EPA's Superfund program received $600 million in Recovery Act funds in fiscal year 2009 and allocated $582 million for remedial cleanup activities. EPA officials explained that EPA prioritized these Recovery Act funds in a manner similar to that for annual remedial action funding, with funds targeted first toward sites with ongoing construction and then toward new projects that were construction-ready. According to EPA officials, when identifying sites to receive Recovery Act funding, EPA also considered additional factors, such as the jobs that could be created. However, EPA officials noted that identifying the number of jobs created was difficult and that the criteria in the Office of Management and Budget's initial guidance for disbursing Recovery Act funds were not clear on how to calculate the number of jobs created. Therefore, EPA officials said that they used the ability to spend funds quickly as a surrogate for creating and retaining jobs when prioritizing sites to receive Recovery Act funds. Furthermore, EPA officials noted that it is difficult to quantify the number of jobs created because, while contractors involved in site remediation reported data on jobs created, subcontractors did not. EPA ultimately chose 51 sites to receive Recovery Act funding.
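The panel's weighted comparison of candidate projects can be sketched in a few lines. The five criterion names below come from this report, but the weights, the 1-to-5 rating scale, and the two candidate sites are hypothetical placeholders: the actual weighting factors and scoring scale the panel uses are not specified here.

```python
# Hypothetical weights for the panel's five criteria; the actual
# weighting factors are not given in this report.
WEIGHTS = {
    "human_population_risk": 0.35,
    "contaminant_stability": 0.20,
    "contaminant_characteristics": 0.20,
    "significant_environment_threat": 0.15,
    "program_management": 0.10,
}

def project_score(ratings):
    """Combine per-criterion ratings (here, 1-5) into one weighted score."""
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

def rank_projects(projects):
    """Order candidate projects from highest to lowest weighted score."""
    return sorted(projects, key=lambda p: project_score(p["ratings"]), reverse=True)

# Two hypothetical new-construction projects awaiting funding.
candidates = [
    {"name": "Site A", "ratings": {"human_population_risk": 5,
                                   "contaminant_stability": 3,
                                   "contaminant_characteristics": 4,
                                   "significant_environment_threat": 2,
                                   "program_management": 3}},
    {"name": "Site B", "ratings": {"human_population_risk": 2,
                                   "contaminant_stability": 4,
                                   "contaminant_characteristics": 3,
                                   "significant_environment_threat": 5,
                                   "program_management": 4}},
]

for p in rank_projects(candidates):
    print(p["name"], round(project_score(p["ratings"]), 2))
```

Whatever the panel's real weights are, the effect is the same as in this sketch: common criteria, applied with fixed weights, produce a single comparable score per project, so funding decisions emphasize risk to human health and the environment rather than region-by-region judgment.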
According to EPA, of the 51 sites, 25 received funding for ongoing construction, 24 received funding for new construction, and 2 received funding for both ongoing and new construction. EPA officials reported that the use of Recovery Act funding will decrease the overall cleanup costs at some sites and accelerate the pace of cleanup at a majority of the sites receiving this funding. At the Gilt Edge Mine site in South Dakota, for example, EPA officials noted that construction for a portion of the cleanup project should be completed 1 year ahead of schedule because EPA allocated $3.5 million in Recovery Act funds to the site. Appendix IV provides additional details about sites that received Recovery Act funding. EPA officials from several regions told us that their regions currently receive about half or less than half of the funding they could use for preconstruction activities. For example, Region 2 officials said that their region currently receives about half the preconstruction funding it could use and that officials try to be flexible and creative in using the funding the region does receive to conduct work in the most efficient manner possible. Several EPA officials noted that limited funding available for preconstruction activities not only extends the time it takes to prepare a site for construction but can also ultimately increase the overall costs of cleaning up the site. According to our survey, which collected data on fiscal years 2000 through 2009, most regions have sites that have experienced delays in the preconstruction phase because of insufficient funding. For example, officials in Region 3 noted that the Jackson Ceramics site located in Pennsylvania was delayed in fiscal year 2005 because the region considered it lower risk than other sites when prioritizing sites for preconstruction funding and, therefore, allocated it no funds.
Instead, Region 3 funded other sites that posed a higher risk or were farther along in the preconstruction phase. In addition, because of limited preconstruction funding, Region 10 did not fully fund preconstruction activities at the Bunker Hill Mining site in Idaho from fiscal year 2003 to fiscal year 2009, which extended work schedules and stopped some design work. Region 10 officials explained that they reduced preconstruction funding at this site so that the region could allocate funding across more of its sites. According to EPA officials, sites with ongoing construction, like the sites with unacceptable human exposure discussed previously, have experienced delays caused by limited funding. Since fiscal year 2000, most regions have experienced delays because of insufficient funding at one or more sites with ongoing construction, according to responses to our survey. For example, the Oronogo-Duenweg Mining Belt site in Missouri received $10 million a year in fiscal years 2008 and 2009 instead of the $15 million that regional officials said they could have used to clean up the site in the most efficient manner. These officials reported that the limited funding has delayed the completion of the remedial action and resulted in significant cost increases. In addition, the New Bedford Harbor site in Massachusetts has received $15 million per year instead of the $50 to $80 million per year that a regional official said the region could use to complete construction in the most efficient manner. According to several EPA regional officials, delays in funding for sites with ongoing construction increase the length of time it takes to clean up a site; the total cost of cleanup; and, in some cases, the length of time populations are exposed to contaminants. In addition, funding limitations have caused delays at sites that were ready to begin new construction.
According to EPA Superfund Accomplishment Reports, between fiscal years 2004 and 2008, 54 sites, or over one-third of all sites ready for new construction funding, were not funded in the year that they were ready to begin construction, and some sites were not funded for several years after they were construction-ready. For example, in Region 4, funding limitations caused a 2-year delay at the Sigmon’s Septic Tank Service site in North Carolina—a site with potential exposure risks to residents and trespassers from contaminated soil—even though it was ready to begin construction in October 2007. EPA allocated Recovery Act funding to this site in September 2009, which allowed EPA to remove the contaminated soil, eliminating the threat of direct contact to nearby residents and trespassers at the site. According to EPA headquarters officials, 25 sites needing new construction funding in fiscal year 2009 would most likely not have received funding had Recovery Act funding not been available. A representative from the Association of State and Territorial Solid Waste Management Officials pointed to the Superfund program’s ability to quickly absorb about $582 million in Recovery Act funds as evidence of limited funding for construction activities. Limited funding can also impact state cleanup programs, which sometimes take the lead in cleaning up seriously contaminated sites that are not listed on the NPL, according to EPA and state officials. A study conducted by the Association of State and Territorial Solid Waste Management Officials found that funding for prelisting activities offers benefits beyond the Superfund program by providing valuable data, such as the data obtained during prelisting site assessments and investigations, which help state cleanup programs remediate sites that are not listed on the NPL. 
Several state officials said that, because their states have received less funding from EPA for these investigations than in the past, the number of assessments they have been able to perform has been limited. Most of the EPA regional officials and state officials we interviewed told us they expect the number of sites listed on the NPL over the next 5 years will be greater than the number listed in the past 5 years. However, neither EPA regional officials nor state officials were able to provide cost estimates for many of the sites they expect will be added to the NPL. EPA regional officials estimate that from 101 to 125 sites—an average of 20 to 25 sites per year—will be added to the NPL over the next 5 years. This is higher than the 79 sites—an average of about 16 sites per year—added from fiscal years 2005 to 2009. Overall, our analysis of these estimates shows that listings could increase by 28 to 58 percent. As table 2 shows, all EPA regions expect that the number of sites added to the NPL from their region could increase over the next 5 years. According to EPA headquarters officials, the number of sites proposed for listing over time has decreased as a result of the expanded use of other cleanup programs, including state programs. Most of the officials who expect an increase in listings noted that current economic conditions—which can limit states' abilities to clean up sites under their own programs and responsible parties' abilities to pay for cleanup—are a contributing factor to the expected increase in listed sites. Most of the officials we spoke with in the 10 selected states also expect that the number of sites listed from their states over the next 5 years could increase above the number of sites listed over the past 5 years, as table 3 shows.
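The projected 28 to 58 percent increase in listings cited above follows from simple arithmetic on the regional estimates; the short calculation below reproduces it using only the figures in this report.

```python
def pct_increase(past, projected):
    """Percent change from past listings to projected listings."""
    return (projected - past) / past * 100

past_listings = 79    # sites added to the NPL, fiscal years 2005-2009
low, high = 101, 125  # regional officials' projections for the next 5 years

print(f"Low projection:  {pct_increase(past_listings, low):.0f} percent increase")
print(f"High projection: {pct_increase(past_listings, high):.0f} percent increase")
# Annual pace: about 16 sites per year historically vs. 20 to 25 projected.
```

The low and high projections work out to roughly a 28 percent and 58 percent increase, respectively, matching the range reported from our analysis.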
For example, officials from the Michigan Department of Natural Resources and Environment said that they expect EPA to add five sites from Michigan to the NPL over the next 5 years, even though no sites have been listed from their state since 1996. These officials noted that the Superfund program has traditionally been a program of last resort, but declining resources in their state's cleanup program have renewed Michigan's interest in cleaning up sites through the federal program. Similarly, while EPA did not list any sites from Maine over the past 5 years, officials from the Maine Department of Environmental Protection expect that one to two sites may be added to the NPL over the next 5 years. An official explained that potential bankruptcies by responsible parties at one site may require that the state seek assistance in cleaning up the site through the federal Superfund program. EPA and state officials noted that the number of sites actually listed over the next 5 years could vary from their projections because of a number of uncertainties. For example, all the EPA regional officials we spoke with told us that economic conditions can affect the number of sites added to the NPL, and several of these officials told us that the number of sites listed from their region could increase above their projection if economic conditions do not improve. Many EPA regional officials noted that sites currently being cleaned up under state programs and by responsible parties may require assistance through the federal Superfund program if these groups face financial hardship, such as bankruptcy. In addition, some EPA and state officials identified EPA's policy for obtaining state concurrence for listing as a factor that could limit the number of sites added to the NPL if EPA is unable to obtain this concurrence.
Officials from several EPA regions noted that particular states are resistant to listing because of financial or political concerns, and a few EPA regional officials and state officials mentioned difficulty in obtaining state concurrence for some sites. In addition to the number of sites that could be listed, the number of sites eligible for the NPL could increase if EPA begins to assess, as a part of its listing process, the risk of vapor intrusion, in which vapors from subsurface hazardous substances migrate into homes and commercial properties. Although sites with vapor intrusion can pose considerable human health risks, EPA's HRS—the mechanism used to identify sites that qualify for NPL listing—does not currently recognize these risks; therefore, unless a site with vapor intrusion is listed on some other basis—such as groundwater contamination—EPA cannot clean up the site using remedial program funding. Many EPA regional officials and state officials noted that vapor intrusion is a concern, and several of these officials told us that they believe additional sites would be eligible for listing if assessments of vapor intrusion are included as part of the listing process. According to an EPA headquarters official, based on recent discussions with regional officials, up to 37 sites could be eligible for NPL listing if EPA includes vapor intrusion assessments as part of the listing process. However, according to EPA headquarters officials, EPA must first determine whether it can consider the vapor intrusion pathway under its existing HRS regulations, and it has not yet made such a determination. While these sites are not currently eligible for NPL listing, the EPA headquarters official noted that EPA is addressing vapor intrusion at 13 of these sites through its Superfund removal program; however, this official also told us that, when conducting removal actions, EPA is limited in its ability to fully remediate the source of contamination.
For example, according to an official from the Montana Department of Environmental Quality, preliminary data collected at the Billings PCE site—which the official noted is not eligible for NPL listing—indicated vapor intrusion in buildings, and EPA conducted a removal action at this site. However, according to this official, it is unclear whether the removal action was effective in mitigating the vapor intrusion contamination, and people may continue to be exposed. In addition, this official noted that Montana has many sites with vapor intrusion from contaminants such as chlorinated solvents, which can cause cancer. If EPA cannot list these sites on the NPL on another basis, EPA will not be able to fund remedial actions at these sites, and continued exposure to carcinogens is possible if other cleanup programs do not remove the risks at these sites. In November 2002, EPA issued draft guidance on evaluating vapor intrusion at NPL sites. However, a December 2009 EPA Inspector General's report found that EPA had not updated this guidance to reflect current science and recommended that EPA issue final guidance to establish current agency policy on the evaluation and mitigation of vapor intrusion risks. EPA headquarters officials told us that, in response to this report, EPA is beginning discussions to update the vapor intrusion guidance. Neither EPA regional officials nor state officials we contacted were able to provide cost estimates for many of the sites they expect to be added to the NPL over the next 5 years. Furthermore, when these officials were able to provide cost estimates, most of them were imprecise figures based on limited knowledge and best professional judgment.
For example, while New Jersey officials expect 15 to 25 sites to be added to the NPL from their state over the next 5 years, these officials noted that most of these sites are not expected to be megasites and the average cost of cleaning up most of the sites will probably be around $10 to $25 million. Officials also explained that they could not provide cost estimates for some of the sites, because the type and extent of contamination is not yet known. In addition, some officials based their 5-year projection on past listings and have not identified the actual sites that may be listed. For example, officials with the Virginia Department of Environmental Quality noted that one site in Virginia could be listed over the next 5 years, but the officials could not provide an estimated cost for cleaning up this site because it has not yet been identified. Therefore, it is impossible to accurately estimate what the cost may be to clean up these unknown sites. While EPA regional officials and state officials were not able to provide cost estimates for many of the sites they expect to be added to the NPL, we reported in July 2009 that the average amount EPA spent to clean up individual sites has increased in recent years. In that report, we found that individual site costs may have increased because the sites on the NPL now are more complex than in the past, construction costs have been rising, and EPA has not been able to identify as many responsible parties to fund site cleanups as in the past, leaving a higher share for EPA to fund. Congress enacted CERCLA to decrease the risk to human health and the environment posed by hazardous waste sites. However, some sites that EPA has identified as among the most seriously contaminated have involved long and costly cleanups, leading to protracted risks of human exposure to hazardous substances. 
Not long after the authority for the taxes that served as its main source of revenue expired in 1995, the Superfund trust fund started to diminish. Further, appropriated funding for cleanups has declined over time in real dollars, and the limited funding has caused delays in cleaning up some sites in recent years. The limited funding, coupled with increasing costs of cleanup, has forced EPA to choose between cleaning up a greater number of sites in a less time- and cost-efficient manner and cleaning up fewer sites more efficiently. Compounding these challenges, EPA may not be listing some sites that pose health risks that are serious enough that the sites should be considered for inclusion on the NPL. While EPA is assessing vapor intrusion contamination at listed NPL sites, EPA does not assess the relative risks posed by vapor intrusion when deciding which sites to include on the NPL. Because these risks are not included, states may be left to remediate those sites without federal assistance, and given states' constrained budgets, some states may not have the ability to clean up these sites on their own. Ultimately, assessing the relative risk of vapor intrusion could lead to an increase in the number of sites listed on the NPL and thereby place additional demands on already limited funds in the Superfund program. However, if these sites are not assessed and, if needed, listed on the NPL, some seriously contaminated hazardous waste sites with unacceptable human exposure may not otherwise be cleaned up. To better identify sites that may be added to the NPL, we recommend that the Administrator of EPA determine the extent to which EPA will consider vapor intrusion as part of the NPL listing process and how this will affect the number of sites listed in the future. We provided a draft copy of this report to EPA for review and comment.
We received a written response from the Assistant Administrator for the Office of Solid Waste and Emergency Response that also included comments from EPA’s Office of Enforcement and Compliance Assurance and Office of the Chief Financial Officer. EPA agreed with our recommendation and noted that, while the agency currently considers vapor intrusion impacts in both the remedial and removal programs, EPA is evaluating whether vapor intrusion needs to be more specifically addressed in the HRS model. EPA also noted that our report contains substantial useful information on very important subjects relating to the Superfund Program. In its comments, EPA also noted two issues that it believed require additional clarification. First, regarding its human exposure measure, EPA stated that it is important to highlight that people are not typically in danger of immediate harm at sites with unacceptable human exposure. EPA explained that, when acute health threats are identified, the agency takes immediate action to address the threats using its removal authority and, in other situations, works to characterize the risks at these sites. We agree with EPA and note in our report that EPA conducts removal actions at sites that pose immediate threats to human health or the environment. We also note that EPA has plans to control human exposure at all sites with unacceptable human exposure. EPA also commented that it does not use the term “unknown” when referring to sites that it has identified as having “insufficient data to determine human exposure control status.” EPA noted that this term does not reflect EPA’s efforts in characterizing a site to determine whether people are exposed at unsafe levels at a site. While we recognize that EPA may have collected and analyzed some data regarding a site’s human exposure status, EPA’s determination of insufficient data to determine human exposure control status shows that it has not yet made a determination about a site’s status. 
For this reason—and for ease of reporting—in this report we refer to EPA's determination of "insufficient data to determine human exposure control status" as "unknown" human exposure. Second, EPA recognized our report's finding that regional cost estimates are likely understated, since the estimates do not include funding for sites where a responsible party is currently funding remedial construction but may be unable to do so in the future. While EPA does not dispute that the regional cost estimates are likely understated, EPA believes that we should recognize that, in cases where responsible parties are conducting remedial construction under existing settlement agreements, those agreements require those parties to maintain financial assurance mechanisms to ensure that response actions are completed. In addition, EPA noted that it has made considerable efforts to ensure that these mechanisms are in place for existing and new response settlements, and these financial assurances would provide funding for cleanup under existing settlements. EPA also acknowledged, however, that for sites where potentially responsible parties are experiencing financial difficulty and have not yet reached a settlement with EPA, the parties may be unable to complete cleanups in the future, which would increase the burden on EPA's Superfund trust fund. We agree with EPA's assessment; however, in response to our survey, EPA regional officials told us that they were slightly or not at all confident that a responsible party would fund future remedial actions at 27 sites. We also state in our report that funds from a settlement agreement may be available for site cleanup at some sites, but regional officials told us that responsible parties may be financially unable to perform the remedy or fund future cleanup at other sites and, in those situations, EPA's trust fund may have to fund future cleanup. EPA's comments are presented in appendix V of this report.
EPA also provided technical comments on the draft report, which we incorporated, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, Administrator of EPA, Director of the Office of Management and Budget, and other interested parties. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. This appendix provides information on the scope of work and the methodology used to determine (1) the cleanup and funding status at currently listed nonfederal National Priorities List (NPL) sites with unacceptable or unknown human exposure; (2) what is known about the future costs to the Environmental Protection Agency (EPA) to complete remedial actions at nonfederal NPL sites that are not construction complete; (3) the process EPA uses to allocate remedial program funding; and (4) how many sites EPA and selected state officials expect will be added to the NPL over the next 5 years, and what they expect the future costs of cleaning up those sites will be. To determine the cleanup and funding status at the 75 sites with unacceptable human exposure and the 164 sites with unknown human exposure, we surveyed branch chiefs from each of the 10 EPA regions and received responses from October 2009 through November 2009. 
Through our survey, we obtained and analyzed information from each of the regions on the cleanup work that remains, human exposure risks, short-term planned actions to reduce exposure, long-term actions needed to eliminate exposure, expected time until human exposure risks will be under control or known, future estimated costs of remedial actions, whether American Recovery and Reinvestment Act (Recovery Act) funding will be used to control human exposure, delays due to constrained funding, and the impact of any limited funding at these sites. In addition, we obtained some limited documentation to support regional officials' cost estimates provided in the survey. We also analyzed data from EPA's Comprehensive Environmental Response, Compensation, and Liability Information System (CERCLIS) to determine when sites were listed, what cleanup actions have been taken at sites, which sites are construction complete, and which sites are megasites. We analyzed expenditure (outlay) data from EPA's Integrated Financial Management System for all final and deleted nonfederal NPL sites to determine how much EPA has spent on these sites. Moreover, to obtain additional information on human exposure risks, we searched EPA's Superfund Site Information System. We analyzed data on exposure risks from our survey and the Superfund Site Information System to determine the types of contaminants present, the types of contaminated media present, and the exposed populations at the sites. We also discussed the human exposure indicator with EPA headquarters and regional officials and reviewed EPA guidance on this indicator. To determine what is known about future costs to EPA to complete remedial actions at nonfederal NPL sites, we collected data through our survey of all EPA regions to obtain information about the 416 nonfederal sites that are not construction complete.
Through our survey, we obtained and analyzed data on annual and total estimated costs to EPA to conduct remedial actions in the most efficient manner, the entity responsible for funding cleanup, and EPA's confidence in responsible parties' ability to fund future remedial actions. In addition, we obtained information on total funding amounts that EPA provided for remedial actions for fiscal years 2000 to 2009 from EPA's Office of Solid Waste and Emergency Response. Finally, we discussed the cost estimating process with EPA headquarters and regional officials and reviewed EPA's guidance on cost estimating. To determine how EPA allocates remedial program funding, we interviewed EPA headquarters officials and regional officials from each of the 10 EPA regions about the process they use to prioritize sites to receive funding. We also discussed the process EPA used to allocate Recovery Act funding for the Superfund program with headquarters officials. Additionally, we reviewed EPA guidance and planning documents to identify the process for assigning annual and Recovery Act funding. In addition, through our survey, we obtained and analyzed information from each of the 10 EPA regions on the 51 sites receiving Recovery Act funding to determine how much funding each site received and whether the use of the funding is decreasing costs of cleanup and/or accelerating cleanup. We also obtained data through our survey on delays at sites with ongoing construction. Moreover, to identify sites that were delayed when ready to begin construction, we reviewed Superfund Accomplishment Reports from 2004 through 2008. In addition, we spoke with representatives from the Association of State and Territorial Solid Waste Management Officials to obtain their perspectives on delays in cleanup.
To determine how many sites EPA officials expect will be added to the NPL over the next 5 years and what they expect the cost of cleaning up those sites to be, we conducted semistructured telephone interviews of NPL coordinators in each EPA region. In addition, through these interviews, we obtained information about factors that have affected the number of listings in the past and factors that may affect the number of listings in the future. We also interviewed EPA headquarters officials to obtain their perspectives on future listings and factors—including vapor intrusion—that may affect listings. Finally, to compare the projected numbers of future listings with past listings, we analyzed data from EPA’s CERCLIS database on sites that have been listed to the NPL from each region. To determine how many sites selected state officials expect will be added to the NPL over the next 5 years and what they expect the cost of cleaning up those sites to be, we interviewed state hazardous waste agency officials from 10 states: California, Iowa, Kentucky, Louisiana, Maine, Michigan, Montana, New Jersey, Virginia, and Washington. We selected these states using a nonprobability sample, consisting of one state from each of EPA’s 10 regions and selected to ensure that we would obtain information from states that vary in the total number of sites listed over the past 10 years. We conducted telephone interviews with officials from each of these states to obtain information about potential site listings from their state, the costs to clean up those sites, and factors that may affect the number of sites actually listed over the next 5 years. We also discussed the site assessment process, listing process, and potential future listings with an official from the Association of State and Territorial Solid Waste Management Officials. 
Finally, we compared the projected numbers of future listings with past listings by analyzing data from EPA's CERCLIS database on sites that have been listed to the NPL from each of the 10 states. To assess the reliability of the data from EPA's databases used in this report, we analyzed related documentation, examined the data to identify obvious errors or inconsistencies, and worked with agency officials to identify data problems. To ensure the reliability of the data collected through our survey of the 10 EPA regions, we took a number of steps to reduce measurement error, nonresponse error, and respondent bias. These steps included conducting three pretests prior to distributing the survey to ensure that our questions were clear, precise, and consistently interpreted; reviewing responses to identify obvious errors or inconsistencies; and conducting follow-up interviews with officials to review and clarify responses. We determined the data to be sufficiently reliable for the purposes of this report. We conducted this performance audit from March 2009 to May 2010, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

We surveyed regional officials from EPA's 10 regions using all of the questions below as stated here. We provided these questions to the regions in an Excel spreadsheet that identified the sites pertaining to each question. We have grouped the questions below to list all questions that pertain to a particular universe of sites.

A. The following questions pertained to nonfederal NPL sites that were not construction complete, as of September 30, 2009.

1. Who is currently leading remedial actions at this site?
If the site has not yet had a remedial action, who is anticipated to lead remedial actions at this site?
a. Potentially Responsible Party(s)
b. EPA (Fund-lead)
c. State
d. Federal Facility
e. Mixed (Potentially Responsible Party & Fund-lead)
f. Mixed (Potentially Responsible Party & State)
g. Mixed (Potentially Responsible Party & Federal)

2. How confident are you that a viable Potentially Responsible Party(s) will fund future remedial actions at this site?
a. Very confident
b. Moderately confident
c. Slightly confident
d. Not at all confident/No viable Potentially Responsible Party(s)
e. Don’t know

3. What is the projected fiscal year the site will be construction complete? a. FY 2009 b. FY 2010 c. FY 2011 d. FY 2012 e. FY 2013 f. FY 2014 g. FY 2015 h. FY 2016 i. FY 2017 j. FY 2018 k. FY 2019 l. FY 2020 m. FY 2021 n. FY 2022 o. FY 2023 p. After FY 2023

4. Given what is currently known about contamination at this site, how much work remains to complete construction?
a. No work remains
b. Less than half the work remains
c. About half the work remains
d. More than half the work remains
e. All work remains
f. Unknown

5. Given what is currently known about contamination at this site, what is the approximate projected cost to EPA to complete construction in the most efficient manner? (in millions of dollars) If there is no cost to EPA because a Potentially Responsible Party is funding ALL remedial actions at a site, please check the box for No Cost to EPA. FY 2015 and beyond $

6. What information are you using to make these cost projections?

7. If you cannot provide cost projections for one or more years, please explain why they are not available.

8. Is a Long Term Remedial Action planned at this site? If yes, please provide the estimated total cost to EPA of the LTRA (in millions).

B. The following questions pertained to nonfederal NPL sites with unacceptable human exposure, as of September 30, 2009.

1.
In what fiscal year did EPA determine that there was an unacceptable risk of human exposure at this site? a. Prior to FY 1999 b. FY 1999 c. FY 2000 d. FY 2001 e. FY 2002 f. FY 2003 g. FY 2004 h. FY 2005 i. FY 2006 j. FY 2007 k. FY 2008 l. FY 2009 m. Unknown

2. In what fiscal year do you expect human exposure to be controlled at this site? a. FY 2009 b. FY 2010 c. FY 2011 d. FY 2012 e. FY 2013 f. FY 2014 g. FY 2015 h. After FY 2015 i. Unknown

3. Please provide a description of the actual or potential for human exposure. For each site, please describe the actual or potential for current human exposure, including the physical setting, populations affected, exposure pathways, contaminants, and health risks, if known.

4. What are EPA or other parties doing in the short-term to contain the risk of exposure or the actual human exposure?

5. What will EPA or other parties do in the long-term to eliminate the risk of exposure or the actual human exposure?

C. The following questions pertained to nonfederal NPL sites with unknown human exposure, as of September 30, 2009.

1. In what fiscal year did EPA determine that there was insufficient data to assess if there was an unacceptable risk of human exposure at this site? a. Prior to FY 1999 b. FY 1999 c. FY 2000 d. FY 2001 e. FY 2002 f. FY 2003 g. FY 2004 h. FY 2005 i. FY 2006 j. FY 2007 k. FY 2008 l. FY 2009 m. Unknown

2. In what fiscal year do you expect to know whether human exposure is under control at this site? a. FY 2009 b. FY 2010 c. FY 2011 d. FY 2012 e. FY 2013 f. FY 2014 g. FY 2015 h. After FY 2015 i. Unknown

3. Why is there insufficient data to determine whether human exposure is under control?

4. Please describe the potential for human exposure at this site.

D. The following questions pertained to nonfederal NPL sites (1) with unacceptable human exposure, (2) with unknown human exposure, and/or (3) that were not construction complete, as of September 30, 2009.

1.
From FY 2000 to FY 2009, for which years, if any, were pipeline activities delayed at this site due to constrained funding?

2. Did this site receive funding to begin construction in the fiscal year when it was ready? If not, for how many years did the site not receive construction funding?
a. Yes - site received funding when construction ready
b. No - construction delayed 1 year
c. No - construction delayed 2 years
d. No - construction delayed 3 years
e. No - construction delayed 4 years
f. No - construction delayed 5 years
g. No - construction delayed more than 5 years
h. N/A - Site has not reached construction phase
i. N/A - Construction funded by Potentially Responsible Party(s)
If cleanup was not funded when the site was construction-ready, please describe the impacts, if any, of the delay.

3. From FY 2000 to FY 2009, for which years, if any, were ongoing remedial actions delayed at the site due to constrained funding?
a. For years in which ongoing remedial actions were delayed, how much funding was needed to clean up the site in the most efficient manner? (in millions)
b. For years in which ongoing remedial actions were delayed, how much funding was received? (in millions)
c. Please explain the source of the funding numbers in your responses for parts (a) and (b).
d. What was the impact, if any, of the delay in cleanup?
e. Have delays increased the total cost of construction at this site? Please briefly explain your response.

E. The following questions pertained to nonfederal NPL sites which EPA designated to receive Recovery Act funds.

1. How much Recovery Act, or stimulus, funding has this site received or will it receive? Please respond in millions of dollars.

2. Will stimulus funds be used at this site for (1) beginning new construction at a site with no previous remedial actions, (2) beginning new construction at an operable unit at a site with previous remedial actions, and/or (3) supporting ongoing remedial actions?

3.
Would construction have been delayed in the absence of this stimulus funding? Please choose an option below and briefly explain your response. a. Yes b. No c. Unknown

4. Will stimulus funds accelerate the pace of construction? Please choose an option below and briefly explain your response. a. Yes b. No c. Unknown

5. Will the stimulus funds decrease the total cost of construction at this site? Please choose an option below and briefly explain your response. a. Yes b. No c. Unknown

6. Will the use of stimulus funding control human exposure at this site? Please choose an option below and briefly explain your response. a. Yes, completely b. Yes, partially c. No d. Not applicable e. Unknown

7. Please describe your region’s involvement, if any, in identifying sites to receive stimulus funding.

As of the end of fiscal year 2009, EPA identified 75 nonfederal sites on the NPL as having unacceptable human exposure. The human exposure at these 75 sites is due to a variety of contaminants that may be present in soil, groundwater, sediments, or other media at the site and may affect areas where people live, work, and recreate. As figure 7 shows, the most common medium of concern at sites with unacceptable human exposure is soil, a concern at 42 sites. The next most common media are fish or shellfish, sediment, and groundwater. Many sites had more than one medium of concern. For example, the Caldwell Trucking Co. site in New Jersey has four media of concern: soil, groundwater, surface water, and indoor air. At this site, groundwater contaminated with solvents is seeping onto surface soils and discharging into surface-water streams in a residential area, and the solvents may have migrated from groundwater to indoor air, posing a risk of vapor intrusion. The contaminants that most commonly cause unacceptable human exposure are lead, polychlorinated biphenyls (PCBs), arsenic, and metals other than lead.
Some sites contained several contaminants, present in different media. For example, the Atlantic Wood Industries, Inc. site in Virginia contains polycyclic aromatic hydrocarbons (PAHs), pentachlorophenol (PCP), dioxins, and heavy metals present in soil and shellfish, as well as creosote present in sediments. As a result of the variety of contaminants and contaminated media, there are multiple risks at the site, including risks to (1) recreational users of the river who could come into direct contact with sediments contaminated with creosote, (2) consumers of large quantities of shellfish exposed to unacceptably high levels of contaminants, and (3) workers at the Atlantic Wood Industries concrete manufacturing business who are exposed to surface soils at the site. Residents of contaminated areas are the population most commonly subject to unacceptable exposures at the 75 sites, with over half of the sites posing a contaminant risk to residents on or near the site, according to the data collected through our survey. In addition, contaminated waterways, including rivers, lakes, and harbors, pose unacceptable risks to those who consume contaminated fish caught from these areas. Risks to workers and other commercial tenants and to those who recreate at contaminated sites are present at fewer sites. The exposed populations face different health risks depending on the contaminants present at the site. For example, consuming PCBs in fish may cause liver disease, problems with the immune and endocrine systems, developmental problems, and cancer, while human health threats from arsenic include irritation of the stomach and intestines, blood vessel damage, reduced nerve function, and increased mortality rates in young adults. According to an EPA headquarters official responsible for overseeing the human exposure indicator, the indicator demonstrates a high potential for human exposure, but it does not always indicate that documented human exposure is occurring at a site.
The official explained that it can be difficult to obtain evidence of actual human exposure; however, EPA has been able to document exposure at some sites. For example, the Big River Mine Tailings site contains lead-contaminated soils on residential properties, and blood tests have shown elevated lead levels in children. For some risks, however, such as consumption of contaminated fish, EPA may not have evidence of actual ingestion but does have information suggesting that people are fishing in the area of concern. Table 4 provides a description of the human exposure risks at the 75 sites, as well as the fiscal year in which EPA estimates human exposure will be under control at those sites. EPA identified 51 sites to receive Recovery Act funding. Table 5 provides the amount of Recovery Act funds EPA allocated to each site and the planned use of these funds. In addition to the individual named above, Vincent Price, Assistant Director; Deanna Laufer; Barbara Patterson; Kyerion Printup; and Beth Reed Fritts made key contributions to this report. Elizabeth Beardsley, Nancy Crothers, Pamela Davidson, Michele Fejfar, Carol Henn, and Mehrzad Nadji also made important contributions.
At the end of fiscal year 2009, the Environmental Protection Agency's (EPA) National Priorities List (NPL) included 1,111 of the most seriously contaminated nonfederal hazardous waste sites. Of these sites, EPA had identified 75 with unacceptable human exposure, 164 with unknown exposure, and 872 with controlled exposure that may need additional cleanup work. EPA may fund remedial actions--long-term cleanup--from its trust fund, and compel responsible parties to perform or reimburse costs of the cleanup. GAO was asked to determine (1) the cleanup and funding status at currently listed nonfederal NPL sites with unacceptable or unknown human exposure; (2) what is known about EPA's future cleanup costs at nonfederal NPL sites; (3) EPA's process for allocating remedial program funding; and (4) how many NPL sites some state and EPA officials expect to be added in the next 5 years, and their expected cleanup costs. GAO analyzed Superfund program data, surveyed and interviewed EPA officials, and interviewed state officials. At over 60 percent of the 239 nonfederal NPL sites with unacceptable or unknown human exposure, all or more than half of the work remains to complete the remedial construction phase of cleanup, according to EPA regional officials. By the end of fiscal year 2009, EPA had expended $3 billion on the 75 sites with unacceptable human exposure and $1.2 billion on the 164 sites with unknown exposure. Despite the relatively high level of expenditures at sites with unacceptable exposure, EPA officials told GAO that, in managing limited resources, some sites have not received sufficient funding for construction to be conducted in the most time and cost efficient manner. EPA's future costs to conduct remedial construction at nonfederal NPL sites will likely exceed recent funding levels. 
EPA officials estimate that EPA's costs will be from $335 to $681 million each year for fiscal years 2010 to 2014, which exceed the $220 to $267 million EPA allocated annually for remedial actions from fiscal years 2000 to 2009. In addition, these cost estimates are likely understated, since they do not include costs for sites that are early in the cleanup process or for sites where a responsible party is currently funding remedial construction but may be unable to do so in the future. Also, according to EPA officials, EPA's actual costs are often higher than its estimates because contamination is often greater than expected. EPA allocates funds separately for preconstruction activities--such as remedial investigation and remedial design--and remedial actions. EPA headquarters allocates funds for preconstruction activities to the regions for them to distribute among sites. For remedial actions, headquarters works in consultation with the regions to allocate funds to sites. EPA officials told GAO that EPA prioritized sites to receive the $582 million in American Recovery and Reinvestment Act funds in a manner similar to the way EPA prioritizes sites for remedial actions. Limited funding has delayed preconstruction activities and remedial actions at some sites, according to EPA officials. EPA regional officials estimated that from 101 to 125 sites--about 20 to 25 sites per year--will be added to the NPL over the next 5 years, which is higher than the average of about 16 sites per year listed for fiscal years 2005 to 2009. Most of the 10 states' officials GAO interviewed also expect an increase in the number of sites listed from their states. However, neither EPA regional officials nor state officials were able to provide cost estimates for cleaning up many of the sites. 
In addition, the number of sites eligible for listing could increase if EPA decides to assess the relative risk of vapor intrusion--contaminated air that seeps into buildings from underground sources--a pathway of concern among EPA regional officials and state officials interviewed. Although sites with vapor intrusion can pose considerable human health risks, EPA's Hazard Ranking System--the mechanism used to identify sites that qualify for NPL listing--does not recognize these risks; therefore, unless a site with vapor intrusion is listed on some other basis, EPA cannot clean up the site through its remedial program.
Title I of HIPAA contains standards for health insurance access, portability, and renewability, which apply to group (both self-funded and fully insured) and individual insurance market coverage. While some of the standards, such as guaranteed renewal of insurance coverage, apply equally to coverage offered in all markets, other standards do not. For example, HIPAA requires all products carriers offer in the small group market to be sold to any small employer that applies, but it does not extend the same requirement to the large group or individual markets. Similarly, HIPAA requires that certain individuals leaving group coverage be guaranteed access to coverage in the individual market—“group-to-individual guaranteed access.” However, no similar guarantees of access exist for people in the individual market who have coverage today but might lose it in the future. (App. I contains a summary of HIPAA access, portability, and renewability standards by market segment.) Three federal agencies—Labor, HHS, and the Treasury—are required to jointly develop and issue implementing regulations for HIPAA. Each agency has somewhat different responsibilities for ensuring compliance. Labor is responsible for ensuring that group health plans comply with HIPAA standards. This is an extension of its current regulatory role under the Employee Retirement Income Security Act of 1974 (ERISA). Treasury also enforces HIPAA requirements on group health plans, but does so by imposing an excise tax under the Internal Revenue Code. HHS is responsible for enforcing HIPAA provisions with respect to insurance carriers in the group and individual markets in states that do not already have similar protections in place and do not pass appropriate laws and substantially enforce them. This represents an essentially new role for that agency. The implementation of HIPAA is ongoing, in part, because the implementing regulations were issued on an interim final basis.
Therefore, further guidance needed to finalize the regulations has not yet been issued. In addition, specific HIPAA provisions have varying effective dates. Although most of the provisions became effective on July 1, 1997, group-to-individual guaranteed access standards in 36 states and the District of Columbia were allowed to take effect as late as January 1, 1998. Finally, although all provisions are now in effect, individual group plans do not become subject to the law until the start of their plan year beginning on or after July 1, 1997. For some collectively bargained plans, this may not be until 1999 or later. During the first year of implementation, federal agencies, the states, and issuers have taken various actions in response to HIPAA. The federal agencies issued interim final regulations by the April 1, 1997, statutory deadline. Many considered this task to be a significant undertaking, and states and the insurance industry were generally pleased with the open and inclusive nature of the process. More regulations and guidance are expected to be issued in 1998. The agencies also conducted various educational outreach activities. For example, Labor sponsored a series of informational seminars for employers held in several large cities, created informational literature, and provided guidance on its Web page. HHS consulted with state insurance regulators at quarterly meetings of NAIC, held informational meetings for insurance industry representatives in at least two states where it will play an enforcement role, and also maintains a Web page containing information on HIPAA. Also during the first year, state legislatures have enacted laws to enforce HIPAA provisions locally, and state insurance regulators have written regulations and prepared to enforce HIPAA provisions. Issuers of health coverage have modified products and practices to comply with HIPAA. 
To ensure that individuals losing group coverage have guaranteed access—regardless of health status—to individual market coverage, HIPAA provides states with two different approaches. The first, which HIPAA specifies and which has become known as the “federal fallback” approach, requires all issuers who operate in the individual market to offer eligible individuals at least two health plans. (This approach became effective on July 1, 1997.) The second approach, the so-called “alternative mechanism,” grants states considerable latitude to use high-risk pools and other means to ensure guaranteed access. (HIPAA requires states that adopt this approach to have it implemented no later than Jan. 1, 1998.) Among the 13 states that are using the federal fallback approach, carrier marketing activities and high premium prices may limit consumers’ ability to take advantage of this guarantee. Some carriers initially attempted to discourage consumers from applying for products with guaranteed access rights, and some are charging premiums of 140 to 600 percent of the standard rate. In addition, widespread consumer misunderstanding of HIPAA guarantees of individual market coverage and the restrictions placed on those guarantees has also contributed to access problems. Under HIPAA, guaranteed access to coverage is restricted to eligible individuals who, among other criteria, had at least 18 months of coverage without a break of more than 63 days and with the most recent coverage obtained under a group health plan. Recognizing the controversial nature of this requirement and that many states had already passed reforms that could be modified to meet or exceed these requirements, HIPAA gave states the flexibility to implement this provision by using either the federal fallback or the alternative mechanism approach. Under the federal fallback approach, carriers have three options for offering eligible individuals guaranteed access to coverage.
A carrier may offer (1) all of its individual market plans, (2) only its two most popular plans, or (3) two representative plans—a lower-level and a higher-level coverage option—which are explicitly subject to some mechanism for risk spreading or financial subsidization. Thirteen states use the federal fallback approach. In the 36 states and the District of Columbia that use an alternative mechanism, which was to become effective no later than January 1, 1998, the law allows a wide range of approaches as long as certain minimum requirements are met. For example, an eligible individual must have a choice between at least two different coverage options. Twenty-two of these states chose a state high-risk insurance pool to provide group-to-individual guaranteed access rights. Appendix II summarizes the different options states have chosen to provide group-to-individual guaranteed access rights. Some initial carrier marketing practices may have discouraged HIPAA eligibles from enrolling in products with guaranteed access rights. After the federal fallback provisions took effect on July 1, 1997, many consumers complained to state insurance regulators that carriers did not disclose that a product with HIPAA guaranteed access rights existed or, when consumers specifically requested such a product, told them that none was available. One state regulator we visited said that some carriers told consumers HIPAA products were not available because the state had not yet approved them. However, the regulator had notified all carriers that such products were to be issued starting July 1, 1997, regardless of whether the state had yet approved them. Soon after July 1, some carriers had also refused to pay commissions to insurance agents who referred HIPAA eligibles.
In two of the three federal fallback states we visited, insurance regulators told us that some carriers were advising agents against referring HIPAA-eligible applicants, or paying reduced or no commissions. Because consumers often use insurance agents to access the individual insurance market, an economic incentive to steer individuals away from guaranteed access products could significantly reduce consumer access to them. Several states have challenged this practice under state fair marketing practice laws. HHS officials looked into reports of such practices and learned of about 10 carriers that had reduced or eliminated agent commissions for HIPAA eligibles. Responding to pressure from state insurance regulators, two of these carriers have resumed paying commissions, and the other eight, according to the officials, appear to be wavering. Since finding these initial 10, HHS officials have not heard of other carriers refusing to pay agent commissions. Premiums for products with guaranteed access rights may be substantially higher than standard rates. In several of the 13 federal fallback states, anecdotal reports from insurance regulators and agents suggest that rates range from 140 to 600 percent of the standard rate. Rates charged by several individual market carriers in the three federal fallback states we visited ranged from 140 to 400 percent of the standard rate, as indicated in table 1. Carriers charge higher rates, in part, because they believe HIPAA-eligible individuals will, on average, be in poorer health and hence would likely have higher medical costs. In addition, carriers that do not charge higher premiums to HIPAA eligibles could be subject to adverse selection. That is, once a carrier’s low rate for eligible individuals became known, agents would likely refer unhealthy HIPAA eligibles to that carrier. We also found that these carriers typically evaluate the health status of applicants and offer healthy individuals access to their standard products. 
Although these products may include a preexisting condition exclusion period, they may cost considerably less than the HIPAA product and therefore are likely to draw healthy individuals away from HIPAA products. Unhealthy HIPAA-eligible individuals may have access to only the guaranteed access product, and some of them may be charged an even higher premium on the basis of their health status. Carriers permit or even encourage healthy HIPAA-eligible individuals to enroll in standard plans. According to one carrier official, denying these individuals the opportunity to enroll in a less expensive product for which they are eligible would be contrary to the consumers’ best interests. Moreover, the practice of encouraging healthy HIPAA-eligible individuals to enroll in standard products may lead to further rate increases for HIPAA guaranteed access products in the future. According to an official from one large insurance carrier, a spiral might ensue as higher premiums induce the better health risks to disenroll from HIPAA products, leaving a pool of poorer risks and spurring insurers to further raise premiums. Finally, HIPAA regulations explicitly impose a risk-spreading requirement under only one of the three options carriers have to provide coverage to HIPAA-eligible individuals. If carriers choose to develop two new products to be offered to eligible individuals, they must include some method of risk spreading or a financial subsidization mechanism. Under the other two options, the regulations are silent about rates. In fact, the preamble to the regulations expressly acknowledges that HIPAA does not place limits on the premiums insurers may charge. This, some state regulators contend, permits issuers to charge substantially higher rates for products with guaranteed access in the federal fallback states. HIPAA’s group-to-individual guaranteed access rights are limited to eligible individuals and are subject to several other restrictions. 
Consumers who do not understand these rights may be disappointed or even be at risk of losing their group-to-individual portability rights. Some consumers believe HIPAA provides broader access and protections than it actually does. After HIPAA was enacted, insurance regulators in several states received numerous calls from individuals, including the uninsured, who misunderstood their rights and expected to have guaranteed access to insurance coverage. One state reported receiving consumer calls at the rate of 120 to 150 a month beginning shortly before most HIPAA provisions became effective on July 1, 1997. About 90 percent of these calls related to the group-to-individual guaranteed access provision, about half of which were complaints about the lack of access to coverage in the individual market. Similarly, an official from one large national insurer told us that many consumers believe the law covers them when it actually does not. One insurance agent suggested that perhaps only 10 percent or fewer of all individuals actually know that HIPAA exists, much less fully understand the protections it offers. Some regulators and others contend that the press has poorly served the public by not accurately portraying the consumer protections provided under HIPAA. They believe that the media reporting of the rhetoric surrounding the passage of HIPAA may have contributed to misunderstanding among consumers. HIPAA imposes several restrictions on former group enrollees’ guarantee of access to individual market coverage. Among other restrictions, eligible individuals must have had at least 18 months of creditable coverage (the most recent of which must have been group coverage) with no break of more than 63 consecutive days; have exhausted any COBRA or other continuation coverage available; not be eligible for any other group coverage, or Medicare or Medicaid; and not have lost group coverage because of nonpayment of premiums or fraud. 
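As an illustration only, the eligibility restrictions above can be expressed as a simple rule check. The sketch below is not drawn from HIPAA's statutory text or any agency system; the type names, the day-based gap test, and the use of 548 days to approximate 18 months of creditable coverage are assumptions made for the example.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CoverageSpell:
    start: date     # first day of coverage
    end: date       # last day of coverage
    is_group: bool  # True if coverage was under a group health plan

def is_hipaa_eligible(spells, exhausted_continuation,
                      other_coverage_available,
                      lost_for_nonpayment_or_fraud):
    """Apply the group-to-individual guaranteed access tests described above."""
    if not spells or lost_for_nonpayment_or_fraud or other_coverage_available:
        return False
    if not exhausted_continuation:  # COBRA or other continuation not exhausted
        return False
    spells = sorted(spells, key=lambda s: s.start)
    if not spells[-1].is_group:     # most recent coverage must be group coverage
        return False
    # Accumulate creditable days; a break of more than 63 consecutive days
    # means the earlier coverage no longer counts toward the total.
    creditable_days = (spells[0].end - spells[0].start).days + 1
    for prev, cur in zip(spells, spells[1:]):
        gap_days = (cur.start - prev.end).days - 1
        if gap_days > 63:
            creditable_days = 0
        creditable_days += (cur.end - cur.start).days + 1
    return creditable_days >= 548   # ~18 months, approximated in days

# Twenty months of continuous group coverage, continuation exhausted:
spells = [CoverageSpell(date(2016, 1, 1), date(2017, 8, 31), is_group=True)]
print(is_hipaa_eligible(spells, exhausted_continuation=True,
                        other_coverage_available=False,
                        lost_for_nonpayment_or_fraud=False))  # True
```

In this sketch, a break of more than 63 days discards earlier coverage from the creditable total, which mirrors the report's point that individuals who wait too long to apply can lose their portability rights.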
In addition to these restrictions, consumers need to be aware of other factors in order to exercise their rights. For example, in states that used the federal fallback approach, eligible individuals needed to be aware that the provision became effective on July 1, 1997, and that coverage must be offered by all carriers in the state that operate in the individual insurance market. In states that chose an alternative mechanism, eligible individuals needed to know that the provision had until January 1, 1998, to take effect and also needed to be aware of which method the state chose to provide guaranteed access to coverage in order to exercise their group-to-individual guaranteed access right. Consumer misunderstanding of these restrictions can hamper or limit access to products for eligible individuals. For example, individuals who are unaware of the 63-day limit on coverage interruptions may wait until medical care is necessary before applying for coverage, only to find that coverage is unavailable, according to one regulator. A regulator told us that individuals coming from group coverage have waited beyond 63 days to apply for individual coverage and thus have lost their portability rights. Another insurance regulator said that some consumers lost their guarantee to individual coverage because they left group coverage before January 1, 1998, believing HIPAA guaranteed access rights to be in place. However, because the state chose an alternative mechanism, protections did not exist until January 1, and insurance department officials in the state were in the unfortunate position of telling consumers that they had no guaranteed access rights. Some state regulators and consumer advocates support the need for more consumer education. HHS also recognizes that the lack of consumer education is a significant problem. A well-informed consumer is better able to take advantage of the protections HIPAA offers, according to HHS officials.
The agency is more convinced than ever that education outreach and assistance are the keys to improving group-to-individual portability under HIPAA. However, because of resource constraints, the agency is unable to put much effort into consumer education. HHS officials told us the agency is attempting to expand the information available on a toll-free telephone number to include HIPAA particulars, is expanding its Web site to include more HIPAA information, and is in the very early stages of developing an education pilot program in two regions. Issuers of health coverage have several concerns about the unintended consequences of certain HIPAA requirements. An ongoing concern has been the administrative burden and cost associated with the requirement to issue certificates of creditable coverage to all enrollees who terminate coverage. While issuers generally have complied with this requirement, some suggest that a more limited requirement, such as issuing the certificates only to consumers who request them, would serve the same purpose for less cost. Issuers are also concerned that HIPAA’s guaranteed renewal requirement may have negative consequences for certain populations, including individuals eligible for Medicare. Finally, issuers are concerned that certain HIPAA provisions create opportunities for individuals to abuse protections afforded to group coverage enrollees. HIPAA requires issuers of health coverage to provide certificates of creditable coverage to enrollees whose coverage terminates. The certificates are intended to document an individual’s period of coverage so that a subsequent health issuer can credit this time against the preexisting condition exclusion period of the new coverage. Early indications suggest that issuers generally appear to be complying with this requirement. Moreover, none of the health carrier officials with whom we met reported being unable to issue the certificates once systems were put into place to generate them.
Likewise, state insurance regulators we visited had received few complaints from consumers who were unable to obtain a certificate of coverage, and they therefore do not consider issuer compliance with the certification requirement a significant concern. Nevertheless, as we reported in our September 2, 1997, correspondence, concerns about HIPAA’s certification requirement remain: Some issuers suggest that information needed for certificates can be difficult to obtain. For example, certificates must include information on each dependent covered under the policy, such as the date they were first covered and how long the coverage was in effect. Since changes in the number or status of dependents in a family—as a result of events such as births, deaths, and marriages—are fairly common in a large group plan, issuers may have a difficult time keeping abreast of all these changes. They believe that maintaining and updating records could be time-consuming and expensive. To address such concerns, federal agencies provided issuers a transition period ending June 30, 1998, during which certain dependent information need not be included in certificates. Issuers are also provided additional time to issue a certificate when a dependent’s cessation of coverage is not known to the issuer. Some regulators have also raised concerns that the certification requirement will create an added administrative burden for state Medicaid agencies. Medicaid recipients tend to enroll and disenroll in the Medicaid program frequently as their income and employment status change. This volatility in enrollment will increase the volume of certificates issued by the Medicaid program. In addition, Medicaid agencies have had a difficult time maintaining accurate addresses for recipients and expect a large volume of certificates to be undeliverable, according to NAIC. 
In the preamble to the interim final regulations, federal agencies requested comments on how the certification process might be adapted to the special circumstances of Medicaid agencies and other entities. Finally, issuers contend that certificates may not be necessary to prove creditable coverage in all cases and that issuance on demand would serve the same purpose at a lower cost. In fact, the Blue Cross Blue Shield Association estimates that as many as 90 percent of all certificates issued will ultimately not be used by consumers to prove creditable coverage. For example, several issuers, as well as a state regulator, pointed out that portability reforms passed by most states have worked well without a similar certification requirement. Where proof of prior coverage was needed, issuers asked for documentation of prior coverage from the applicant and, if unavailable, simply called the prior issuer to confirm that coverage. Also, many group health policies do not contain clauses with preexisting condition exclusions and therefore do not need certificates from incoming enrollees. HIPAA regulations explicitly state the circumstances under which an individual’s health coverage may not be renewed or may be canceled, such as for nonpayment of premiums or fraud. Issuers are concerned that the omission of other circumstances, such as the attainment of Medicare eligibility age and ceasing to meet eligibility criteria for targeted population insurance programs, may affect both issuers and consumers adversely. Commonly cited as problematic is the renewal of comprehensive coverage for individual market enrollees who become eligible for Medicare. When individuals reach the age of Medicare eligibility, issuers have typically terminated individuals’ comprehensive coverage and offered Medicare supplemental coverage instead. HIPAA’s requirement to automatically renew this comprehensive coverage may have a number of drawbacks. 
First, individuals risk losing their 6-month open enrollment window for Medicare supplemental coverage. If individuals choose to retain comprehensive coverage rather than obtain Medicare supplemental coverage, they may permanently lose their right to enroll in a supplemental policy without preexisting condition exclusions in the future. This could have a significant impact on some consumers, since individual market coverage is often more expensive than Medicare supplemental coverage. In addition, many states do not permit issuers to coordinate their coverage with that provided by Medicare. Thus, some consumers may pay for duplicate coverage. Finally, NAIC is concerned that renewing coverage for Medicare eligibles could have a deleterious effect on the individual insurance market. Premiums for all individuals could increase if large numbers of older and less healthy individuals remain in that market. Because of these consequences, several state insurance regulators require issuers to notify enrollees of the implications of renewing their coverage once they become eligible for Medicare. HIPAA’s guaranteed renewal requirements may also preclude issuers from canceling the coverage of individuals enrolled in insurance programs targeted for low-income populations once these individuals exceed eligibility criteria. Since carriers might be prohibited from canceling coverage once an enrollee’s income exceeds the eligibility threshold, a program’s limited slots could be filled by otherwise ineligible individuals. Similarly, under children-only insurance products, issuers could be required to renew coverage for those who have reached adulthood. Several issuers and their representative organizations have expressed concern about such implications of the guaranteed renewal requirement and have asked the federal agencies to revise regulations to provide appropriate exceptions. Issuers cite two provisions in HIPAA that consumers could potentially abuse. 
First, HIPAA requires group health plans to give new enrollees or enrollees switching plans during an open enrollment period full credit for a broad range of prior health coverage, regardless of the deductible level of that coverage. Since the law does not recognize differences in the deductible levels, issuers and regulators are concerned that where given a choice of health coverage options, individuals may enroll in inexpensive, high-deductible plans that may have limited benefits while healthy and then switch to plans with comprehensive, first-dollar coverage when they become ill. Likewise, a small employer could move all its employees from a high- to a low-deductible plan once a single employee becomes ill. Second, issuers are concerned that certain enrollment rights under HIPAA create the opportunity for abuse. Under certain circumstances, HIPAA permits an individual who initially declines coverage under the employer’s group plan to later obtain coverage under the plan without waiting for the specified open enrollment period or being penalized as a late enrollee. The circumstances under which this special enrollment period is allowed include the loss of other health coverage as well as family changes that affect the status of dependents, such as marriage, birth, and adoption. Issuers suggest that since individuals essentially control some of the circumstances that create these special enrollment periods, some may forgo coverage until medical care is needed and then create the circumstances that trigger an open enrollment period. For example, an unmarried couple could avoid the expense of health coverage, knowing they could obtain access to their employers’ group coverage if necessary later by marrying. Citing a related example, a Health Insurance Association of America official noted that individuals could also misuse HIPAA’s prohibition against including pregnancy as a preexisting condition. 
For example, nothing would prevent an employee from avoiding the expense of health coverage until medical care for pregnancy became necessary. The employee need merely enroll as a late enrollee to immediately obtain full coverage for maternity benefits. State regulators have encountered difficulties implementing HIPAA provisions in instances where federal regulations lacked sufficient clarity or detail. Where federal regulations have been viewed as unclear, the resulting confusion has affected state regulators and issuers in carrying out their roles under HIPAA. Federal agency officials suggest that statutory deadlines, competing demands, and their desire to provide states the flexibility to implement the regulations in a manner best suited to each state may have contributed to the perceived lack of clarity. The unclear or ambiguous nature of some of HIPAA’s implementing regulations has presented several challenges to state regulators. Specifically, some regulators are concerned that the lack of clarity may result in varying interpretations and confusion among the multiple entities involved in implementation. For example, Colorado insurance regulators surveyed carriers in that state to determine how they interpreted regulations pertaining to group-to-individual guaranteed access. The survey results indicated that issuers had a difficult time interpreting the regulations and were applying the regulations differently. Such regulatory ambiguities can have critical consequences for consumers and have created some situations in which the intent of the statute may have been thwarted, according to NAIC. For example, as discussed earlier, partly because of the inconsistency in the risk-spreading requirement for products available to HIPAA-eligible individuals in the individual markets of federal fallback states, rates for these products in some states range from 140 to 600 percent of standard rates. 
As a result, many regulators believe this outcome raises a question about whether those leaving group coverage are provided with meaningful access under HIPAA to coverage in the individual insurance market. The following are examples of other regulatory provisions for which state insurance regulators have sought further federal guidance or clarification. Plan design as preexisting condition exclusion period. One of HIPAA’s key goals is to provide portability of coverage to those who change jobs or lose group coverage. To achieve this objective, the regulations limit the extent to which issuers can exclude preexisting conditions from coverage. However, the regulations do not contain guidance about whether an issuer may structure the benefits of a plan in a way that effectively excludes certain preexisting conditions. For example, according to NAIC, some health plans have established waiting periods of up to a year during which certain conditions or procedures, such as organ transplants, are excluded from all enrollees’ coverage. Requiring such waiting periods effectively excludes such preexisting conditions from coverage and, according to regulators, is contrary to the statutory intent to provide portability of coverage. Treatment of late enrollees. State regulators believe HIPAA is unclear about whether late enrollees are eligible for coverage. Although the regulations explicitly define “late enrollees” as individuals who enroll for group coverage any time after the date on which they were initially eligible (or subsequently eligible under a special enrollment period), the preamble to the regulations indicates that issuers are not required to accept late enrollees. Regulators believe that certain distinctions, such as an 18-month preexisting condition exclusion period for late enrollees versus 12 months for on-time enrollees, would not have been made if late enrollees were not intended to be covered. 
Accordingly, NAIC has asked that HHS interpret the statute to explicitly require the acceptance of late enrollees. Market withdrawal as exception to guaranteed renewability. Regulators believe that the HIPAA provision that allows issuers who cease offering coverage throughout the individual and group markets to not renew the coverage of an individual or a group creates uncertainties that may affect their ability to regulate insurance. Regulators believe the interim regulations leave three key questions unanswered. First, must an issuer who withdraws from the market also not renew existing coverage, or does it have the discretion to maintain existing coverage but not write new coverage? Second, must the issuer also cease to issue all other types of health policies, such as limited-benefit or specified-disease policies? And finally, must the issuer terminate all coverage at once, or can it terminate each policy on its respective anniversary date? Nondiscrimination provisions in group plans. HIPAA regulations prohibit group plan issuers from excluding an individual of the group from coverage or charging a higher premium because of an individual’s health status or medical history. In the preamble to the nondiscrimination regulations, federal agencies sought input on this requirement from regulators and issuers and indicated that further guidance would be forthcoming. Until further guidance is issued, regulators have several questions concerning how this requirement is applied, such as to what extent the statute permits an issuer to limit benefits on the basis of the source of a person’s injury and whether issuers may vary benefits for different groups of employees. Federal agency officials point to several factors that contributed to the perceived lack of clarity or sufficient detail in some HIPAA regulations. First, the agencies were required to issue a number of complex regulations within a relatively short period of time. 
The statute, signed into law on August 21, 1996, required that implementing regulations be issued within fewer than 8 months, by April 1, 1997. Implicitly recognizing this challenge, the Congress provided for the issuance of regulations on an interim final basis. This time-saving measure helped the agencies to issue a large volume of complex regulations within the statutory deadline, while also providing the opportunity to add more details or further clarify the regulations based on comments later received from industry and states. Therefore, some regulatory details necessarily had to be deferred until a later date. Furthermore, agency officials point out that in developing the regulations, they sought to balance states’ need for clear and explicit regulations with the flexibility to meet HIPAA goals in a manner best suited to each state. For example, under group-to-individual guaranteed access requirements, states were given several options for achieving compliance. While the multiple options may have contributed to confusion in some instances, the controversial nature of the requirement suggested to agency officials that a flexible approach was in the best interests of states. Officials said that many state officials requested that minimal detail be included in the federal regulations. In particular, with respect to risk spreading for guaranteed access products in the individual market, HHS officials said they attempted to meet with federal fallback states to discuss appropriate regulations. However, the states were hesitant to participate in such meetings until after the July 1, 1997, effective date passed and they were confronted with greater than expected operational problems. Officials further noted that HIPAA does not preclude states from adopting their own risk-spreading requirements. Finally, some of the regulatory ambiguities derive from ambiguities existing in the statute itself. 
For example, regulations concerning late enrollees closely track the language from the statute. To ease the burden on state regulators and issuers, HIPAA regulations provided an overall good faith compliance period, which ended on January 1, 1998. Until that time, federal officials agreed to take no compliance action against any issuer who attempted to comply with HIPAA. In addition, a good faith compliance period continues to apply to the nondiscrimination provisions until further guidance is issued, and additional leeway is given in the form of phase-ins for certain other provisions. States have the option of enforcing HIPAA’s access, portability, and renewability standards as they apply to fully insured group and individual health coverage. In states that do not pass laws to substantially enforce these federal standards, HHS must perform the enforcement function. According to HHS officials, the agency as well as the Congress and others assumed HHS would generally not have to perform this role, believing instead that states would not relinquish regulatory authority to the federal government. However, several states reported that they did not pass legislation implementing key provisions of HIPAA, thus requiring HHS to actively regulate insurance plans in these states. Preliminary information suggests that a number of additional states may not enact one or more HIPAA provisions, potentially requiring HHS to also play a limited regulatory role in these states. HHS resources are currently strained by its new regulatory role in the five states where enforcement is under way, according to officials, and concern exists about the implications of the possible expansion of this role to additional states. Unlike Labor and the Treasury, HHS was given a new regulatory role under HIPAA. The agency must enforce HIPAA provisions for fully insured group and individual market plans in states that do not enact the standards in state laws and substantially enforce them. 
In these states, HHS must take on functions typically reserved for state insurance regulators. The agency must provide guidance to help issuers in modifying their products and practices to comply with HIPAA requirements, obtain and review issuers’ product literature and policy forms, monitor issuer marketing practices, respond to consumer complaints and encourage issuers to take corrective actions where noncompliance is determined, and impose civil monetary penalties on issuers who fail to initiate corrective actions. Although the role of an insurance regulator represents a significant new responsibility for HHS, neither the Congress nor HHS anticipated the agency would actually be required to perform this role to any great extent. Many federal authorities assumed that the vast majority of states would choose to pass laws to enforce HIPAA provisions rather than relinquish regulatory authority to the federal government. As of December 1997, HHS was preparing to enforce HIPAA standards in five states that reported federal enforcement would be necessary. These five states—California, Massachusetts, Michigan, Missouri, and Rhode Island—did not pass laws to implement the group-to-individual guaranteed access provision, among others, according to an NAIC survey and HHS officials. HHS has also been working with insurance regulators from U.S. territories to determine whether federal enforcement is necessary there. HHS will next turn its attention to the remaining states. According to agency officials, because states were not required to report their plans for enforcing most HIPAA standards, HHS has had to rely on information provided voluntarily by states, surveys performed by others, and anecdotal reports to determine the status of state legislative activity. Resources permitting, HHS may survey each state during 1998 and make a comprehensive determination of the status of HIPAA legislation and enforcement. 
Nevertheless, preliminary data from an October 1997 NAIC survey indicate that while most states have made progress in enacting statutes implementing key HIPAA provisions, many gaps remain. For example, as indicated in table 2, in the individual market, eight states had not passed laws to implement guaranteed renewal. In the group markets, two states had not passed laws to implement small-group guaranteed access, and four states had not passed laws to implement guaranteed renewal and limits on preexisting condition exclusion periods in the large-group markets. In addition, these preliminary data do not include HIPAA’s certificate issuance requirement, and anecdotal evidence suggests that many states have not incorporated this requirement into state statutes. While states continue to pass legislation to close some of these gaps, the possibility remains that not all provisions in all market segments will be addressed, necessitating an expansion of HHS’ enforcement role. The new enforcement role HHS is required to perform in California, Massachusetts, Michigan, Missouri, and Rhode Island may strain the resources of its regional offices serving those states, according to HHS officials. For example, HHS staff in the Kansas City regional office (covering Missouri) are challenged to regulate the insurance products offered by up to 500 insurers in Missouri. To carry out this function, the office asked for 11 new full-time positions but, as of December 1997, was authorized to hire only 4. Three of the four positions have been filled through outside hires, and one was filled through an internal promotion. Two additional staff were rotated from other units to assist in HIPAA-related activities. Even fewer resources are devoted to HIPAA enforcement in the two other regions, Boston and San Francisco. 
Also as of December 1997, Boston had only one full-time and two part-time staff members devoted to enforcing the HIPAA compliance of hundreds of Massachusetts and Rhode Island insurers. Although the office had received authorization for two additional staff, none had yet been hired. A health insurance specialist in that office said that with such limited staffing, the office will be hard-pressed to fulfill its upcoming policy form review tasks and handle the expected surge in consumer queries in early 1998. In San Francisco, no additional staff had yet been authorized, and only one person was working full time on HIPAA issues as of December 1997. HHS was surprised by California’s failure to pass group-to-individual guaranteed access, a fact that did not become known until September 1997. According to an HHS deputy director, regulation in California will be especially challenging because of the state’s large size and the fragmented, complicated structure of its health insurance markets. HHS’ resources will be further strained if the enforcement role it is serving in these five states becomes permanent or expands to other states. If HHS determines that other states have not passed one or more HIPAA provisions, as preliminary data suggest, HHS will have to play a regulatory role in these additional states. Staff throughout the agency noted that HHS’ current resources are insufficient to handle such a task. Officials outside HHS have also publicly expressed concern that its resources could become overtaxed. For example, in his September 1997 testimony before the House Ways and Means Committee’s Subcommittee on Health, the president of the Health Insurance Association of America testified that HHS faces “regulatory overload” because of the demands placed on the agency by HIPAA and other new responsibilities under the Balanced Budget Act of 1997. 
Also, in an October 1997 speech, the former administrator of HHS’ Health Care Financing Administration said that the agency is facing a serious problem if it does not receive additional resources to cope with its expanded responsibilities under HIPAA and other recent laws. Federal officials have begun to respond to some of the concerns raised during the first year of HIPAA implementation. HHS is continuing to monitor the need for more explicit risk-spreading requirements to mitigate the high cost of guaranteed access products in the individual market under the federal fallback approach. Though HHS does not at present support changes to the certificate issuance requirement, some of the other unintended consequences and concerns that issuers and states cite may be addressed by ongoing revisions to and clarifications of the regulations. Federal agencies issued further guidance at the end of 1997 and expect to continue issuing guidance in 1998. Finally, because of the increasing pressure on its resources, HHS has asked for additional funding as part of its fiscal year 1999 budget request. HHS has realized that many HIPAA-eligible individuals in states using the federal fallback approach to group-to-individual guaranteed access may be unable to obtain affordable coverage and may effectively be priced out of the market. According to officials, HHS legal staff are reevaluating whether HIPAA provides the agency authority to issue regulations with more explicit risk-spreading requirements and the agency is continuing to monitor the situation. HHS officials believe it is premature to revise the certificate issuance requirement in response to issuer concerns that issuing certificates creates an administrative burden and is unnecessary to prove creditable coverage. 
The officials indicated that certificates do serve another important purpose in that they notify consumers of their portability rights, regardless of whether the consumers ultimately need to use the certificate to exercise those rights. In addition, HHS officials have heard anecdotal evidence that suggests even with the certificate some consumers are having difficulty exercising their portability rights. With respect to state Medicaid agencies, officials acknowledged that they may face an increased administrative burden, but HHS and other federal agency officials were concerned that offering an exception to Medicaid agencies might encourage other groups to also seek an exception. Federal agencies interpret HIPAA’s guaranteed renewal provision to mean that individuals, upon becoming eligible for Medicare, must be given the option of maintaining their individual market coverage. HHS officials point out that some retirees with special needs, such as those dependent on expensive prescription drugs, may benefit from retaining their individual market coverage rather than buying a Medicare supplemental policy. Moreover, they disagree with the insurance industry and state regulators’ contention that sufficient numbers of individuals in poor health will remain in the individual market to affect premium prices there. Finally, even if HHS supported a change to this requirement, agency legal staff are uncertain whether HHS could simply change the regulations or whether a technical amendment to the statute would be needed. With respect to insurance products offered to targeted populations, such as children or low-income families, HHS has no immediate plans to revise HIPAA requirements. However, officials say they are considering industry comments on this issue and would not rule out the possibility in the future. 
Federal officials have also acknowledged concern that certain other HIPAA provisions, such as those that give group enrollees who switch health plans full credit for a broad range of prior coverage, may create an incentive for consumers to abuse the provision. Furthermore, they acknowledged that such abuse may lead to adverse selection. In response, the federal agencies have asked for comments from issuers and regulators about how differences between high- and low-deductible plans should be treated under HIPAA. The agencies have received many comments on the issue and are continuing to examine potential changes. The agencies also issued supplemental guidance for provisions concerning nondiscrimination and late enrollment on December 29, 1997. This guidance clarifies how group health plans must treat individuals who, prior to HIPAA, had been excluded from coverage because of a health status-related factor. Further guidance and clarification in these and other areas will follow. To address its resource constraints, HHS has shifted resources to HIPAA tasks from other activities. In its fiscal year 1999 budget request, HHS has also requested an additional $15.5 million to fund 65 new full-time-equivalent staff and outside contractor support for HIPAA-related enforcement activities. Its most critical unmet need, according to agency officials, relates to the direct federal enforcement of HIPAA insurance standards in the states. Officials further noted that, even if the requested funding becomes available, it may not be adequate if direct HHS enforcement becomes necessary in additional states. HIPAA provides, for the first time, nationwide minimum standards for health coverage access, portability, and renewability in all private insurance markets. Importantly, these new standards apply to both fully insured and self-funded coverage. However, implementation of the standards is complicated. 
It requires three federal agencies, state legislatures and insurance regulators, and issuers of health coverage to coordinate their efforts. Further complicating implementation, the issuance of federal regulations has been on an interim final basis. Moreover, different HIPAA provisions have become effective and group plans have become subject to the law on different dates. Nevertheless, implementation has moved forward. For example, federal agencies issued interim final regulations within the deadline set by HIPAA, using a process widely commended for being open and inclusive. As might be expected, however, the process has raised certain concerns and posed challenges to those charged with implementing this new law. Some challenges are likely to recede or be addressed in the near term. What could be called “early implementation hurdles,” especially those related to the clarity of federal regulations, may be resolved during 1998. Federal agencies issued supplemental guidance on December 29, 1997, and expect to provide further regulatory guidance during 1998 to states and issuers, who consider certain regulations—relating to nondiscrimination, late enrollment, and special enrollment periods—to be ambiguous. Moreover, as states and issuers gain experience in implementing HIPAA standards, the intensity of their dissatisfaction may diminish. For example, while still criticizing the cost and administrative burden of issuing certificates of creditable coverage, issuers seem able to comply. (Now that the start-up burden of putting procedures in place is largely behind them, issuers we visited seemed to find the day-to-day process of issuing these certificates to be manageable.) Various participants involved in implementing HIPAA have pointed to several potential unintended consequences, but whether these possibilities will be realized is difficult to predict. 
These concerns are necessarily speculative in nature because HIPAA’s insurance standards have not been in effect long enough for evidence on these potential problems to accumulate. First, for example, evidence is not yet available to determine whether large numbers of Medicare eligibles will remain in the individual market for health insurance (and consequently push up premiums there). The same is true for whether good health risks will select high-deductible plans, leaving the sicker individuals in low-deductible plans, or whether consumers will abuse special enrollment periods to obtain coverage. Second, possible changes in the regulations or the HIPAA statute may further affect whether a concern becomes a reality. However, uncertainty over whether the changes will be made or will rectify the potential unintended consequences makes more difficult any assessment of these possibilities. Finally, two implementation difficulties are substantive and likely to persist unless measures are taken to address them. First, among the 13 federal fallback states, high premiums are making it difficult for some consumers to obtain the group-to-individual guaranteed access coverage that HIPAA requires. This situation is likely to continue unless HHS interprets HIPAA to provide for more explicit risk-spreading requirements or states adopt explicit risk-spreading requirements for guaranteed access coverage for HIPAA eligibles. In addition, if consumer education about HIPAA coverage guarantees in the individual market continues to be spotty or absent, consumers will likely continue to be discouraged by the limited nature of HIPAA protections. Similarly, some will probably continue to be at risk of losing those protections. Second, HHS’ regulatory role could expand as the status of state efforts to adopt and implement HIPAA provisions becomes clearer in 1998. 
HHS’ current enforcement capabilities could be inadequate to handle the additional burden unless further resources become available. As additional health plans become subject to the law, and as the remaining regulations and guidance are issued, new problems of implementation may emerge. Corrective actions will necessarily be ongoing. A comprehensive determination of HIPAA’s impact remains years off. The Departments of Health and Human Services, Labor, and the Treasury commented on a draft of this report. In general, the agencies believed that our report did not adequately describe the obstacles they faced in issuing interim final HIPAA regulations within the statutory deadline. Labor added that our draft did not adequately discuss consumers’ views, distinguish the individual market from the group market regarding implementation challenges, identify all of Labor’s outreach efforts, or convey the extent to which its expanded regulatory role under HIPAA will place new demands on agency resources. Treasury generally concurred with the HHS and Labor comments. In light of these comments, we have refined our presentation in several places as appropriate. Appendixes III, IV, and V contain the agencies’ letters and, for HHS and Labor, our responses. We also furnished a draft of this report for review to the American Association of Health Plans, Blue Cross Blue Shield Association, Consumers Union, ERISA Industry Committee, Health Insurance Association of America, and NAIC. We received comments from all but the ERISA Industry Committee. In response, we clarified certain distinctions and made technical changes as appropriate. As agreed with your office, unless you publicly release its contents earlier, we will make no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Secretaries of Health and Human Services, Labor, and the Treasury and will make copies available to others on request. 
Please contact me at (202) 512-7114 or Jonathan Ratner, Senior Health Economist, at (202) 512-7107 if you or your staff have any further questions. Other GAO contacts and staff acknowledgments for this report are listed in appendix VI. To achieve its goals of improving the access, portability, and renewability of private health insurance, the Health Insurance Portability and Accountability Act of 1996 (HIPAA) sets forth standards that variously apply to the individual, small-group, and large-group markets of all states. Most HIPAA standards became effective on July 1, 1997. However, the certificate-issuance standard became effective on June 1, 1997, and issuers had to provide certificates automatically to all disenrollees from that point forward as well as upon request to all disenrollees retroactive to July 1, 1996. In states that chose an alternative mechanism approach, the guaranteed access standard in the individual market (often called “group-to-individual portability”) was to become effective no later than January 1, 1998. Finally, group plans do not become subject to the applicable standards until their first plan year beginning on or after July 1, 1997. Each of HIPAA’s health coverage access, portability, and renewability standards is summarized in table I.1 by applicable market segment. The subsequent text describes each standard. [Table I.1 not reproduced here; recoverable fragments include the small group (2-50 employees) market segment, limitations on preexisting condition exclusion periods, credit for prior coverage (portability), and the note N/A = not applicable.] HIPAA requires issuers of health coverage to provide certificates of creditable coverage to enrollees whose coverage terminates. The certificates must document the period during which the enrollee was covered so that a subsequent health issuer can credit this time against its preexisting condition exclusion period. 
The certificates must also document any period during which the enrollee applied for coverage but was waiting for coverage to take effect—the waiting period—and must include information on an enrollee’s dependents covered under the plan. In the small group market, carriers must make all plans available and issue coverage to any small employer that applies, regardless of the group’s claims history or health status. Under individual market guaranteed access—often referred to as group-to-individual portability—eligible individuals must have guaranteed access to at least two different coverage options. Generally, eligible individuals are defined as those with at least 18 months of prior group coverage who meet several additional requirements. Depending on the option states choose to implement this requirement, coverage may be provided by carriers or under state high-risk insurance pool programs, among others. HIPAA requires that all health plan policies be renewed regardless of health status or claims experience of plan participants, with limited exceptions. Exceptions include cases of fraud, failure to pay premiums, enrollee movement out of a plan service area, the cessation of membership in an association’s health plan, and the withdrawal of an issuer from the market. Group plan issuers may deny, exclude, or limit an enrollee’s benefits arising from a preexisting condition for no more than 12 months following the effective date of coverage. A preexisting condition is defined as a condition for which medical advice, diagnosis, care, or treatment was received or recommended during the 6 months preceding the date of coverage or the first day of the waiting period for coverage. Pregnancy may not be considered a preexisting condition, nor can preexisting condition exclusions be imposed on newborn or adopted children, in most cases. Group plan issuers may not exclude a member within the group from coverage on the basis of the individual’s health status or medical history. 
Similarly, the benefits provided, premiums charged, and employer contributions made to the plan may not vary within similarly situated groups of employees on the basis of health status or medical history. Issuers of group coverage must credit an enrollee’s period of prior coverage against its preexisting condition exclusion period. To be creditable, prior coverage must have been continuous, with no break of more than 63 days. For example, an individual who was covered for 6 months and who changes employers may be eligible to have the subsequent employer plan’s 12-month waiting period for preexisting conditions reduced by 6 months. Time spent in a prior health plan’s waiting period cannot count as part of a break in coverage. Individuals who do not enroll in a group plan during their initial enrollment opportunity may be eligible for a special enrollment period later if they originally declined to enroll because they had other coverage, such as coverage under COBRA, or were covered as a dependent under a spouse’s coverage and later lost that coverage. In addition, if an enrollee has a new dependent as a result of a birth or adoption or through marriage, the enrollee and dependents may become eligible for coverage during a special enrollment period. HIPAA also includes certain other standards that relate to private health coverage, including limited expansion of COBRA coverage rights, new disclosure requirements for Employee Retirement Income Security Act of 1974 (ERISA) plans, and, to be phased in through 1999, new uniform claims and enrollee data reporting requirements. Changes to certain tax laws authorize federally tax-advantaged medical savings accounts for small employer and self-employed plans. Finally, new standards for mental health and maternity coverage, which became effective on January 1, 1998, are not part of HIPAA but are closely related. 
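The crediting arithmetic described above can be sketched in a short illustrative calculation. This is our simplification, not an official determination: the function name and the month-for-month credit are assumptions for illustration, and actual crediting involves additional rules (such as waiting periods and certificate documentation) not modeled here.

```python
# Illustrative sketch of HIPAA portability crediting (our simplification):
# a group plan's preexisting condition exclusion may run no more than
# 12 months, prior creditable coverage reduces it month for month, and
# a break in coverage of more than 63 days forfeits the credit.

MAX_EXCLUSION_MONTHS = 12
MAX_BREAK_DAYS = 63

def remaining_exclusion_months(prior_coverage_months, break_in_days):
    """Months of preexisting condition exclusion a new group plan
    may still impose, under the simplified rules above."""
    # A break longer than 63 days means prior coverage is not
    # creditable, so the full exclusion period applies.
    if break_in_days > MAX_BREAK_DAYS:
        return MAX_EXCLUSION_MONTHS
    # Otherwise, prior coverage reduces the exclusion month for
    # month, but never below zero.
    return max(0, MAX_EXCLUSION_MONTHS - prior_coverage_months)

# The report's example: 6 months of prior coverage reduces a
# 12-month exclusion period to 6 months.
print(remaining_exclusion_months(prior_coverage_months=6, break_in_days=30))   # 6
print(remaining_exclusion_months(prior_coverage_months=18, break_in_days=30))  # 0
print(remaining_exclusion_months(prior_coverage_months=6, break_in_days=90))   # 12
```

As the third call shows, an individual with ample prior coverage who lets more than 63 days elapse before new coverage begins loses the credit entirely, which is why the 63-day window figures prominently in the consumer-education concerns discussed elsewhere in this report.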
Under HIPAA, states may choose to guarantee access to individual market coverage for eligible individuals using either the “federal fallback” or state “alternative mechanism” approach. Federal fallback approach: Carriers must offer eligible individuals guaranteed access to coverage in one of three ways. Under this approach, HIPAA specifies that a carrier must offer either (1) all of its individual market plans, (2) only its two most popular plans, or (3) two representative plans—a lower-level and a higher-level coverage option—that are subject to some risk spreading or financial subsidization mechanism. Thirteen states are using the federal approach. State alternative mechanism: States may design their own approach to guarantee coverage to eligible individuals as long as certain minimum requirements are met. Essentially, the approach chosen must ensure that eligible individuals have guaranteed access to coverage with a choice of at least two different coverage options. Twenty-two of the 36 states and the District of Columbia that chose an alternative mechanism are using a high-risk insurance pool to provide group-to-individual guaranteed access rights. Table II.1 shows which states chose which approach. (Table note: Because the state legislature was not in session during 1997, HIPAA allows Kentucky until July 1, 1998, to comply.) The following are GAO’s comments on the Department of Health and Human Services’ letter dated February 9, 1998. 1. HHS commented that we did not adequately convey the many challenges it faced in issuing interim final regulations by the April 1, 1997, statutory deadline, and did not give sufficient credit to its accomplishment in doing so. 
Our original draft noted the federal agencies’ achievements (issuing interim final regulations by the statutory deadline and being widely commended for their open and inclusive process) as well as the obstacles the agencies faced (the complexity of the law, the difficulty of balancing the need for detail in the regulations with states’ desire for latitude in implementing them, and tight statutory deadlines). Nonetheless, we have refined our presentation, especially regarding these obstacles. The report elaborates on the nature of interim final rules and notes that HIPAA authorized their use. The report also now emphasizes that clarity and detail in the regulations are the more fundamental issues. For example, nondiscrimination rules were issued on time, but many of the necessary details states need to implement the rules have not yet been issued. We recognize the agencies’ achievement in issuing the majority of the interim final regulations by the statutory deadline, but also underscore the work that remains to be done. 2. HHS noted that supplemental HIPAA guidance was issued on December 29, 1997. This development is now incorporated in our report. The following are GAO’s comments on the Department of Labor’s letter dated February 3, 1998. 1. Labor believed we should have included in our report the perspective of consumer groups and individual citizens to provide a better balance of the benefits and limitations of HIPAA. We disagree with this point for two reasons. First, our report does reflect consumer perspectives. In our fieldwork, we interviewed officials from certain national and local consumer organizations, such as Consumers Union and the Missouri Consumer Health Care Watch Coalition. Their members’ very limited awareness of and experience with this new law tended to corroborate our findings concerning challenges in the individual market. Second, a comprehensive assessment of HIPAA’s benefits and limitations lies outside our scope. 
Our study aimed at monitoring the actual process of implementing HIPAA, not at systematically evaluating its effects or assessing its merits from a consumer’s perspective. Consequently, we focused on the activities of those implementing the law—state and federal regulators and issuers—and emphasized areas where preliminary evidence signaled emerging challenges. 2. Labor stated that our report does not describe adequately its industry and consumer outreach efforts. On the contrary, we believe the examples of Labor outreach efforts that we cite do recognize these efforts adequately. We did not provide a fuller list of Labor’s efforts because our conclusion concerning the lack of consumer education bears only on the individual insurance market, where Labor has no jurisdiction. However, we have clarified that the consumer education conclusion applies to the individual—not group—insurance market. 3. Labor commented that we did not adequately convey the many challenges it faced in issuing interim final regulations by the April 1, 1997, statutory deadline, and did not give sufficient credit to its accomplishment in doing so. Our original draft noted the federal agencies’ achievements (issuing interim final regulations by the statutory deadline and being widely commended for their open and inclusive process) as well as the obstacles the agencies faced (the complexity of the law, the difficulty of balancing the need for detail in the regulations with states’ desire for latitude in implementing them, and tight statutory deadlines). Nonetheless, we have refined our presentation, especially regarding these obstacles. The report elaborates on the nature of interim final rules and notes that HIPAA authorized their use. The report also now emphasizes that clarity and detail in the regulations are the more fundamental issues. For example, nondiscrimination rules were issued on time, but many of the necessary details states need to implement the rules have not yet been issued. 
We recognize the agencies’ achievement in issuing the majority of the interim final regulations by the statutory deadline, but also underscore the work that remains to be done. 4. Labor commented that the draft report inappropriately commingles our analyses of group and individual HIPAA standards and does not recognize the relatively favorable responses it has received regarding the group market reforms. We clarified the distinction in our report between the challenges arising in the individual markets of some states and those in the employer-sponsored group markets. We devoted our resources to gathering information where preliminary evidence pointed to emerging challenges rather than where they were less apparent, resulting in a less extensive review of HIPAA implementation in the group market. 5. Labor stated that the draft report failed to mention the issuance of supplemental HIPAA guidance (concerning late enrollees and nondiscrimination provisions) on December 29, 1997. We have incorporated the new information the agencies have provided in their comments. (In early December 1997, HHS officials had estimated that it would not be issued before “early 1998.”) However, since the new guidance does not address the particular aspects of the late enrollment and nondiscrimination requirements that we cite as lacking clarity, the examples remain. 6. Labor commented that the draft report suggested its enforcement responsibilities are limited to self-funded group plans and did not note that the agency, like HHS, also faces expanded enforcement responsibilities. However, as we pointed out in the report, under HIPAA only HHS faces an entirely new enforcement role—one that has become larger than anticipated. We also observed that, because of HIPAA, Labor faces an extension of its existing enforcement role under ERISA. 
Nonetheless, while this creates extra demands on Labor’s resources, in the near term, the demands facing HHS in its new enforcement role appear to be more urgent. Regarding enforcement responsibilities, the report now refers to all, not just self-funded, group plans. The study team consisted of Randy DiRosa, who managed the project, and Betty Kirksey, Evaluator. Susan Thillman advised on report presentation, Craig Winslow provided legal review, and Elizabeth T. Morrison provided editorial review. This report was prepared initially under the direction of the late Michael Gutowski; his role was later assumed by Jonathan Ratner. Medical Savings Accounts: Findings From Insurer Survey (GAO/HEHS-98-57, Dec. 19, 1997). The Health Insurance Portability and Accountability Act of 1996: Early Implementation Concerns (GAO/HEHS-97-200R, Sept. 2, 1997). Private Health Insurance: Continued Erosion of Coverage Linked to Cost Pressures (GAO/HEHS-97-122, July 24, 1997). Employment-Based Health Insurance: Costs Increase and Family Coverage Decreases (GAO/HEHS-97-35, Feb. 24, 1997). Private Health Insurance: Millions Relying on Individual Market Face Cost and Coverage Tradeoffs (GAO/HEHS-97-8, Nov. 25, 1996). Health Insurance Regulation: Varying State Requirements Affect Cost of Insurance (GAO/HEHS-96-161, Aug. 19, 1996). Health Insurance for Children: Private Insurance Coverage Continues to Deteriorate (GAO/HEHS-96-129, June 17, 1996). Health Insurance Portability: Reform Could Ensure Continued Coverage for Up to 25 Million Americans (GAO/HEHS-95-257, Sept. 19, 1995). Health Insurance Regulation: National Portability Standards Would Facilitate Changing Health Plans (GAO/HEHS-95-205, July 18, 1995). The Employee Retirement Income Security Act of 1974: Issues, Trends, and Challenges for Employer-Sponsored Health Plans (GAO/HEHS-95-167, June 21, 1995). Health Insurance Regulation: Variation in Recent State Small Employer Health Insurance Reforms (GAO/HEHS-95-161FS, June 12, 1995). 
Pursuant to a congressional request, GAO reviewed the implementation of the Health Insurance Portability and Accountability Act (HIPAA), focusing on issues affecting: (1) consumers; (2) issuers of health coverage, including employers and insurance carriers; (3) state insurance regulators; and (4) federal regulators. GAO also reviewed efforts undertaken by federal agencies to address some of the concerns and challenges that have arisen. GAO noted that: (1) although HIPAA provides people losing group coverage the right to guaranteed access to coverage in the individual market regardless of health status, consumers attempting to exercise their right have been hindered by carrier practices and pricing and by their own misunderstanding of this complex law; (2) among the 13 states where this provision first took effect, many consumers who had lost group coverage experienced difficulty obtaining individual market coverage with guaranteed access rights, or they paid significantly higher rates for coverage; (3) some carriers have discouraged individuals from applying for the coverage or charged them rates 140 to 600 percent of the standard premium; (4) carriers charge higher rates because they believe individuals who attempt to exercise HIPAA's individual market access guarantee will, on average, be in poorer health than others in the individual market; (5) many consumers do not realize that the access guarantee applies only to those leaving group coverage who meet other eligibility criteria; (6) individuals must have previously had at least 18 months of coverage, exhausted any residual employer coverage available, and applied for individual coverage within 63 days of group coverage termination; (7) consumers who misunderstand these restrictions are at risk of losing their right to coverage; (8) issuers of health coverage believe certain HIPAA regulatory provisions result in: (a) an excessive administrative burden; (b) unanticipated consequences; and (c) the potential for 
consumer abuse; (9) although issuers appear to be generally complying with the requirement to provide a certificate of coverage to all individuals terminating coverage, some issuers continue to suggest that the process is burdensome and costly and that many of these certificates may not be needed; (10) these issuers, as well as many state regulators, believe that issuing the certificates only to consumers who request them would serve the purpose of the law for less cost; (11) state insurance regulators have encountered difficulties in their attempts to implement and enforce HIPAA provisions where they found federal guidance to lack sufficient clarity or detail; (12) federal regulators face an unexpectedly large regulatory role under HIPAA that could strain the Department of Health and Human Services' resources and impair its oversight and effectiveness; and (13) partly in response to health insurance issuers' and state regulators' concerns, federal agencies issued further regulatory guidance intended to clarify current HIPAA regulations.
Within CBP, the SBI Program Executive Office, referred to in this report as the SBI program office, has overall responsibility for overseeing all SBI activities for acquisition and implementation, including establishing and meeting program goals, objectives, and schedules; for overseeing contractor performance; and for coordinating among DHS agencies. However, the tactical infrastructure portion of the program is managed on a day-to-day basis by CBP’s Office of Finance, Facilities Management and Engineering division. Among other things, CBP’s Border Patrol has responsibility for detecting and preventing the illegal entry of aliens into the United States between designated ports of entry. DHS began funding the SBI program in fiscal year 2005 at a level of $38 million, which it increased to $325 million in fiscal year 2006. Starting in fiscal year 2007, DHS’s annual appropriations acts have included specific SBI appropriations. Since fiscal year 2005, SBI’s funding has amounted to over $3.7 billion (see table 1). DHS has requested $779 million in SBI funding for fiscal year 2010. The primary focus of the SBI program is on the southwest border areas (see fig. 1) between the ports of entry that CBP has designated as having the greatest need for enhanced border security because of serious vulnerabilities. Although some tactical infrastructure exists in all the southwest border sectors, most of what has been built through the SBI program is located in the San Diego, Yuma, Tucson, El Paso, and Rio Grande Valley sectors. SBInet technology is to be initially deployed in the Tucson sector. SBInet is the program for acquiring, developing, integrating, and deploying an appropriate mix of surveillance technologies and command, control, communications, and intelligence (C3I) technologies. SBInet surveillance technologies are to include sensors, cameras, and radars. 
Additional technologies, such as aerial assets (e.g., helicopters and unmanned aerial surveillance aircraft) and Mobile Surveillance Systems (MSS), may be added in the future, but as of August 2009, whether and to what extent the additional technologies would be included in the configuration of the long-term SBInet systems solution had not been determined, according to SBI officials. The C3I technologies are to include software and hardware to produce a Common Operating Picture (COP)—a uniform presentation of activities within specific areas along the border. The sensors, radars, and cameras are to gather information along the border, and the system is to transmit this information to the COP terminals located in command centers to provide CBP agents with border situational awareness. The COP technology is to allow agents to (1) view data from radars and sensors that detect and track movement in the border areas, (2) control cameras to help identify and classify illegal entries, (3) correlate entries with the positions of nearby agents, and (4) enhance tactical decision making regarding the appropriate response to apprehend an entry, if necessary. In September 2006, CBP awarded a prime contract for SBInet development to the Boeing Company for 3 years, with three additional 1-year options. As of July 2009, CBP was in the process of completing action to extend its contract with Boeing for the first option year. As the prime contractor, Boeing is responsible for acquiring, deploying, and sustaining selected SBI technology, deploying selected tactical infrastructure projects, and providing supply chain management for some tactical infrastructure projects. In this way, Boeing has extensive involvement in the SBI specifications development, design, production, integration, testing, and maintenance and support of SBI projects. 
Moreover, Boeing is responsible for selecting and managing a team of subcontractors that provide individual components for Boeing to integrate into the SBInet system. The SBInet contract is largely performance-based—that is, CBP has set requirements for the project and Boeing and CBP coordinate and collaborate to develop solutions to meet these requirements—and is designed to maximize the use of commercial off-the-shelf technology. CBP’s SBI program office oversees and manages the Boeing-led SBI contractor team. CBP is carrying out its SBI activities through a series of task orders to Boeing for individual projects. As of July 8, 2009, CBP had awarded 13 task orders to Boeing for a total amount of approximately $1.1 billion. See appendix II for a summary of the task orders awarded to Boeing for SBI projects. The first SBInet deployment task order was for an effort known as Project 28. The scope of Project 28, as described in an October 2006 task order to Boeing, was to provide a system with the capabilities required to control 28 miles of border in Arizona. Project 28 was accepted by the government for deployment in February 2008—8 months behind schedule. This delay occurred because the contractor-delivered system did not perform as intended. For example, Boeing was unable to integrate components, such as towers, cameras, and radars, with the COP software. Project 28 is currently operating along 28 miles of the southwest border in the Tucson sector of Arizona (see fig. 2). Future SBInet capabilities are to be deployed in “blocks.” For example, Block 1 is described as the first phase of an effort to design, develop, integrate, test, and deploy a technology system of hardware, software, and communications. Each block is to include a release or version of the COP. 
According to the Fiscal Year 2009 SBI Expenditure Plan, Block 1 is to include the Tucson and Yuma sectors and Block 2 is to include the sectors of El Paso, Rio Grande Valley, Laredo, Del Rio, San Diego, and El Centro. While the SBI program office is responsible for deploying SBInet technology, the tactical infrastructure program office, which was realigned to the Office of Finance, Facilities Management and Engineering in March 2009, is responsible for deploying tactical infrastructure—pedestrian and vehicular fencing, roads, and lighting—along the southwest border to deter smugglers and aliens attempting illegal entry. The Secure Fence Act of 2006, as amended, required DHS to construct not less than 700 miles of reinforced fencing along the southwest border where fencing would be most practical and effective, and to provide for the installation of additional physical barriers, roads, lighting, cameras, and sensors to gain operational control of the southwest border. Although the act did not impose any statutory deadlines with respect to the deployment of SBInet technology, it did require DHS to complete a portion of the required 700 miles of reinforced fencing by December 31, 2008. This interim construction deadline applied to 370 of the required 700 miles of reinforced fencing, to be located wherever the Secretary determined it would be most practical and effective in deterring smugglers and aliens attempting illegal entry. The Secure Fence Act of 2006, as amended, provided the Secretary of Homeland Security with some discretion regarding its mileage requirements. 
Notwithstanding the total mileage requirement of 700 miles, the act stated that the Secretary was not required to install fencing, physical barriers, roads, lighting, cameras, and sensors in a particular location “if the Secretary determines that the use or placement of such resources is not the most appropriate means to achieve and maintain operational control over the international border at such location.” According to DHS, under this authority, the Secretary determined that fencing was the most appropriate means to achieve and maintain operational control over 670 miles, rather than 700 miles, of the border. Furthermore, the act also gave the Secretary discretion, through December 31, 2008, to set an alternative mileage goal for the interim construction deadline of 370 miles. Pursuant to this authority, the Secretary committed to complete all 670 miles of fencing by December 31, 2008. Of these miles, DHS planned about 370 miles of pedestrian fencing—fencing that prevents people on foot from crossing the border, and about 300 miles of vehicle fencing—barriers used primarily in remote areas to prohibit vehicles engaged in drug trafficking and alien smuggling operations from crossing the border. In September 2008, DHS revised its goal of completing the full 670 miles of fencing by December 31, 2008. As an interim step, DHS committed to have 661 miles either built, under construction, or under contract by December 31, 2008, but did not set a goal for the number of miles it planned to complete by December 31, 2008. As of December 31, 2008, DHS had completed 578 miles of fencing, meeting the interim statutory goal to complete 370 miles of fencing by that time. (See fig. 3 for examples of fencing.) SBInet technology deployments continue to experience delays due to flaws found in testing and potential environmental impacts. 
User evaluations by Border Patrol agents found that the new technology needed improvements to correct inconsistent system performance. SBI officials believed that some issues raised about the technology during user evaluation were a result of the Border Patrol agents’ unfamiliarity with the equipment; however, Border Patrol officials said that they selected agents who were familiar with existing technology and that some training was provided to these agents before testing took place. Until SBInet is deployed, Border Patrol agents continue to rely on existing technology that has limitations such as performance shortfalls and maintenance issues. CBP cannot determine what operational changes it will need to make as a result of the new technology, and Border Patrol will not be able to realize the potential of this technology until it is deployed. Our previous work has shown that CBP’s efforts to deploy SBInet technology across the southwest border have fallen behind its planned schedule. For example, according to the Boeing contract signed in September 2006, an initial set of operational capabilities was planned to be deployed along the entire southwest border in early fiscal year 2009, and a full set of operational capabilities along the southern and northern borders was planned by later in fiscal year 2009. As of December 2006, the SBInet Expenditure Plan reported that the schedule had changed such that all deployments in the Yuma and Tucson sectors were estimated to be complete by October and December 2008, respectively, and the entire southwest border by October 2011. The Expenditure Plan did not provide a time frame for deployment to the northern border. 
By October 2007, SBI program officials expected to complete all of the first planned deployment of southwest border technology projects in the Tucson, Yuma, and El Paso sectors by the end of calendar year 2008, and deployments in Rio Grande Valley, Laredo, and Del Rio by the end of calendar year 2009. In February 2008, the SBI program office again modified its deployment plans, and reported that the first deployment of technology projects within Block 1 was to take place in two geographic areas within the Tucson sector—designated as Tucson-1 and Ajo-1—by the end of calendar year 2008, with the remainder of deployments to the Tucson, Yuma, and El Paso sectors to be completed by the end of calendar year 2011. Other than the dates for the Tucson, Yuma, and El Paso sectors, no other deployment dates were established for the remainder of the southern or northern borders at that time. We reported in September 2008 on SBInet program uncertainties, including that the program remained ambiguous and in a continued state of flux, making it unclear what technology capabilities are to be delivered, when and where they are to be delivered, and how they will be delivered. We recommended, among other things, that the CBP Commissioner establish and baseline the specific program commitments, including the specific system functional and performance capabilities, which are to be deployed to the Tucson, Yuma, and El Paso sectors, and establish when these capabilities are to be deployed and are to be operational. Partially in response to our recommendations, in September 2008, the DHS Acquisition Review Board—a departmental executive board that reviews certain acquisitions—required a re-plan of the program. The re-plan was to include, among other things, a revised and detailed program schedule with key milestones. 
In addition, during the re-plan, a portion of SBInet technology funds was reallocated to fund cost increases associated with the higher-priority vehicle and pedestrian fencing. The Acquisition Review Board noted that this reallocation of funds and the desire to include additional field testing would result in a delay of the Tucson-1 and Ajo-1 deployments. SBI program office officials said that the reallocation of funds was made possible because the program was in the middle of the re-plan, which required additional field testing prior to the start of construction in Tucson-1. By December 2008, the SBI program office’s revised schedule showed final acceptance of Tucson-1 in September 2009 and final acceptance of Ajo-1 in December 2009. By February 2009, the schedule had slipped, and final acceptance of Tucson-1 was expected in November 2009 and Ajo-1 in March 2010. Further, our assimilation of available information from multiple program sources, including the Fiscal Year 2009 SBI Expenditure Plan, indicated that deployments throughout the rest of the Tucson and Yuma sectors were to be completed by 2011; deployments in the El Paso, Rio Grande Valley, Laredo, Del Rio, San Diego, and El Centro sectors between 2012 and 2015; and deployments in the Marfa sector by 2016. Nevertheless, the timing of planned SBInet deployments continued to slip. As of April 2009, Tucson-1 was scheduled for final acceptance by December 2009, and Ajo-1 had slipped to June 2010. Our previous work emphasizes that a key aspect of managing large programs like SBInet is having a schedule that defines the sequence and timing of key activities. In addition, our research has identified best practices associated with effective schedule estimating. 
We have an ongoing review to report separately on SBInet and whether DHS has established a comprehensive, accurate, and realistic schedule that reflects the scope, timing, and sequencing of the work needed to achieve commitments, and that provides key information to DHS and congressional decision makers. Figure 4 shows the changes in the planned deployment schedule over time. According to SBI program office officials, the results of testing activities are contributing to the recent delays of Tucson-1 and Ajo-1. For example, one of the changes that resulted from the re-plan was a requirement for additional testing of SBInet technology, which SBI addressed through additional testing performed at a test facility intended to emulate deployment conditions at project sites. SBI program office officials emphasized, and we agree, that testing is a necessary step of deployment and ensures that the technology capabilities perform as required. By February 2009, preliminary results of testing revealed problems that would limit the usefulness of the system for Border Patrol agents, including the instability of the camera under adverse weather conditions, mechanical problems with the radar at the tower, and issues with the sensitivity of the radar. In March 2009, CBP’s Acting Commissioner testified on the testing activity, among other things, stating that although the system did not meet all testing objectives during the December testing, CBP did not perceive “any show-stopper issues.” Based on the testing results, the DHS Acquisition Review Board deferred approval of Tucson-1 equipment installation and Ajo-1 site preparation and equipment installation until the successful resolution of testing objectives, which contributed to an Ajo-1 schedule delay of 30 days, from April to May 2009. 
The SBI program office oversaw Boeing’s efforts to re-work and re-test these issues, but as of May 2009, the SBI program office reported that it was still working to address some issues, such as difficulties aligning the radar. While operator involvement was limited for Project 28, SBI program office officials recognized the need to involve the intended operators—Border Patrol agents—in Block 1 development, including testing activities. For example, CBP reported using feedback and input from Border Patrol agents to complete detailed plans for tower locations and access roads to support SBInet deployment to the Tucson, Yuma, and El Paso sectors. In addition, from March 27 to April 4, 2009, Border Patrol agents had an opportunity to operate Block 1 technology in a test environment and participate in an early assessment of the suitability and effectiveness of the SBInet technology. The operators’ initial observations included insights comparing the performance capabilities of existing technology—Project 28 and MSS—and new technology—SBInet Block 1 (see fig. 5). For example, the operators indicated that on windy days the Block 1 radar had issues that resulted in an excessive number of false detections and that the capability was not adequate for optimal operational effectiveness. The operators also compared the Project 28, MSS, and Block 1 cameras and indicated that the features of the Block 1 camera were insufficient in comparison with the features of the Project 28 and MSS cameras. Overall, the feedback from operators indicated “the need for a number of relatively small, but critical enhancements” to the COP and overall concerns about inconsistent system performance. SBI program officials explained that this assessment was an initial user evaluation. 
The officials also said that in reviewing the results, they determined that some of the issues raised by the Border Patrol operators occurred because the operators were not familiar with and had not been trained to use the equipment; other issues, such as those with the radar, were likely due to incorrect settings across all radars in the test configuration. The Border Patrol said that it selected agents to participate who had experience with the MSSs and/or Project 28 and that the COP operators were given a 2-day course provided by agents familiar with the Block 1 COP prior to the assessment. However, the Border Patrol agreed that the lack of experience with the Block 1 system may have led to some of the issues found during the user evaluation. Nonetheless, because of the agents’ experience with the MSS and Project 28 systems, the Border Patrol said that the issues and concerns generated should be considered operationally relevant. SBI program officials said that operator training is to take place before all Block 1 capabilities are deployed and that additional emphasis is to be placed on ensuring the operators’ familiarity with the equipment. Once all Block 1 capabilities are deployed in Tucson-1, the Border Patrol is to perform and complete operational testing. This testing is to include insights from the operators’ initial evaluations of the system’s capabilities. Provided there are no additional schedule changes, this testing of Tucson-1 is scheduled to begin in January 2010. Until SBInet capabilities are deployed across the southwest border, Border Patrol agents are using existing capabilities, including Project 28 and legacy equipment supplemented by the more recently procured MSSs, but all have limitations. As stated previously, Project 28 encountered performance shortfalls and delays. 
During our site visit to the Tucson sector in March 2009, Border Patrol agents told us, as they had during our previous visits, that the system had improved their operational capabilities but that they must continue to work around ongoing problems, such as finding good signal strength for the wireless network, remotely controlling cameras, and modifying radar sensitivity. Furthermore, they said, and we observed, that few of the agents were currently using the mobile data terminals installed in 50 of the sector’s vehicles, instead relying on agents operating the COP to relay information about the whereabouts of suspected illegal migrants. One reason agents do not use the mobile data terminals is that it can take up to an hour to log into the system, depending on signal strength, and the signal, once gained, is sometimes lost multiple times during a shift. In all southwest border sectors, the Border Patrol relies on legacy equipment, such as cameras mounted on towers. In the Tucson and San Diego sectors, Border Patrol agents rely on cameras that have been in place since before calendar year 2000. Border Patrol officials told us that in the three sectors, the cameras have intermittent problems, including signal loss and problems with power and weather. In the Tucson sector, officials noted that the legacy cameras should be updated to gain compatibility with SBInet. To fill gaps or augment the legacy equipment, the SBI program office procured and delivered a total of 40 MSSs. These units were delivered to the Border Patrol’s Tucson sector (23 units), Yuma sector (7 units), and El Paso sector (8 units) in fiscal year 2008. In addition, a total of 4 units are planned for delivery to the San Diego sector (1 unit) and the northern border (3 units) in fiscal year 2009. During our visit to the Tucson sector in March 2009, we observed a Border Patrol agent using an MSS unit. 
The agent showed us the radar’s capabilities, including its maximum range and the ability to minimize the range and limit the speed of the radar and cameras, which have a 360-degree view. According to Border Patrol officials, the MSS represents increased operational capabilities for the Border Patrol. However, SBI program officials and the Border Patrol noted that at any given time, a unit may not be operational because of the need for repairs. As of April 2009, 15 of the 23 units at the Border Patrol’s Tucson sector were operational. At that time, in the Yuma sector, 4 of the 7 units were operational, although during our visit to the Yuma sector only 1 unit was operational. Border Patrol officials explained that in the Yuma sector these units have not worked well because of extreme heat. Despite these performance shortfalls and maintenance issues, agents continue to use existing technology while waiting for the SBInet deployment, which will supplement the existing technology. The initial deployment of SBInet technology in the Tucson-1 and Ajo-1 project sites is intended to provide CBP agents and officers a greatly enhanced ability to detect, identify, and classify illegal cross-border activity, as well as facilitate a coordinated response to the activity. These goals directly support the broader SBI goal and Border Patrol strategy to gain effective control of the nation’s borders. While Border Patrol agents have been stakeholders in the development and testing of SBInet technology, Border Patrol officials said that a full assessment of SBInet technology’s impact cannot be made until the technology is in use. Therefore, until the technology is in place, CBP is limited in its ability to fully identify and implement operational changes in methods, tactics and approaches, and resources needed to address objectives of the Border Patrol Strategy, and will not be able to realize the potential of this technology in its efforts to secure the border. 
The deployment of 661 miles of tactical infrastructure projects along the southwest border is nearing completion, but delays persist, due mainly to property acquisition issues. In addition, per-mile costs, which had climbed substantially, are now less likely to change because contracts for the 661 miles of fence have been awarded. CBP plans to complete 10 more miles of fencing using fiscal year 2009 funds, and fiscal year 2010 and 2011 funds are to be used primarily for supporting infrastructure. A life-cycle cost study has been completed which estimates that deployment, operations, and future maintenance for the tactical infrastructure will total $6.5 billion. Despite the investment in tactical infrastructure, its impact on securing the border has not been measured because DHS has not assessed the impact of the tactical infrastructure on gains or losses in the level of effective control. CBP is close to accomplishing its goal to build 661 miles of fencing along the southwest border. As of June 2009, 633 miles had been completed (see table 2). CBP was scheduled to complete the remaining 28 miles by November 2009. However, fence deployment continues to face delays due to challenges in constructing tactical infrastructure on difficult terrain and acquiring the necessary property rights from landowners. For example, in the San Diego sector, one 3.6-mile tactical infrastructure project, previously scheduled to be completed by December 2008 and now due to be completed by October 2009, involves construction on rugged mountainous terrain that is not easily accessible. According to tactical infrastructure officials, they realized before December 2008 that it would not be possible to complete this segment until October 2009 because of these factors. 
In addition, as of June 29, 2009, fence projects totaling about 20 miles in the Rio Grande Valley sector with originally planned completion dates of December 2008 are now scheduled for completion by October 2009, with the exception of one segment, because of litigation related to property acquisition that was not resolved in time to meet the original dates. The segment that will not be complete by October 2009 was delayed because of difficulties obtaining materials for the bridge construction associated with the segment; as a result, this segment is anticipated to be completed in November 2009. As of June 29, 2009, of an estimated 96 cases in which the government sued to acquire property through condemnation proceedings because the landowner would not voluntarily sell to the government, the property associated with 39 of those cases had yet to be acquired. Of the 39 cases, however, only 7 are required to be settled to complete fence construction. The remaining 32 properties are being sought in anticipation of future fencing needs and for other purposes, such as operations and maintenance of the fence. Moreover, U.S. Army Corps of Engineers (USACE) officials said that completion of fencing construction projects usually takes 90 to 120 days. Because the properties have yet to be acquired, the October 2009 projected completion date is likely to slip. While fencing costs increased over the course of construction, cost estimates are now less likely to change because all construction contracts have been awarded. Fencing miles completed as of October 31, 2008, cost an average of $3.9 million per mile for pedestrian fencing and $1.0 million per mile for vehicle fencing. However, once contracts were awarded, the average per-mile costs had increased to $6.5 million per mile for pedestrian fencing and $1.8 million per mile for vehicle fencing. 
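The per-mile averages reported above imply cost growth of roughly 67 percent for pedestrian fencing and 80 percent for vehicle fencing. This is our own back-of-the-envelope check of the report’s figures, not a CBP calculation:

```python
# Rough check of the reported per-mile cost growth (dollars in millions).
# The input figures come from the report text; the percentages are ours.
pedestrian_before, pedestrian_after = 3.9, 6.5
vehicle_before, vehicle_after = 1.0, 1.8

def pct_increase(before, after):
    """Percent increase from 'before' to 'after'."""
    return (after - before) / before * 100

print(round(pct_increase(pedestrian_before, pedestrian_after)))  # prints 67
print(round(pct_increase(vehicle_before, vehicle_after)))        # prints 80
```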
Tactical infrastructure program officials said the per-mile costs increased over time due to various factors, such as property acquisition costs that were incurred for these miles but not for many of the previous miles, and increased costs for labor and materials. Also, as we reported in September 2008, as tactical infrastructure officials were in the process of finalizing construction contracts, cost estimates for pedestrian fencing in Texas began to increase. Tactical infrastructure program office officials attributed the cost increases to a short supply of labor and materials, as well as the compressed timeline. For example, the officials said that as a result of a construction boom in Texas, labor was in short supply, and contractors reported that they needed to provide premium pay and overtime to attract workers. In terms of materials, USACE officials stated that the price of cement and steel had increased and that in some areas within Texas obtaining cement near the fence construction site was difficult. Tactical infrastructure program office officials said that they worked to mitigate the cost increases where possible. For example, they said that although their decision to purchase steel in bulk was made to ensure its availability, the purchase also resulted in savings. Tactical infrastructure program office officials said that based on data showing that the price of steel products almost doubled from January 2008 through August 2008, they estimate that they saved over $72 million with the bulk steel purchase. However, due to the construction delays, the tactical infrastructure program office has had to extend the contract for storage of the steel and is soon to begin negotiations for a long-term storage contract. The need to continue to store the leftover steel will result in increased costs. 
Despite these additional costs, tactical infrastructure program office officials said that, according to their estimates, they will still realize cost savings on their bulk steel purchase. In addition, the officials estimated that there will be approximately 25,000 tons of steel remaining after all fencing segments are built. They said the steel will be used if additional fencing is built and to maintain the fencing already deployed. Ten miles of additional fencing is scheduled to be built with fiscal year 2009 funds, and fiscal year 2010 and 2011 funds are planned to be used primarily for supporting infrastructure. For fiscal years 2009 and 2010, $110 million has been allocated to tactical infrastructure. With the fiscal year 2009 funding, the tactical infrastructure program office plans to construct approximately 3 miles of vehicle fence in the Tucson sector and about 7 miles of pedestrian fence in the Marfa, Rio Grande Valley, and El Paso sectors. The program office also plans to use the funding for enhancements to existing fencing, such as gates and canal crossovers, and for real estate planning and acquisition for fiscal year 2010 projects. Due to the long lead time associated with real estate acquisition, DHS also plans to use fiscal year 2009 funds to conduct real estate planning and acquisition activities for projects slated for completion in fiscal years 2010 and 2011. By conducting real estate activities 1 to 2 years in advance, CBP seeks to limit construction delays due to lack of real estate. Also, as of June 2009, the program office had obligated about $21 million of its fiscal year 2009 funds for additional costs caused by construction delays and changes on projects under way. 
With fiscal year 2010 funds, plans as of June 2009 include replacing surf fencing and constructing all-weather roads and lighting in the San Diego sector; constructing bridges, a third layer of fencing, and lighting in the El Centro sector; and clearing brush in the Yuma sector. For fiscal year 2011, plans as of June 2009 were to, among other things, construct all-weather roads in the El Paso and Del Rio sectors and construct roads, bridges, and low-water crossings and clear brush in the Laredo sector. The summary of a life-cycle cost study prepared by a contractor for CBP shows that total life-cycle costs for all tactical infrastructure constructed to date, including pre-SBI infrastructure as well as that planned for fiscal years 2009, 2010, and 2011, are estimated at about $6.5 billion. The life-cycle cost estimates include deployment and operations and future maintenance costs for all tactical infrastructure, including the fence, roads, and lighting, among other things. CBP has reported that the fence is expected to have a lifespan of approximately 20 years; the agency plans to obligate $75 million for operations and maintenance of the fence in fiscal year 2009 and has requested another $75 million for fiscal year 2010. A significant use of the operations and maintenance funding is to repair breaches in the fence. According to tactical infrastructure program office data, as of May 14, 2009, there had been 3,363 breaches in the fence, with each breach costing an average of $1,300 to repair. Because of its construction, the older pre-SBI fencing is easier to breach, and most breaches occurred in these types of fencing. Of the newer fencing, the fewest breaches occurred in the bollard-style fencing, while more occurred in the wire mesh fence. Examples of breaches are shown in figure 6. 
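At the reported averages, the breach repairs to date imply a total cost on the order of $4.4 million. This total is our own rough estimate from the report’s figures, not a figure reported by CBP:

```python
# Implied total repair cost: 3,363 breaches at an average of $1,300 each.
# Inputs are from the report text; the product is our own estimate.
breaches = 3363
avg_repair_cost = 1300  # dollars per breach

total = breaches * avg_repair_cost
print(f"${total:,}")  # prints $4,371,900
```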
CBP reported that tactical infrastructure, coupled with additional trained Border Patrol agents, had increased the miles of the southwest border under effective control, but despite a $2.4 billion investment, it cannot account separately for the impact of tactical infrastructure. DHS defines effective control of the U.S. borders as the ability to consistently (1) detect illegal entries into the United States between the ports of entry, (2) identify and classify these entries to determine the level of threat involved, (3) effectively respond to these entries, and (4) bring events to a satisfactory law enforcement resolution. Border Patrol personnel, technology, and tactical infrastructure are the contributing elements to effective control. CBP measures miles under effective control through the Border Patrol’s quarterly assessments, which use information on apprehensions; vehicle drive-through traffic; and intelligence, operational reports, and the experience and expertise of senior Border Patrol agents, among other things. CBP recognizes that its measure of effective control is limited in that its source relies partially on subjective information and it does not reflect all CBP efforts along the border. CBP officials report that they are working to create a CBP-wide border control measure to inform resource decision making, but are having difficulty determining appropriate data sources and the appropriate measure and, therefore, have not set a date for completion of this measure. According to CBP’s Fiscal Year 2008 Performance and Accountability Report, 757 of the 8,607 miles the Border Patrol is responsible for were under effective control, an increase of 158 miles over those controlled in fiscal year 2007. According to the Fiscal Year 2009 SBI Expenditure Plan, between fiscal years 2007 and 2008, an additional 36 miles in the Tucson sector came under effective control partially as a result of added tactical infrastructure. 
In the Yuma sector, where some of the early SBI fencing was constructed, apprehensions were down 78 percent in fiscal year 2008 compared with fiscal year 2007. CBP reported that apprehensions declined partially because of the fencing and also because of non-fencing reasons, such as the increase in Border Patrol agents during fiscal year 2008. In addition, CBP reported that as a direct result of increased tactical infrastructure, vehicle drive-through traffic declined from 213 incursions in fiscal year 2007 to 2 in fiscal year 2008. Overall, the Yuma sector’s vehicle drive-through traffic declined by 50 percent, and the number of miles under effective control for the sector climbed from 70 in fiscal year 2007 to 118 of the sector’s 125 miles in fiscal year 2008. In the San Diego sector, 3 miles of effective control were gained between fiscal years 2007 and 2008, and apprehensions were up 7 percent. Table 3 shows the changes in effective control for these three sectors from fiscal year 2007 to fiscal year 2008. However, Border Patrol data show that apprehensions for all southwest border sectors except San Diego also declined between fiscal years 2006 and 2007, before the majority of the tactical infrastructure was deployed. Therefore, the impact of tactical infrastructure on apprehensions is unclear, as other factors could contribute to the decline. For example, in its Fiscal Year 2008 4th Quarter Congressional Status Report on Border Security and Resources, CBP stated that the end of “catch and release,” increases in Border Patrol agents, more tactical infrastructure on the border, expanded use of expedited removal, and support from the National Guard during Operation Jump Start have had a significant deterrent effect, contributing to the marked decline in apprehensions. Other factors, such as a decrease in the number of migrants attempting to cross the border due to the economy, may also have affected apprehensions. 
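A few of the effective-control figures above can be cross-checked with simple arithmetic. The derived values below (the implied fiscal year 2007 baseline and the percentage shares) are our own calculations, not CBP’s:

```python
# Cross-checks of reported effective-control figures.
# All inputs come from the report text; derived values are ours.

# FY2008: 757 of 8,607 miles under effective control, a gain of 158 miles.
fy2008_miles, gain = 757, 158
fy2007_miles = fy2008_miles - gain       # implied FY2007 baseline
share_2008 = fy2008_miles / 8607 * 100   # share of all Border Patrol miles

# Yuma sector: 118 of 125 miles under effective control in FY2008.
yuma_share = 118 / 125 * 100

print(fy2007_miles)          # prints 599
print(round(share_2008, 1))  # prints 8.8
print(round(yuma_share, 1))  # prints 94.4
```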
CBP has not systematically evaluated the impact of tactical infrastructure on gains or losses in the level of effective border control, controlling for the influences of other potential factors on border control efforts. The current performance measure for tactical infrastructure is miles constructed. While this measure provides useful information, it does not demonstrate the program’s discrete contribution to effective control. In addition, CBP has, as part of its Fiscal Year 2009 SBI Expenditure Plan, completed an analysis of each tactical infrastructure segment to be built compared with alternative means of achieving effective control, such as investments in technology and enforcement personnel. This analysis was intended to show where physical fencing was most appropriate given cost, level of effective control, possible unintended effects on communities, and other critical factors. However, these analyses were largely subjective because they were based primarily on the experience and expertise of senior Border Patrol agents. Federal agencies are increasingly expected to focus on achieving results and to demonstrate, in annual performance reports and budget requests, how their activities help achieve agency or governmentwide goals. The Government Performance and Results Act of 1993 (GPRA) requires federal agencies to report annually on their achievement of performance goals, explain why any goals were not met, and summarize the findings of any program evaluations conducted during the year. For programs that have readily observable results or outcomes, performance measurement may provide sufficient information to demonstrate program results. In some programs, however, outcomes are not quickly achieved or readily observed, or their relationship to the program is uncertain. In such cases, program evaluations may be needed, in addition to performance measurement, to examine the extent to which a program is achieving its objectives. 
Our previous work identified program evaluations as a way for agencies to explore the benefits of a program as well as ways to improve program performance. An evaluation of the tactical infrastructure already deployed along the southwest border would help demonstrate its contribution to effective control of the border and help CBP determine whether more tactical infrastructure would be appropriate, given other alternatives and constraints. For instance, a statistical analysis could be conducted to show the effect of tactical infrastructure within each sector and throughout the southwest border, controlling for other potential factors. This analysis could include, among other data, apprehension data and data on illegal migrants’ and smugglers’ methods, routes, and modes of transportation before and after tactical infrastructure deployment. CBP could use the information collected during program evaluations to complement its performance measurement data and thereby more fully assess these often difficult-to-measure activities and inform its efforts to improve its performance measures. Our work has shown that analyses such as these further complement performance management initiatives and are useful in informing resource decision making and in helping to effectively implement performance measures. CBP officials said that they would like to conduct such a study but lack the resources. In our previous work, we found that through a number of strategies, agencies developed and maintained a capacity to produce and use evaluations. First, to leverage their evaluation resources and expertise, agencies engaged in collaborations or actively educated and solicited the support and involvement of their program partners and stakeholders. Second, agency managers sustained a commitment to accountability and to improving program performance. Third, they improved administrative systems or turned to special data collections to obtain better quality data. 
Finally, they sought out—through external sources or development of staff—whatever expertise was needed to ensure the credibility of analyses and conclusions. Furthermore, in our efforts to assist agencies’ program evaluation efforts, we identified agencies that initiated evaluation studies resulting in recommendations to address program performance and a strategy for the future. The evaluations conducted by these agencies helped them improve their measurement of program performance or understanding of performance and how it might be improved, or both. Accordingly, information gained through an evaluation may help CBP more effectively allocate its limited resources, inform its future decisions about investing in tactical infrastructure, and ensure that existing tools are adequately supported and maintained. Such an evaluation would also help CBP determine whether the tactical infrastructure it has deployed meets the mandate in the Secure Fence Act of 2006, as amended, to use physical infrastructure enhancements to help prevent unlawful U.S. entries; facilitate access by CBP personnel to enable a rapid and effective response to illegal activities; and help DHS and CBP achieve and maintain operational control of U.S. borders. Until CBP determines the contribution of tactical infrastructure to border security beyond a measure of miles covered by tactical infrastructure, it is not positioned to address the impact this costly resource has had in each sector or might have if deployed in other locations across the southwest border. While the SBInet program continues to test and evaluate potential technology applications, a major part of DHS’s effort to secure the nation’s borders from the illegal entry of aliens and contraband has been the deployment of tactical infrastructure. Along with technology and additional Border Patrol personnel, CBP relies on tactical infrastructure to help gain and maintain effective control of the border. 
Controlling, managing, and securing the border were the principal purposes of the mandate to construct fencing along the southwest border. Deploying this infrastructure has been expensive, and costs have risen during its construction. However, despite a $2.4 billion investment in this infrastructure, its contribution to effective control of the border has not been measured because CBP has not evaluated the impact of tactical infrastructure on gains or losses in the level of effective control. Given the large investment made in tactical infrastructure and to help CBP more effectively allocate its limited resources, inform future decisions about whether to build more fencing, and ensure that existing tools are adequately supported and maintained, it is important that CBP assess the impact of tactical infrastructure on effective control as it examines the costs and benefits of different methods of deterrence. To improve the quality of information available to allocate resources and determine tactical infrastructure’s contribution to effective control of the border, we recommend that the Commissioner of CBP conduct a cost-effective evaluation of the impact of tactical infrastructure on effective control of the border. We provided a draft of this report to the Department of Homeland Security for its review and comment. In an August 31, 2009, letter, the Department of Homeland Security provided written comments, which are summarized below and included in appendix III. The department stated that it agrees with our recommendation and generally concurred with our report, but said that the report does not acknowledge some of the significant factors that have contributed to program volatility and delays. With respect to our recommendation, DHS concurred and described actions recently completed, underway, and planned that it said will address our recommendation to conduct a cost-effective evaluation of the impact of tactical infrastructure on effective control of the border. 
DHS commented that its Office of Border Patrol was already committed to examining evaluation options, as evidenced by the Office of Border Patrol’s completion of analyses of alternatives to guide field personnel through the process of considering and determining what and how much infrastructure would be most effective. We discuss the analyses of alternatives in our report, as well as the fact that they are largely subjective because they were based primarily on the experience and expertise of senior Border Patrol agents. DHS also commented that it is considering using independent researchers to conduct evaluations and using modeling and simulation technology to gauge the effects of resource deployments. We believe that such efforts would be consistent with our recommendation, further complement performance management initiatives, and be useful to inform resource decision making. In its technical comments, DHS elaborated on some of the significant factors that have contributed to program volatility and delays. DHS stated that although SBI has experienced performance issues that have delayed Block 1 deployment, there have been other significant factors that have had an impact on the program schedule, such as its decision to reallocate funds to higher-priority fencing projects and external pressures, such as the need to obtain environmental clearances for tower placement. Our report included the environmental issues as a contributing factor to the delays. We have added information to our report to reflect the decision to reallocate funds. These reallocations and environmental issues notwithstanding, SBI program office officials told us that the program was not ready to use the funding that was reallocated in fiscal year 2008 due to the additional testing that needed to take place before deployment. 
We were unable to reprint DHS’s technical comments in this report because they contain sensitive information; however, we have incorporated them into the report, as appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution until 30 days after the date of this report. At that time, we will send copies of this report to the Senate and House committees and subcommittees that have authorization and oversight responsibilities for homeland security. We will also send copies of this report to the Secretary of Homeland Security, the Commissioner of U.S. Customs and Border Protection, and the Office of Management and Budget. In addition, this report will be available at no cost on the GAO Web site at http://www.gao.gov. Should your offices have any questions on matters discussed in this report, please contact me at (202) 512-8777 or at [email protected]. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. The SBI program office has been reorganized, has developed new staffing goals, and has completed a new human capital plan for fiscal years 2009 through 2010; however, meeting the plan’s revised human capital goals may be difficult. Under the new organizational structure, the tactical infrastructure program office has moved to the CBP Office of Finance’s Facilities Management and Engineering division and the SBI program office has been restructured. The restructuring of the SBI program office involved placing a greater emphasis on contractor oversight and creating offices of operational integration, business management operations, and systems engineering, in addition to the SBInet program office. The SBI program’s Executive Director’s goal is to have a total of 236 employees—181 full-time government employees and 55 contractors—in place by March 2010.
He said that the goal to have 236 employees represents the number needed to move forward with the program based on his previous experience and the need to have government employees representing key procurement competencies, meaning an increase in the ratio of government employees to contractors. For example, as of May 31, 2009, SBI program office staffing consisted of a total of 167 employees—72 government and 95 contractors, or a ratio of 1.3 contractors to each government employee. The new staffing goal calls for a ratio of 3.3 government employees to each contractor. The SBI Executive Director said that having more government employees is important because he wants more in-house expertise to oversee the contractors. According to the SBI Executive Director, increasing the ratio of government employees to contractors in the SBI program office may be difficult because of a shortage of some personnel, such as systems engineers. He said he anticipates hiring 8 government employees a month, but acknowledges that it may take between 4 and 6 months to bring new hires on board. In the meantime, he said the SBI office will continue to supplement its workforce with contract support staff. In December 2008, the second version of its Strategic Human Capital Management Plan was provisionally certified and as of June 2009, the SBI program office continued to implement the plan. The new version of the human capital plan spans 2 fiscal years, reflecting a longer-term staffing vision for SBI. The SBI program office’s plan outlines seven main goals for the office and includes planned activities to accomplish those goals, which align to federal government best practices. As of May 2009, the SBI program office had taken several steps to implement the plan. For example, the SBI program office had completed a training plan which was undergoing review and had tentatively selected 43 candidates to fill 70 vacancies. 
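The staffing ratios cited above follow from simple division. As a quick check (the headcounts are from the report; the calculation itself is ours, for illustration, and not an official CBP method):

```python
# Illustrative check of the SBI staffing ratios cited in the report.

def staff_ratio(numerator_staff: int, denominator_staff: int) -> float:
    """Ratio of one staff group to another, e.g. contractors per government employee."""
    return numerator_staff / denominator_staff

# As of May 31, 2009: 167 total staff = 72 government employees + 95 contractors.
contractors_per_gov = staff_ratio(95, 72)

# Goal for March 2010: 236 total staff = 181 government employees + 55 contractors.
gov_per_contractor = staff_ratio(181, 55)

print(round(contractors_per_gov, 1))  # 1.3 contractors per government employee
print(round(gov_per_contractor, 1))   # 3.3 government employees per contractor
```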
In addition, the program office had finalized and implemented an awards and recognition policy. However, the SBI program office had deferred completion of its succession management plan until the final quarter of fiscal year 2009. To implement and review the human capital plan, the SBI program office is partnering with the DHS Chief Human Capital Officer’s office as well as CBP’s Office of Human Resources. In a December 8, 2008, letter that accompanied CBP’s Fiscal Year 2009 SBI Expenditure Plan, the Chief Human Capital Officer noted that the SBI human capital plan provided specific initiatives to address hiring, development, and retention of employees, and described metrics to measure progress and results of these initiatives. However, the Chief Human Capital Officer also noted that human capital management challenges remain. For example, according to the letter, competition for qualified employees could present staffing challenges for SBI in achieving its goals to hire additional program managers, auditors, engineers, and environmental specialists and to shift the current ratio of contractors to federal employees by hiring more federal employees and fewer contractors. Furthermore, the still-to-be-completed succession management plan and the unfinalized training plan reflect unfinished human capital planning efforts. This gap in planning could present challenges in training employees and preparing for a longer-term SBI vision. The letter noted that the DHS Chief Human Capital Officer planned to reevaluate SBI’s human capital plan in May 2009 to ensure that SBI was on track to achieve its staffing goals. According to the SBI Executive Director, this review is ongoing through a series of meetings and data exchanges. Table 4 summarizes the seven human capital goals, and the SBI program office’s planned activities and steps taken to accomplish these activities, as of May 2009. A task order’s “ceiling” is the maximum value of the task order.
For example, the Integrated Logistics Support task order has a “ceiling” of $35.3 million; however, at this time, obligations under the task order are only $26.7 million because the project is being incrementally funded to complete work in defined periods. In addition to the contact named above, Susan Quinlan, Assistant Director, and Jeanette Espinola, Assistant Director, managed this assignment. Sylvia Bascopé, Claudia Becker, Frances Cook, Christine Davis, Katherine Davis, Jeremy Rothgerber, Erin Smith, and Meghan Squires made significant contributions to the work. U.S. Customs and Border Protection’s Secure Border Initiative Fiscal Year 2009 Expenditure Plan. GAO-09-274R. Washington, D.C.: April 30, 2009. Secure Border Initiative Fence Construction Costs. GAO-09-244R. Washington, D.C.: January 29, 2009. Secure Border Initiative: DHS Needs to Address Significant Risks in Delivering Key Technology Investment. GAO-08-1086. Washington, D.C.: September 22, 2008. Secure Border Initiative: Observations on Deployment Challenges. GAO-08-1141T. Washington, D.C.: September 10, 2008. Secure Border Initiative: DHS Needs to Address Significant Risks in Delivering Key Technology Investment. GAO-08-1148T. Washington, D.C.: September 10, 2008. Secure Border Initiative: Fiscal Year 2008 Expenditure Plan Shows Improvement, but Deficiencies Limit Congressional Oversight and DHS Accountability. GAO-08-739R. Washington, D.C.: June 26, 2008. Department of Homeland Security: Better Planning and Oversight Needed to Improve Complex Service Acquisition Outcomes. GAO-08-765T. Washington, D.C.: May 8, 2008. Department of Homeland Security: Better Planning and Assessment Needed to Improve Outcomes for Complex Service Acquisitions. GAO-08-263. Washington, D.C.: April 22, 2008. Secure Border Initiative: Observations on the Importance of Applying Lessons Learned to Future Projects. GAO-08-508T. Washington, D.C.: February 27, 2008.
Secure Border Initiative: Observations on Selected Aspects of SBInet Program Implementation. GAO-08-131T. Washington, D.C.: October 24, 2007. Secure Border Initiative: SBInet Planning and Management Improvements Needed to Control Risks. GAO-07-504T. Washington, D.C.: February 27, 2007. Secure Border Initiative: SBInet Expenditure Plan Needs to Better Support Oversight and Accountability. GAO-07-309. Washington, D.C.: February 15, 2007.
Securing the nation's borders from illegal entry of aliens and contraband, including terrorists and weapons of mass destruction, continues to be a major challenge. In November 2005, the Department of Homeland Security (DHS) announced the launch of the Secure Border Initiative (SBI), a multiyear, multibillion dollar program aimed at securing U.S. borders and reducing illegal immigration. Within DHS, U.S. Customs and Border Protection's (CBP) SBI program is responsible for developing a comprehensive border protection system using technology, known as SBInet, and tactical infrastructure--fencing, roads, and lighting. GAO was asked to provide periodic updates on the status of the program. This report addresses (1) the extent to which CBP has implemented SBInet and the impact of delays that have occurred, and (2) the extent to which CBP has deployed tactical infrastructure and assessed its results. To do this work, GAO reviewed program schedules, status reports, and previous GAO work; interviewed DHS and CBP officials, among others; and visited three SBI sites where initial technology or fencing had been deployed at the time of GAO's review. SBInet technology capabilities have not yet been deployed and delays require Border Patrol, a CBP component, to rely on existing technology for securing the border, rather than using newer technology planned to overcome the existing technology's limitations. Flaws found in testing and concerns about the impact of placing towers and access roads in environmentally sensitive locations caused delays. As of September 2006, SBInet technology deployment for the southwest border was planned to be complete by early fiscal year 2009. When GAO last reported in February 2009, the completion date had slipped to 2016. As a result of such delays, Border Patrol agents continue to use existing technology that has limitations, such as performance shortfalls and maintenance issues.
For example, on the southwest border, Border Patrol relies on existing equipment such as cameras mounted on towers that have intermittent problems, including signal loss. Border Patrol has procured and delivered some new technology to fill gaps or augment existing equipment. However, incorporating SBInet technology as soon as it is operationally available should better position CBP to identify and implement operational changes needed for securing the border. Tactical infrastructure deployments are almost complete, but their impact on border security has not been measured. As of June 2009, CBP had completed 633 of the 661 miles of fencing it committed to deploy along the southwest border. However, delays continue due mainly to challenges in acquiring the necessary property rights from landowners. While fencing costs increased over the course of construction, because all construction contracts have been awarded, costs are less likely to change. CBP plans to use $110 million in fiscal year 2009 funds to build 10 more miles of fencing, and fiscal year 2010 and 2011 funds for supporting infrastructure. CBP reported that tactical infrastructure, coupled with additional trained agents, had increased the miles of the southwest border under control, but despite a $2.4 billion investment, it cannot account separately for the impact of tactical infrastructure. CBP measures miles of tactical infrastructure constructed and has completed analyses intended to show where fencing is more appropriate than other alternatives, such as more personnel, but these analyses were based primarily on the judgment of senior Border Patrol agents. Leading practices suggest that a program evaluation would complement those efforts. Until CBP determines the contribution of tactical infrastructure to border security, it is not positioned to address the impact of this investment.
Crude oil prices are a major determinant of gasoline prices. As figure 1 shows, crude oil and gasoline prices have generally followed a similar path over the past three decades and have risen considerably over the past few years. Also, as is the case for most goods and services, changes in the demand for gasoline relative to changes in supply affect the price that consumers pay. In other words, if the demand for gasoline increases faster than the ability to supply it, the price of gasoline will most likely increase. In 2006, the United States consumed an average of 387 million gallons of gasoline per day. This consumption is 59 percent more than the 1970 average per day consumption of 243 million gallons—an average increase of about 1.6 percent per year over the last 36 years. As we have shown in a previous GAO report, most of the increased U.S. gasoline consumption over the last two decades has been due to consumer preference for larger, less fuel-efficient vehicles such as vans, pickups, and SUVs, which have become a growing part of the automotive fleet. Refining capacity and utilization rates also play a role in determining gasoline prices. Refinery capacity in the United States has not expanded at the same pace as demand for gasoline and other petroleum products in recent years. According to FTC, no new refinery still in operation has been built in the United States since 1976. As a result, existing U.S. refineries have been running at very high rates of utilization, averaging 92 percent since the 1990s, compared with an average of about 78 percent in the 1980s. Figure 2 shows that since 1970 utilization has been approaching the limits of U.S. refining capacity. Although the average capacity of existing refineries has increased, refiners have limited ability to increase production as demand increases.
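The consumption figures above can be checked with back-of-the-envelope arithmetic; the "about 1.6 percent per year" figure corresponds to a simple (not compound) average annual increase. A sketch of both calculations, using only the values stated in the report:

```python
# Back-of-the-envelope check of U.S. gasoline consumption growth, 1970-2006.

gallons_1970 = 243e6   # average gallons of gasoline consumed per day, 1970
gallons_2006 = 387e6   # average gallons of gasoline consumed per day, 2006
years = 36

total_growth = (gallons_2006 - gallons_1970) / gallons_1970
simple_avg = total_growth / years * 100                            # simple average, % per year
compound_avg = ((gallons_2006 / gallons_1970) ** (1 / years) - 1) * 100  # compound, % per year

print(f"total growth: {total_growth:.0%}")          # total growth: 59%
print(f"simple average: {simple_avg:.1f}%/yr")      # simple average: 1.6%/yr
print(f"compound average: {compound_avg:.1f}%/yr")  # compound average: 1.3%/yr
```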
While the lack of spare refinery capacity may contribute to higher refinery margins, it also increases the vulnerability of gasoline markets to short-term supply disruptions that could result in price spikes for consumers at the pump. Although imported gasoline could mitigate short-term disruptions in domestic supply, because imported gasoline comes from farther away than domestic supply, when supply disruptions occur in the United States it might take longer to get replacement gasoline than if spare refining capacity existed in the United States. This could mean that gasoline prices remain high until the imported supplies can reach the market. Further, gasoline inventories maintained by refiners or marketers of gasoline can also have an impact on prices. Like a number of other industries, the petroleum industry has adopted so-called “just-in-time” delivery processes to reduce costs, leading to a downward trend in the level of gasoline inventories in the United States. For example, in the early 1980s U.S. oil companies held stocks of gasoline equal to about 40 days of average U.S. consumption, while by 2006 these stocks had decreased to 23 days of consumption. While lower costs of holding inventories may reduce gasoline prices, lower levels of inventories may also cause prices to be more volatile because when a supply disruption occurs, there are fewer stocks of readily available gasoline to draw from, putting upward pressure on prices. Regulatory factors play a role as well.
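Days of supply is simply inventory divided by average daily consumption. A minimal sketch (the 2006 daily consumption figure is from the report; the inventory level shown is hypothetical, back-solved to reproduce the report's 23-day figure):

```python
# Days-of-supply: how long current gasoline stocks would last at average consumption.

def days_of_supply(inventory_gallons: float, daily_consumption_gallons: float) -> float:
    """Number of days the inventory covers at the given daily consumption rate."""
    return inventory_gallons / daily_consumption_gallons

daily_consumption_2006 = 387e6   # gallons per day (from the report)
inventory_2006 = 8.9e9           # gallons in stock (hypothetical, chosen to match ~23 days)

print(round(days_of_supply(inventory_2006, daily_consumption_2006)))  # 23
```

The same formula with a 40-day level illustrates the early-1980s case: lower days of supply means less cushion when a disruption hits.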
For example, in order to meet national air quality standards under the Clean Air Act, as amended, many states have adopted the use of special gasoline blends—so-called “boutique fuels.” As we reported in a recent study, there is a general consensus that higher costs associated with supplying special gasoline blends contribute to higher gasoline prices, either because of more frequent or more severe supply disruptions, or because higher costs are likely passed on, at least in part, to consumers. Furthermore, changes in regulatory standards generally make it difficult for firms to arbitrage across markets because gasoline produced according to one set of specifications may not meet another area’s specifications. Finally, market consolidation in the U.S. petroleum industry through mergers can influence the prices of gasoline. Mergers raise concerns about potential anticompetitive effects because they could result in greater market power for the merged companies, either through unilateral actions of the merged companies or coordinated interaction with other companies, potentially allowing them to increase and maintain prices above competitive levels. On the other hand, mergers could also yield cost savings and efficiency gains, which could be passed on to consumers through lower prices. Ultimately, the impact depends on whether the market power or the efficiency effects dominate. During the 1990s, the U.S. petroleum industry experienced a wave of mergers, acquisitions, and joint ventures, several of them between large oil companies that had previously competed with each other for the sale of petroleum products. More than 2,600 merger transactions occurred from 1991 to 2000 involving all segments of the U.S. petroleum industry. These mergers contributed to increases in market concentration in the refining and marketing segments of the U.S. petroleum industry.
Econometric modeling we performed of eight mergers involving major integrated oil companies that occurred in the 1990s showed that the majority resulted in small but significant increases in wholesale gasoline prices. The effects of some of the mergers were inconclusive, especially for boutique fuels sold in the East Coast and Gulf Coast regions and in California. While we have not performed modeling on mergers that occurred since 2000, and thus cannot comment on any potential effect on wholesale gasoline prices at this time, these mergers would further increase market concentration nationwide since there are now fewer oil companies. Some of the mergers involved large partially or fully vertically integrated companies that previously competed with each other. For example, in 1998 British Petroleum (BP) and Amoco merged to form BPAmoco, which later merged with ARCO, and in 1999 Exxon, the largest U.S. oil company, merged with Mobil, the second largest. Since 2000, we found that at least 8 large mergers have occurred. Some of these mergers have involved major integrated oil companies, such as the Chevron-Texaco merger, announced in 2000, to form ChevronTexaco, which went on to acquire Unocal in 2005. In addition, Phillips and Tosco announced a merger in 2001 and the resulting company, Phillips, then merged with Conoco to become ConocoPhillips. Independent oil companies have also been involved in mergers. For example, Devon Energy and Ocean Energy, two independent oil producers, announced a merger in 2003 to become the largest independent oil and gas producer in the United States at that time. Petroleum industry officials and experts we contacted cited several reasons for the industry’s wave of mergers since the 1990s, including increasing growth, diversifying assets, and reducing costs. Economic literature indicates that enhancing market power is also sometimes a motive for mergers, which could reduce competition and lead to higher prices.
Ultimately, these reasons mostly relate to companies’ desire to maximize profits or stock values. Proposed mergers in all industries are generally reviewed by federal antitrust authorities—including the Federal Trade Commission (FTC) and the Department of Justice (DOJ)—to assess the potential impact on market competition and consumer prices. According to FTC officials, FTC generally reviews proposed mergers involving the petroleum industry because of the agency’s expertise in that industry. To help determine the potential effect of a merger on market competition, FTC evaluates, among other factors, how the merger would change the level of market concentration. Conceptually, when market concentration is higher, the market is less competitive and it is more likely that firms can exert control over prices. DOJ and FTC have jointly issued guidelines that use the Herfindahl-Hirschman Index (HHI) to measure market concentration. The HHI scale is divided into three separate categories: unconcentrated, moderately concentrated, and highly concentrated. The index of market concentration in refining increased all over the country during the 1990s, and changed from moderately to highly concentrated on the East Coast. In wholesale gasoline markets, market concentration increased throughout the United States between 1994 and 2002. Specifically, 46 states and the District of Columbia had moderately or highly concentrated markets by 2002, compared to 27 in 1994. To estimate the effect of mergers on wholesale gasoline prices, we performed econometric modeling on eight mergers that occurred during the 1990s: Ultramar Diamond Shamrock (UDS)-Total, Tosco-Unocal, Marathon-Ashland, Shell-Texaco I (Equilon), Shell-Texaco II (Motiva), BP-Amoco, Exxon-Mobil, and Marathon Ashland Petroleum (MAP)-UDS. For the seven mergers that we modeled for conventional gasoline, five led to increased prices, especially the MAP-UDS and Exxon-Mobil mergers, where the increases generally exceeded 2 cents per gallon, on average.
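The concentration measure used in the DOJ/FTC merger guidelines is the Herfindahl-Hirschman Index (HHI): the sum of squared market shares, expressed in percent. The sketch below assumes the concentration thresholds in effect during the period discussed (under 1,000 unconcentrated; 1,000 to 1,800 moderately concentrated; above 1,800 highly concentrated); the market shares are hypothetical, for illustration only.

```python
# Sketch of the Herfindahl-Hirschman Index (HHI) used in merger review.

def hhi(shares_percent):
    """HHI: sum of squared market shares (shares expressed in percent)."""
    return sum(s ** 2 for s in shares_percent)

def concentration_category(index):
    """Classify an HHI value using the thresholds assumed above."""
    if index < 1000:
        return "unconcentrated"
    elif index <= 1800:
        return "moderately concentrated"
    return "highly concentrated"

# Hypothetical market of five equal firms: 5 * 20^2 = 2000.
pre_merger = [20, 20, 20, 20, 20]
# Two of them merge: 40^2 + 3 * 20^2 = 2800 -- concentration rises.
post_merger = [40, 20, 20, 20]

print(hhi(pre_merger), concentration_category(hhi(pre_merger)))    # 2000 highly concentrated
print(hhi(post_merger), concentration_category(hhi(post_merger)))  # 2800 highly concentrated
```

A merger that raises the HHI substantially in an already concentrated market is the kind of change that draws closer antitrust scrutiny.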
For the four mergers that we modeled for reformulated gasoline, two—Exxon-Mobil and Marathon-Ashland—led to increased prices of about 1 cent per gallon, on average. In contrast, the Shell-Texaco II (Motiva) merger led to price decreases of less than one-half cent per gallon, on average, for branded gasoline only. For the two mergers—Tosco-Unocal and Shell-Texaco I (Equilon)—that we modeled for gasoline used in California, known as California Air Resources Board (CARB) gasoline, only the Tosco-Unocal merger led to price increases. The increases were for branded gasoline only and were about 7 cents per gallon, on average. Our analysis shows that wholesale gasoline prices were also affected by other factors included in the econometric models, including gasoline inventories relative to demand, supply disruptions in some parts of the Midwest and the West Coast, and refinery capacity utilization rates. Our past work has shown that the price of crude oil is a major determinant of gasoline prices, along with changes in demand for gasoline. Limited refinery capacity and the lack of spare capacity due to high refinery capacity utilization rates, decreasing gasoline inventory levels, and the costs associated with, and changes in, regulatory standards also play important roles. In addition, merger activity can influence gasoline prices. During the 1990s, mergers decreased the number of oil companies and refiners, and our findings suggest that these changes in the state of competition in the industry caused wholesale prices to rise. The impact of more recent mergers is unknown. While we have not performed modeling on mergers that occurred since 2000, and thus cannot comment on any potential effect on wholesale gasoline prices at this time, these mergers would further increase market concentration nationwide since there are now fewer oil companies.
We are currently in the process of studying the effects of the mergers that have occurred since 2000 on gasoline prices as a follow-up to our previous report on mergers in the 1990s. Also, we are working on a separate study on issues related to petroleum inventories, refining, and fuel prices. Through this and other related work, we will continue to provide Congress the information needed to make informed decisions on gasoline prices that will have far-reaching effects on our economy and our way of life. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or the other Members of the Committee may have at this time. For further information about this testimony, please contact me at (202) 512-2642 ([email protected]) or Mark Gaffigan at (202) 512-3841 ([email protected]). Godwin Agbara, John Karikari, Robert Marek, and Mark Metcalfe made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Few issues generate more attention and anxiety among American consumers than the price of gasoline. The most recent upsurge in prices is no exception. According to data from the Energy Information Administration (EIA), the average retail price of regular unleaded gasoline in the United States has increased almost every week this year since January 29th and reached an all-time high of $3.10 the week of May 14th. Over this time period, the price has increased 94 cents per gallon and added about $20 billion to consumers' total gasoline bill, or about $146 for each passenger car in the United States. Given the importance of gasoline for the nation's economy, it is essential to understand the market for gasoline and the factors that influence gasoline prices. In this context, this testimony addresses the following questions: (1) what key factors affect the prices of gasoline and (2) what effects have mergers had on market concentration and wholesale gasoline prices? To address these questions, GAO relied on previous reports, including a 2004 GAO report on mergers in the U.S. petroleum industry, a 2005 GAO primer on gasoline prices, and a 2006 testimony. GAO also collected updated data from EIA. This work was performed in accordance with generally accepted government auditing standards. The price of crude oil is a major determinant of gasoline prices. However, a number of other factors also affect gasoline prices, including (1) increasing demand for gasoline; (2) refinery capacity in the United States that has not expanded at the same pace as the demand for gasoline; (3) a declining trend in gasoline inventories; and (4) regulatory factors, such as national air quality standards, that have induced some states to switch to special gasoline blends. Consolidation in the petroleum industry plays a role in determining gasoline prices as well.
For example, mergers raise concerns about potential anticompetitive effects because mergers could result in greater market power for the merged companies, potentially allowing them to increase and sustain prices above competitive levels; on the other hand, these mergers could lead to efficiency effects enabling the merged companies to lower prices. The 1990s saw a wave of merger activity in which over 2,600 mergers occurred in all segments of the U.S. petroleum industry. This wave of mergers contributed to increases in market concentration in the refining and marketing segments of the U.S. petroleum industry. Econometric modeling that GAO performed on eight of these mergers showed that, after controlling for other factors including crude oil prices, the majority resulted in wholesale gasoline price increases--generally between about 1 and 7 cents per gallon. While these price increases seem small, they are not trivial because according to the Federal Trade Commission's (FTC) standards for merger review in the petroleum industry, a 1-cent increase is considered to be significant. Additional mergers occurring since 2000 are expected to increase the level of industry concentration further, and because GAO has not yet performed modeling on these mergers, we cannot comment on any potential effect on gasoline prices at this time. However, we are currently in the process of studying the effects of the mergers that have occurred since 2000 on gasoline prices as a follow-up to our previous work on mergers in the 1990s. Also, we are working on a separate study on issues related to petroleum inventories, refining, and fuel prices.
To ensure the safety, security, and reliability of the nation’s nuclear weapons stockpile, NNSA relies on contractors who manage and operate government-owned laboratories, production plants, and a test site. The number of workers and facilities involved in the nuclear weapons program has changed since the program began in the early 1940s at various locations, such as the Los Alamos National Laboratory in New Mexico. Each facility performs a different function, all collectively working toward fulfilling NNSA’s nuclear weapons related mission. Figure 1 shows the locations of the facilities and describes their functions. Historically, confidence in the safety and reliability of the nuclear stockpile derived, in part, from underground live testing of nuclear weapons. In 1992, at the end of the Cold War, the United States scaled back its operations, ceased live testing of nuclear weapons, and adopted the Stockpile Stewardship Program as an alternative to testing. The Stockpile Stewardship Program focuses on obtaining a wide range of data through nonnuclear tests, computer modeling, experimentation, and simulation to make expert judgments about the safety, security, and reliability of the nuclear weapons. The scaling back of operations and the cessation of nuclear testing led DOE to reduce its workforce by downsizing existing staff and reducing its emphasis on recruiting. The number of defense program workers declined by about 50 percent, from a high of about 52,000 in fiscal year 1992 to about 26,000 in fiscal year 2003. The remaining workers needed to develop skills that were critical to ensure the safety, security, and reliability of the nuclear stockpile without conducting tests. Also, since the United States was no longer designing and producing nuclear weapons, the workers needed to develop new surveillance and maintenance skills to detect potential or actual defects in the aging weapons and replace components to extend the life of the warheads. 
Currently, the three laboratories report that it takes at least 3 years of specialized training and work experience—and sometimes more for unique jobs, such as safety engineers—for workers to obtain the skills needed to be considered critically skilled. According to the four production plants and the Nevada Test Site, it takes at least 2 years of specialized training and work experience for workers to gain the critical skills necessary to fulfill their mission. As of the end of fiscal year 2003, of the nearly 26,000 defense program workers, 10,186 were classified as critically skilled. Figure 2 shows the total number of defense program workers, as well as the number of workers classified as critically skilled, at each of the facilities as of the end of fiscal year 2003. NNSA and the contractors broadly categorize their workers by the type of work they do. Over 70 percent of all the critically skilled workers fall into the engineer, scientist, or technician categories. The remaining critically skilled workers perform a diverse set of critical job functions. For example, operators operate machines, systems, equipment, and plants for the purposes of producing, destroying, and storing materials and supplies. The tasks operators perform require a high degree of precision, and it often takes several years for operators to achieve proficiency. Professional administrative positions include health physicists, who develop programs to protect personnel from the effects of radiation, and security specialists, who develop, conduct, monitor, and maintain security-related programs. Crafts workers are involved in fabricating materials and equipment and constructing, altering, and maintaining buildings, bridges, pipelines, and other structures. It generally takes at least 2 years of training and education for crafts workers to obtain the hand or machine skills required. 
NNSA gathers information from the contractors and, twice each year, issues reports on certain characteristics of critically skilled workers, such as age and vacancy rates. NNSA uses this information to monitor the progress of the laboratories in meeting critical skill needs. Table 1 shows the numbers of critically skilled workers by skill area at each facility for fiscal year 2003. By the late 1990s, concerns were raised about the ability of DOE’s contractors to fulfill the goals of the Stockpile Stewardship Program because the workforce had aged, which could potentially leave gaps in knowledge as older workers retired. In response, the Congress created the Commission on Maintaining United States Nuclear Weapons Expertise, commonly known as the Chiles Commission, and mandated that it review ongoing DOE efforts to attract scientific, engineering, and technical personnel; recommend improvements and identify actions to implement these improvements where needed; and develop a plan for recruitment and retention within the DOE nuclear weapons complex. In March 1999, the Chiles Commission reported that the downsizing resulting from the change from weapons production to stockpile stewardship left a considerably smaller and older contractor workforce. Recognizing that the contractors had already lost some of their critically skilled workers, the Commission projected that large numbers of retirements over the next few years could further erode the experience and expertise at the facilities. The Commission warned that unless DOE acted quickly to retain and sharpen the expertise already available and “recruit, train, retain, and inspire an evolving nuclear workforce of great breadth, depth, and capability,” DOE could have difficulty ensuring the safety and reliability of the nation’s nuclear weapons. 
In addition, the Chiles Commission found that many workers were anxious about job security and the nation’s commitment to the nuclear weapons program in the wake of DOE’s downsizing. This anxiety fostered an unfavorable environment for recruiting and retaining highly skilled workers. In addition, the Commission predicted that recruitment and retention of highly skilled workers would become more competitive because, in general, only U.S. citizens may obtain the security clearances required to work in the nuclear weapons program and contractors faced a shrinking pool of U.S. citizens graduating with degrees in science and engineering, especially compared with the growing pool of non-U.S. citizens graduating with those degrees. Furthermore, the Commission found that contractors needed to identify their requirements for critically skilled workers early because of the time it takes to complete security background checks and for workers to gain the experience necessary through specialized or on-the-job training. As a result of its review, the Chiles Commission made 12 recommendations based on its findings at DOE and its review of industries with similar workforces. Four of the Commission’s recommendations focused on improving recruitment, training, and retention strategies. Specifically, the Commission recommended that DOE and its contractors should (1) establish and implement plans for replenishing essential critical skill workforce needs, (2) provide contractors with expanded latitude and flexibility in personnel matters, (3) expand training and career planning programs, and (4) expand the use of former nuclear weapons program employees. In response to the Chiles Commission report, Defense Programs developed a point-by-point action plan to address each of the 12 recommendations. 
Since the Chiles Commission report was issued, the contractors for NNSA’s weapons laboratories, production plants, and the Nevada Test Site have developed a variety of recruitment and retention approaches, blending them to meet their specific critical skill needs. These approaches are similar to one another and to those used by organizations with comparable workforces. NNSA has supported its contractors by clarifying the roles and responsibilities of the contractors and providing additional funding to help them recruit workers to fill critically skilled positions. NNSA contractors developed multifaceted approaches to recruiting and retaining critically skilled workers that primarily focus on hiring recent graduates from universities and colleges. These approaches include targeted recruitment activities, educational outreach programs, competitive compensation and benefits packages, and professional development and knowledge transfer programs. Despite the array of initiatives used across facilities, the contractors for the laboratories and for the production plants have used generally similar approaches for recruiting and retaining critically skilled workers. All the contractors reported that, over the past few years, they have refocused their recruiting efforts at universities and other educational institutions to improve their chances of recruiting highly qualified job candidates in an increasingly competitive job market. Whether at a laboratory or production plant, NNSA contractors have done this by establishing recruiting teams to work with the faculty in scientific and engineering departments to attract highly qualified candidates. These recruiting teams generally involve both human resources officials and technical recruiters—scientists, engineers, technicians, or stockpile stewardship program managers with knowledge about the technical needs of the facility. The teams attend recruiting fairs, professional workshops, and other similar events. 
Some of the technical recruiters said that their involvement enables them to more reliably and quickly assess the job candidates, as well as answer questions the candidates have about specific technical programs. At the laboratories, technical recruiters bring valuable contacts to the recruitment process, having already established working or professional relationships with faculty and students at various colleges and universities. Some of the contractor officials stated that these contacts enable the technical recruiters to evaluate potential job candidates before they apply for jobs. The type and depth of the relationships vary, but many have been built from joint research efforts, adjunct teaching at local universities, or similar collaborations. According to human resource officials, these relationships have proven extremely valuable in identifying and recruiting high quality students for internships, fellowships, post-doctoral appointments, and full-time positions. For example, Lawrence Livermore National Laboratory, which the University of California operates under contract, collaborates with several of the University of California college campuses. The laboratory sponsors and partially funds joint research efforts involving both faculty and students. In addition, many laboratory scientists have access to faculty and students through teaching segments of science or math classes. According to Lawrence Livermore officials, these collaborations have resulted in productive recruitment opportunities. All three laboratories also indicated that their relationships with colleges and universities have served as a key component of recruitment plans. Contractors use these relationships, as well as their reviews of past recruitment successes and comparisons of critical skill needs with course curricula, to target specific colleges and universities for recruitment. 
For example, on the basis of an initial analysis of its collaborative research efforts, Sandia narrowed its list of places to recruit to 22 universities. The laboratory further prioritized those universities according to four key variables: academic quality, research investment, past recruitment successes, and diversity of students. According to Sandia officials, this approach has allowed Sandia to effectively meet its critical skill needs. Similar to the laboratories, the production plants also use technical recruiters in their efforts to recruit critically skilled workers, focusing on recent graduates from high schools, technical schools, community colleges, and universities. While these recruitment efforts have been fruitful, many of the contractors at NNSA’s production plants and at the Nevada Test Site have also relied on recruiting mid-career workers to fill other critical skill positions because of the level of expertise that these positions need. For example, the Pantex plant has sought out mid-career workers to fill key critical skill positions, such as production technicians. Pantex has partnered with the Amarillo Community College and the Texas Workforce Center to develop a range of technical courses, from 6 weeks to 6 months long, that generate trained production technicians. Most of these trainees are currently employed elsewhere locally. While trainees cover all course costs, Pantex offers each graduating technician an interview for employment, which allows the plant to fill vacancies with the most qualified graduates. As of July 2004, Pantex officials stated that the year-old program had graduated 70 participants and that they planned to hire 24 production technicians in the spring. In addition, a manager who works for the contractor operating the Nevada Test Site said that about 15 percent of his new hires must come on board with at least 10 years’ work experience to perform the required work. 
He noted that, given the demands placed on the contractor to conduct experiments developed by the weapons laboratories and record the resulting data, he cannot always wait the 3 to 4 years required for inexperienced new hires to obtain their security clearances and gain the skills necessary to perform the work. Some production plants have addressed this issue by recruiting at professional or trade association meetings and seeking out experienced workers from other NNSA facilities that are being downsized or closed, such as the Rocky Flats production facility, located outside Denver. The NNSA contractors, primarily the laboratories, provide a wide range of programs including postdoctoral positions, internships, fellowships, and summer employment to attract and develop critically skilled workers. According to the contractors, educational programs at the facilities further the education of participants, increase awareness of the facilities as places of employment, and develop pools of potential job candidates. The contractors reported that these programs are a significant source of new hires. Contractors may offer full-time positions to program participants who already have earned degrees by the time they complete their program participation; program participants without degrees may apply for full-time positions at the facilities after graduating. The laboratories typically hire a greater proportion of graduates with Ph.D. and master’s degrees than the production facilities and the test site do—about 62 percent of the critically skilled workers at the laboratories have postgraduate degrees, in contrast with about 18 percent of the critically skilled workers at the production plants and the test site. Laboratory officials said they offer a variety of graduate-level and post-doctoral programs in an effort to recruit and retain workers with the level of education needed. 
For example, Sandia offers about 1,200 internships each year, generally evenly split between undergraduate and graduate students. Many of the interns return to the facility for successive internships, allowing them to gain additional skills and creating a pipeline of future job candidates for the laboratory. Generally, the laboratory converts about 15 percent of its interns to full-time positions each year. In addition to internships, Sandia also offers fellowships, sometimes partnering with professional societies such as the National Physical Sciences Consortium, which awards fellowships to U.S. citizens pursuing graduate study in the physical sciences. Similarly, Lawrence Livermore instituted the Lawrence Livermore National Laboratory Postdoctoral Fellowship Program in 1998. The laboratory typically receives 300 to 400 applications a year for three to five fellowships. Since the program’s inception, the laboratory has appointed 15 fellows, 6 of whom have been converted to full-time employees. The laboratory has also hired about 40 other workers who were identified from the fellowship applicant pool. In addition to the shorter term recruiting approach of offering internships and fellowships, the laboratories have also adopted a longer-term strategy for developing candidates to fill future critical skill needs. All of the laboratories offer educational outreach programs that seek to promote basic science, math, and engineering at local middle and high schools. The contractors cited their concern with statistics that show shrinking pools of U.S. students graduating with science and math degrees as a reason for these programs. The programs range from organizing informal school activities to offering specialized curricula, or academies, at local schools. 
In one program, Sandia partners with professional societies and industry to create a pool of potential technicians in photonics and optical engineering, which are considered critical skill areas at the laboratory and in certain industries and include work with lasers, fiber optics, and various optical systems. Sandia’s program begins at the middle school level by exposing students to science and math and encouraging them to pursue careers in those fields. At the high school level, the program recruits the most promising students to participate in the Photonics Academy, which offers a 4-year packaged curriculum, coursework in science and math, and the opportunity for an internship at Sandia. Students can pursue their education in photonics and optical engineering at the Albuquerque Technical Vocational Institute or the University of New Mexico. According to Sandia officials, the program has become very successful, and other entities, including the State of New Mexico, have begun to establish similar programs to promote careers in scientific and mathematical disciplines. While the laboratories offer a more extensive variety of internship and fellowship programs, the production plants have also established educational programs that help the facilities recruit critically skilled workers. For example, Pantex has partnered with Texas Tech University and other universities to promote student work programs in an effort to encourage students to pursue degrees in areas related to science and engineering, such as mathematics, physics, materials science, and nuclear engineering. Pantex officials said that 40 students participated in the student work programs in fiscal year 2003. NNSA’s contractors at the laboratories, production plants, and test site all cited the opportunity to do a variety of challenging and cutting-edge work as their most important asset in competing against industry to attract critically skilled workers. 
Many of the facilities, particularly the laboratories, perform work unrelated to the nuclear weapons program for customers other than NNSA. For example, the Los Alamos National Laboratory performs advanced research in such areas as medical technology, genetics, space sciences, and nanoscience, which involves using machines and their components to do research on a molecular level. NNSA laboratories participate in the Laboratory Directed Research and Development program, which allows them to use up to 6 percent of their budgets to fund basic research selected on their scientific and technical merits. Similarly, NNSA production plants can set aside up to 2 percent of their budgets through the Plant-Directed Research and Development program, for basic science research that is competitively awarded in areas to be determined by the facilities’ directors. Contractor officials noted that these research funds have helped to attract and retain workers. For example, the Savannah River plant is using some of its research dollars to fund unclassified hydrogen research. Savannah River officials anticipate that the opportunity to contribute to a growing area of important work will attract new workers and help retain current workers. The contractors also noted that the cutting-edge nature of the work done in the nuclear weapons program, particularly work relating to elements of the Stockpile Stewardship Program, offers many challenges in basic science research that are unique to NNSA facilities. Contractor recruiters said that they use the cutting-edge nature of this work as an incentive to attract workers to critical skill positions during recruitment events. 
For example, a manager who works for the contractor operating the Nevada Test Site stated that the opportunity to perform sophisticated measurements and capture data during stockpile stewardship program experiments, some of which have never been done before, is a major factor in attracting engineers for critical skill positions. In addition to the nature of the work, all the contractors noted the importance of being able to offer salaries and other forms of compensation and benefits to remain competitive with industry and other government entities for highly skilled workers. The contractors said they have adopted some changes to their compensation or benefits programs as a result of comparing their programs with those of industry. For example, laboratories, production facilities, and the test site have considered such options as bonuses for critically skilled new hires; bonuses to retain critically skilled workers; various forms of bonuses, such as lump-sum payments and stock options; increased base salaries in specialty areas; and awards and recognition programs. To be more competitive in attracting critically skilled workers, some contractors have also begun providing day care facilities, flexible work hours, and fitness centers to improve workers’ quality of life. All of the contractors reported having developed or enhanced their professional development programs and knowledge transfer opportunities in an effort to attract and train new workers, retrain current workers to fill certain critical skill positions, and help retain the current workforce. Most of the professional development programs provide benefits to workers to further their training or education. Some programs may provide an avenue for attending professional workshops or conferences; others may help workers earn a bachelor’s, master’s, or doctoral degree. For example, Sandia offers the One Year On Campus program as a hiring tool for prospective employees. 
This program allows the employee to pursue a nonthesis master’s degree over an 18-month period. Sandia will pay the full tuition and fees for the degree, as well as paying the participant a partial salary and full benefits during the program’s duration. Similarly, the production plants offer professional development programs. For example, Pantex pays the educational expenses for workers to earn bachelor’s or master’s degrees in areas relevant to work performed at the plant. Workers pursue their degrees through community colleges or long-distance learning opportunities, such as correspondence courses or Internet-based education. Furthermore, Pantex and Amarillo Community College have partnered with Texas Tech University, situated about 2 hours away, to offer evening or weekend courses taught by Texas Tech professors. Pantex also has a fellowship program that allows employees to take leave from work and return to school full-time, while still earning a salary, if they commit to working for Pantex for an agreed-upon time after completing the degree. According to Pantex officials, their professional development program is one key tool used to attract and retain workers. They also noted that many technicians take advantage of the opportunities offered to earn degrees in engineering. In addition, all the facilities offer knowledge transfer programs, such as training programs led by senior workers and mentoring programs. For example, the Los Alamos National Laboratory offers the Theoretical Institute for Thermonuclear and Nuclear Studies program, which Los Alamos officials describe as a 3-year, highly intensive training program taught by senior scientists. According to Los Alamos officials, completing the program is comparable to earning a Ph.D. Although not required as a condition of employment, participating in the program is highly encouraged, and managers see it as an opportunity for workers to improve their technical knowledge. 
Through Sandia’s Weapons Intern Program, individuals participate in a 1-year technically oriented work study program designed to accelerate the development of engineers and scientists in understanding stockpile stewardship tools, processes, and techniques. Most facilities also offer mentoring programs that pair new hires with senior workers to assist with on-the-job training and other aspects of working at the facility. For example, the Y-12 plant has a mentoring and job rotation program that pairs new hires with senior workers for the first 6 months of employment, during which time the new hires rotate among several job assignments. A second phase of the program identifies technical workers in the early to middle stages of their careers for rotation through assignments to further their professional development. As with NNSA’s contractors, officials we contacted at organizations with comparable workforces explained that they relied on a mixture of recruitment and retention approaches that best addresses their needs. The approaches they described paralleled those used by NNSA contractors and included focusing their recruitment efforts, providing educational outreach programs, assessing their compensation packages to ensure that they remain competitive, and providing professional development programs. Officials from these organizations described strategies for targeting universities that have helped them hire top workers. Many of the organizations described efforts to develop networks with faculty and students, some based on collaborative research programs. For example, officials at the Jet Propulsion Laboratory noted that the laboratory recruits at 52 universities, but selects 20 to 30 each year as the top priorities for their recruitment efforts, on the basis of specific criteria. The criteria include comparing the laboratory’s critical skill needs to course curricula, as well as targeting the universities with which the laboratory has a collaborative research effort. 
Many of the officials at organizations with similar workforces also mentioned relying upon educational outreach programs, internships, and fellowships as a way of addressing their recruitment and retention needs. These programs can promote interest in basic science and math to younger students at local schools, increase the awareness about employment opportunities at the organization among universities and professional societies, and serve as a means to develop staff who may eventually be hired full time at the facility. For example, an official at the Charles Stark Draper Laboratory stated that the laboratory implements several programs intended to engage students at local schools in math and science activities. One such program allows high school students to shadow employees at the laboratory. The laboratory also offers fellowships and cooperative work programs for students at various universities. Some of the participating students are offered full-time positions at the laboratory once their education is complete. Officials at each of the organizations noted the importance of being competitive in order to attract the workers with the required skills. Most of the officials cited challenging work as one of the key incentives to attract new workers. The officials also cited competitive salaries and benefits as being crucial to recruiting and retaining their workers, and several noted that they compared their compensation and benefits packages with those of competing organizations. The officials cited examples of other benefits that help in their recruiting and retention efforts, such as providing signing bonuses, paying for relocation expenses, and offering recognition and awards programs. Finally, officials at most of the organizations said they use a variety of professional development programs and cited their importance in recruiting and retention efforts. 
Similar to NNSA facilities, the other organizations have programs that pay for educational expenses for obtaining a bachelor’s or master’s degree. For example, the Applied Physics Laboratory, a division of the Johns Hopkins University, offers on-site master’s degrees through the university in six different subject areas. The information taught in these subject areas directly applies to the Applied Physics Laboratory’s research. Moreover, senior staff at the laboratory are given the opportunity to teach some of the courses. NNSA has supported the contractors’ efforts to recruit and retain their critically skilled workforce in two ways. NNSA has worked with the contractors to clarify their roles and responsibilities and has provided additional funding to help them obtain workers to fill critically skilled positions. In response to Chiles Commission concerns regarding systemic problems with DOE management and policies that hindered recruitment and retention efforts, NNSA has reorganized to streamline contract oversight. In December 2002, NNSA reorganized to move its operational oversight from its regional-based operations offices to facility-based site offices. By eliminating its operations offices and setting up site offices, NNSA removed a layer of management and placed the contracting officers, a crucial element of the oversight process, closer to the contractors for which they have oversight responsibility. Also, NNSA consolidated business and technical support functions, including support for human resources and contracting issues, into a single service center in Albuquerque, New Mexico. In addition, NNSA has worked with the contractors to clarify and act on contractor-proposed programs intended to improve their ability to recruit and retain critically skilled workers. 
Each contract references DOE Order 350.1 and contains Appendix A; together, these set forth certain contractor human resource management policies and describe, among other things, the types of programs that the contractors can charge to the contract. These contract elements lay out the flexibility afforded the contractors in making changes to compensation and benefits programs to be more competitive. Certain types of changes, such as a variable pay program, require approval by NNSA. Many of the contractors acknowledged that NNSA responded quickly to clarify and act on proposed programs. For example, in July 2000, Los Alamos reported to NNSA that it had difficulty recruiting computer scientists and that the turnover rate for these workers was twice that of other workers. Los Alamos proposed to improve its recruitment and retention efforts by increasing the base salaries of the computer scientists and offering other benefits, such as hiring bonuses and relocation expenses. In August 2000, after a series of meetings and correspondence between NNSA and Los Alamos, NNSA approved Los Alamos’ request to increase the base salaries of computer scientists and to offer them hiring bonuses. However, NNSA denied Los Alamos’ request to approve the relocation benefits. In addition to describing the types of programs that the contractors can charge to the contracts for human resources management programs, DOE Order 350.1 also requires that DOE periodically review contractor studies of how their compensation and benefits programs compare with those of other organizations to ensure the programs are reasonable. In April 2004, we reported that contractor studies regarding benefits did not cover all sites and were inconsistent from one contractor location to another, calling into question the validity and comparability of the results. NNSA officials told us they have contracted with a human resources consultant on a new benefits valuation study. 
This study compares the laboratories’ benefits against those of market competitors, using such data as pension and health care programs, vacation, and disability. NNSA officials plan to use the results of the study to assess the contractors’ benefits programs, including the reasonableness of benefits and the contractors’ requests for increases in benefits. NNSA plans to commission a second benefits valuation study on two production plants and the test site. Also, NNSA officials indicate that they are working with the contractors to develop a common methodology to assess their compensation programs. NNSA plans to use the results of this analysis to assess the contractors’ compensation programs, including the reasonableness of the programs and the contractors’ requests for increases in compensation. In recognition of the need to ensure contractors can meet their critical skill requirements, NNSA has provided additional funding for the three laboratories through the Laboratory Critical Skills Development Program. This program is designed to encourage the laboratories to identify projected gaps in critical skills and develop programs that attract potential candidates at an early age to fill those gaps. The program is also designed to be flexible, allowing the laboratories to submit proposals to NNSA for the funds. The proposals vary considerably, some targeting middle school or high school students, while others target college-age students. Some of the proposals include summer school opportunities, internships and fellowships, or more formal education programs in high school or college. NNSA provided $4.35 million for fiscal year 2004, a decrease of about $0.23 million from fiscal year 2003 funding. The program also requires that the contractor running each laboratory match NNSA’s funding on a one-to-one basis and track the success of the program. 
An official at Sandia reported that, at first, line management did not support the Critical Skills Development Program, particularly because of the matching funds requirement. However, the program has become very successful and is seen as a means of recruiting critically skilled workers at a lower cost than in the past. In fiscal year 2003, Sandia converted 20 student participants to full-time staff from such programs as College Cyber Defenders Institute, Microsystems and Engineering Sciences Applications Institute, Materials Science Research Institute, and National Collegiate Pulsed Power Research Institute. According to Sandia officials, the Laboratory Critical Skills Development Program has become so popular that, collectively, line managers fund their share of the program at 2.5 times NNSA’s one-to-one matching requirement. The efforts of NNSA’s contractors to recruit and retain a critically skilled workforce have been generally effective, according to our analysis of the contractors’ data, our review of the contractors’ workforce planning processes, and information gathered from stockpile stewardship program managers. Contractors’ data on critical skill positions indicate that the eight facilities have experienced low turnover rates and that the average age of critically skilled workers is expected to remain steady or decrease at almost all of the facilities. The data also demonstrate that most facilities have been hiring at a level sufficient to offset current and anticipated attrition. However, some facilities have limited or no data available on the number of new critically skilled workers hired because their method of organizing their critically skilled workforce, which is different from the ways the other facilities organize these employees, makes data on new hires difficult to collect. Contractors’ data show that turnover rates for critically skilled workers have been low. 
Of the 10,186 positions across the eight nuclear weapons facilities classified as critically skilled as of the end of fiscal year 2003, only 2 percent were vacant at any point during the year. From fiscal years 2000 through 2003, the turnover rate for critically skilled workers across facilities—including both retirement and non-retirement related job termination—was 3.92 percent. The highest turnover for this time period was 5.35 percent at the Nevada Test Site, and the lowest turnover was 0.67 percent at Savannah River (see fig. 3). The average age of critically skilled workers across the complex has remained relatively steady since the end of 2001, at approximately 47 years of age. According to NNSA program managers, this is the result of a steady increase in the average age at the production plants, and a counterbalancing steady decrease at the laboratories and test site. For example, among the laboratories, NNSA has projected the trend in the average age of critically skilled workers to be decreasing for Sandia and Los Alamos, and remaining flat for Lawrence Livermore through 2005. Among the production plants, NNSA is projecting that the average age will decrease at Kansas City, increase at Savannah River, and hold steady at Pantex and Y-12 through 2005. NNSA also projects the average age at the Nevada Test Site to be decreasing through 2005. While the overall average age across the nuclear weapons complex has been holding steady, NNSA program managers believe that the average age will decrease starting in 2006, when staff at or beyond retirement age who had remained at the facilities to, among other things, train newer workers in critical skill areas, begin to leave the facility. 
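The turnover figures cited above follow from a simple calculation on annual separation and headcount data. The Python sketch below illustrates that calculation with placeholder numbers—not the contractors' actual data—and assumes turnover is measured as separations divided by critically skilled headcount:

```python
# Illustrative sketch of the turnover-rate calculation described in the
# text. All figures below are placeholders, not NNSA or contractor data.

def turnover_rate(separations: int, headcount: int) -> float:
    """Annual turnover as a percentage of critically skilled headcount."""
    return 100.0 * separations / headcount

# Hypothetical facility: 1,000 critically skilled workers, 39 separations.
print(f"Turnover: {turnover_rate(39, 1000):.2f}%")  # Turnover: 3.90%
```

A complex-wide figure such as the 3.92 percent cited above would aggregate separations and headcounts across all eight facilities before dividing.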
Table 2 shows that for fiscal years 2000 through 2003, five NNSA facilities for which data were available hired, on average, 94 percent more critically skilled staff than they lost to retirements or other separations (i.e., for every critically skilled worker who separated, the facilities hired 1.94 people). These facilities adopted this hiring pattern to maintain the critically skilled workforce needed to fulfill the current mission of the Stockpile Stewardship Program, to make up for past hiring shortages, and to proactively plan for the next 10 years, when as much as 39 percent of the current workforce is or will soon be eligible to retire. The human resource managers at many of the facilities stressed the importance of bringing in new staff early enough to take advantage of knowledge transfer opportunities before more experienced workers retire. While most of the critically skilled hires have advanced degrees, these workers often require additional, job-specific training because of the specialized and often classified nature of work in the nuclear weapons complex. Furthermore, the managers pointed out that it has been taking 1 to 2 years on average for new hires to obtain security clearances. In order for the new hire to be cleared and trained to take on critical work when the experienced staff member leaves, it is useful for the new staff member to be hired 2 to 3 years ahead of the retiree's anticipated departure. To compensate for recent attrition, an aging workforce, and an increasing number of critical skill positions, the Nevada Test Site has been hiring at a greater rate than any of the other facilities in recent years: 2.71 new staff were hired for every one who left. With an average retirement age of approximately 62 and approximately 30 percent of its workforce over the age of 55, the facility has been planning ahead to replace staff who are expected to retire in the near future, according to site officials.
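The replacement-ratio arithmetic used throughout this discussion (e.g., 1.94 hires per separation, equivalently 94 percent more hires than separations) can be sketched as follows. The hire and separation counts below are invented for illustration, not the facilities' actual figures.

```python
def replacement_ratio(hires: int, separations: int) -> float:
    """Hires per separation over the same period."""
    if separations <= 0:
        raise ValueError("separations must be positive")
    return hires / separations

# Hypothetical counts for a multiyear period (not the facilities' actual data).
hires, separations = 97, 50
ratio = replacement_ratio(hires, separations)
print(f"{ratio:.2f} hires per separation")                             # 1.94
print(f"{(ratio - 1) * 100:.0f} percent more hires than separations")  # 94
```

A ratio above 1.0 indicates a facility hiring ahead of attrition, as most of the five facilities did; a ratio below 1.0, as reported later for Los Alamos, indicates a facility filling openings largely from its internal pipeline.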
They believe the rate at which the facility has been hiring will ensure that the workforce will maintain the skills necessary to complete its mission. Sandia has also been hiring aggressively over the past 4 years, bringing in almost twice the number of critical skill hires needed to replace critical skill workers who separated during that period. According to human resource officers, the laboratory had recently fallen behind in its efforts to replace critically skilled workers who retired or left the facility for other reasons. As a result, Sandia embarked on an aggressive hiring effort to replace these needed critical skill workers. Human resource managers stated that their efforts have been successful and that future hiring will more closely match the number of separating critical skill workers, assuming no significant programmatic changes. Y-12 has also hired almost twice the number of workers needed to replace those who left. Human resource managers at Y-12 stated that they decided to take this course of action so that the facility would have the necessary critically skilled workers in place prior to the anticipated retirements of experienced workers. The average retirement age for critically skilled workers at Y-12 is approximately 59 years, and 364 of its critically skilled workers as of fiscal year 2003, or about 23 percent, were over the age of 55. Furthermore, human resource managers stated that the job market for many of the critical skill positions required at Y-12 is relatively good at the moment; therefore, the facility is trying to stay ahead of the perceived future market crunch by bringing these highly skilled workers into Y-12 now. Similarly, Kansas City has recently been hiring more critically skilled workers than needed to replace those leaving: 1.78 workers for every 1 departing.
According to human resource managers, this hiring was done so the facility would be better positioned to meet its future needs in the Stockpile Life Extension Program, a component of Defense Programs focused on maintaining and refurbishing existing nuclear weapons. Because of anticipated retirements in the next decade, the contractor has estimated that it has a 5-year window in which to ensure that essential knowledge gets transferred from experienced employees to newer staff so that the facility will be able to fulfill the program's mission. Pantex has also been hiring at elevated levels, bringing in 1.65 new hires for every employee who left, in order to replace critically skilled employees departing due to retirements or other separations and to accomplish its expanding work. For example, in fiscal year 2007, the facility will need to increase its cadre of critically skilled technicians and operators by over 50 positions, which reflects a growth of about 4 percent, when it takes on new responsibilities in the Stockpile Life Extension Program. While these facilities reported that they generally have the critically skilled workers needed, they pointed out that isolated gaps exist for specific positions at some facilities and efforts are ongoing to fill these openings. For example, Y-12 mentioned the high market demand for metallurgists and fire protection engineers as posing a challenge in hiring for these positions. The Nevada Test Site, whose turnover rate has exceeded 10 percent in a single year, has had difficulty retaining the number of critically skilled craftspeople needed, in part because it is competing with the building construction industry in Las Vegas. The site officials we interviewed said that they are continuing to address these workforce challenges and believe they will be able to fill these positions in the years to come; however, they acknowledge that this will require extra effort and emphasis.
Lawrence Livermore, Los Alamos, and Savannah River generally bring in new staff to supply a pipeline of qualified workers to fill future critical skills openings, rather than assigning new hires to fill a specific current or future critical skills position. Because of this, these facilities have little or no information on the number of new critically skilled workers hired. For example, Lawrence Livermore is organized as a matrix system in which a worker’s designation as “critically skilled” changes depending on the work he or she is doing and the amount of time spent doing that work. Lawrence Livermore defines a position as a critical skill position, in part, by the amount of time the worker spends doing Defense Program work, with the minimum requirement for this designation being 25 percent of the time. At any given point, there is a core set of Defense Program positions classified as “critical skills” positions and a set of workers filling those positions. However, there are also a number of other workers with skills that would qualify them for a critical skill position, but who are presently doing work for other missions of the laboratory. While not categorized as “critically skilled” at that moment, these workers are able to fill critical skill position openings when they arise and provide depth to the pipeline of qualified critical skill workers. The arrangement is somewhat similar at Savannah River, where workers are hired into the facility’s pipeline of employees who can fill critical skill positions when needed. In fiscal year 2003, there were 698 Defense Program workers at the facility, 98 of whom were categorized as critically skilled. When any of these workers leave, Savannah River fills the opening with a worker possessing the needed critical skills, but who has been working in another area of the facility. 
Los Alamos also depends upon its internal pipeline to a great extent to fill critical skill needs, while also conducting limited hiring of new staff from outside the facility. In fiscal year 2003, Los Alamos filled 631 critical skill positions. Of these, 550 were filled via development of internal candidates, with the remaining 81 being hired from outside this internal pipeline. This distribution is partly reflected in the data showing that Los Alamos had hired fewer new staff to fill critical skill positions than had separated from the facility (i.e., 0.79 new workers hired for every 1 separating). While NNSA defines the mission of each facility, the contractor is responsible for determining what resources are needed to meet that mission, including the type and number of critically skilled workers needed. To ensure that they will be in a position to meet their future critical skill needs, all eight nuclear weapons facilities have incorporated to some degree into their planning processes the five key principles we have identified as essential to strategic workforce planning: (1) involving management and employees in developing and implementing the strategic workforce plan, (2) determining critical skill needs through workforce gap analysis, (3) developing workforce strategies to fill gaps, (4) building needed capabilities to support workforce strategies, and (5) monitoring and evaluating progress in achieving goals. (See fig. 4.) All of the 20 stockpile stewardship program managers said that they are involved with workforce planning to at least a moderate extent at their facilities. Their involvement encompasses a variety of activities, including identifying current and future critical skill needs, identifying and recruiting candidates for employment, and helping to retain current employees through training and mentoring. All of the eight NNSA contractors undertake annual reviews of their critical skill needs, with managers playing a key role in this assessment.
For example, managers at Sandia, as part of the facility’s annual Strategic Capabilities Assessment, are responsible for identifying workforce skills required and developing projections of the number and type of staff needed to meet their mission. These managers are also asked to identify any increases or decreases in future staffing levels that may result from programmatic changes and any staff who may require specific training to ensure they will be prepared to handle upcoming segments of work. The information gathered from the managers during the Strategic Capabilities Assessment process is used to develop a facilitywide hiring plan that ultimately guides Sandia’s recruiting efforts. At Los Alamos, managers prepare annual workforce reviews that identify present and future capabilities of the workforce, including critical skills. These reviews provide an opportunity for managers to identify both strengths and gaps in the capabilities needed to achieve programmatic missions. Each review considers, among other items, projections of upcoming retirements, succession planning, recruitment goals and approaches, plans for replacing the lost skills, and mentoring and training needs. While these workforce reviews are comprehensive and help map out workforce needs, they have not yet been used to develop an overall hiring plan for Los Alamos; however, human resource managers said that they plan to begin to do this by the end of fiscal year 2006. In addition, almost all of the 20 stockpile stewardship managers said that they participate in recruiting efforts to at least a moderate extent. Program managers identify the critical skill positions needing to be filled; recruit on campus; and interview prospective candidates when they visit the laboratory, production plant, or test site. For example, division managers at the Kansas City plant identify critical skill needs and process the necessary request forms to fill those needs. 
These requests are aggregated by the human resource department and used to inform the plant's college recruiting efforts where applicable. Managers are also involved in campus recruiting. For example, Y-12 sends line managers, not human resource personnel, out to campuses to recruit. This helps Y-12 develop a better relationship with the schools and faculty and provides students with an opportunity to interact with the managers with whom they may one day be working, according to the human resource officials. Managers also play a primary role in interviewing candidates who visit their facility, and many make the final selections. For example, as part of Los Alamos' just-in-time recruiting efforts, prospective candidates visit the laboratory and undergo a day of interviews with different program managers for multiple positions. At the end of the visit, the managers decide which of the candidates are best suited to fill the needs identified and make them offers. Management involvement with workforce planning also continues after the candidate is hired, through training and mentoring programs that help ensure the facility will be able to retain the critically skilled employees needed. For example, Pantex offers in-house educational and other programs to promote continuous professional development and improvement of the plant's knowledge base. New personnel at Pantex receive training to qualify as production technicians from seasoned employees at the facility, after which these newly trained technicians are assigned to work under the direction of other experienced personnel who continue with on-the-job training. In addition, some of the facilities place a premium on the value of mentoring as a means to ensure that needed knowledge transfer takes place. For example, at Savannah River, once scientists and engineers reach a certain level of management, they are required to mentor newer staff in order to be considered for any future advancement.
At Lawrence Livermore, formal and informal mentoring by experienced personnel, including retirees, is a key part of the learning process. These mentoring activities include reanalysis of past nuclear events and comparisons of the effectiveness of different experiments. According to contractor management, in some cases, these exercises have been able to produce fresh insights for the entire program. Lawrence Livermore, as well as some other facilities, relies upon a corps of retired workers to pass along knowledge to newer staff and to archive their knowledge through documentation and videotaped interviews to preserve it for future generations of workers. All NNSA facilities have analyzed their workforce, including assessing the skills of the current employees, identifying the current and future critical skills needed, and determining if and where any gaps exist. For example, associate directors at Lawrence Livermore, with the assistance of human resource staff, annually evaluate employee capabilities, look at what resources are needed to support existing and anticipated future programs, perform a gap analysis, and develop projections. In doing this analysis, the associate directors consider the approximate attrition based on recent trends, the number of employees the laboratory would like to have as backup to meet anticipated critical skill needs for the next 3 years, anticipated future needs based on likely program changes and budget projections, and the training and mentoring needed for staff to be prepared to fill anticipated critical skill needs. Program and human resource managers at Pantex conduct detailed workforce planning annually to ensure that the needed skills are available at the right time as workload and demographic changes occur. As part of this planning process, the managers analyze positions to ensure that they are properly designated as critical or noncritical. 
The workforce planning team at Pantex then works with managers across the organization to determine the number of critically skilled workers needed in each area to meet the projected workload over several years. In doing so, the planning team considers new work and skills that may be required in the future. The workforce planners obtain data on the skills of the current employee workforce, review production estimates to determine what workforce skills will be needed to meet production goals, and assess the degree to which the skills required by the future workload line up with the current baseline of critically skilled employees. This analysis is rolled up and reported in Pantex's annual “Critical Skills Program Status Report.” According to human resource managers, this report serves as the basis for planning for, filling, and maintaining critical skills in the future. At Y-12, workforce planning can be broken down into two categories based on the planning horizon: long range or near term. Long-range workforce planning includes developing a 10-year comprehensive site plan, which is revised annually and contains a brief discussion of the workforce needs. Long-range planning also involves developing a 10-year baseline plan, which breaks down in more detail the information contained in the comprehensive site plan, including the specific number of workers needed for each position for the next 10 years. Finally, each suborganization within Y-12 prepares workforce planning reports that ultimately get rolled up to form the facilitywide workforce plan. Near-term planning involves creating a workforce plan that includes the production schedule for the next 3 years, including the staff levels needed to meet production goals. These workforce plans are reviewed three to four times annually to ensure that adequate resources are available and that workforce capacity is appropriate to meet near-term workforce needs.
Management conducts a gap analysis on these short-term estimates and determines what skills are needed. In addition, the Y-12 plant conducted a reorganization process over the past year that has compared the skills of the current workforce with the future needs of the facility and moved people around accordingly. Once the reorganization is complete, the plant will again reassess the workforce and skills needed to identify any remaining gaps or shortages in workforce skills. As gaps between the skills of the current workforce and the skills needed to fulfill the mission are identified, each facility has developed strategies to address these specific gaps. Because each facility is unique in mission, geographic location, and required skill sets, there is no standard approach that the facilities can use to address their gaps. Rather, each has developed strategies that help them identify, recruit, and retain the critically skilled employees needed at each facility. For example, all three laboratories, as part of the Laboratory Critical Skills Development Program, have initiated a series of projects or institutes that provide training and research experience to precollege, undergraduate, and graduate students in critical skill fields relevant to the laboratory. One such institute in use by all three laboratories—the College Cyber Defenders Institute—is focused on addressing the national shortage of trained people and lack of formal university programs that prepare students for a career in cyber security. Other programs are focused more specifically on the needs of a particular laboratory. For example, the Computer System Administrator Development Initiative at Los Alamos is designed to recruit students who are enrolled in area colleges and universities and who want to develop their skills as a computer systems administrator, a critical resource need at the laboratory. 
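The core of a workforce gap analysis like those described for Pantex and Y-12, comparing the current baseline of critically skilled employees against the headcount the projected workload requires, can be sketched minimally as follows. The skill names echo positions mentioned elsewhere in this report, but the headcounts are invented.

```python
from collections import Counter

# Hypothetical baseline and projection; the skill names echo positions
# mentioned in this report, but the headcounts are invented.
current = Counter({"metallurgist": 4, "fire protection engineer": 2,
                   "production technician": 120})
needed = Counter({"metallurgist": 6, "fire protection engineer": 4,
                  "production technician": 118, "nuclear engineer": 3})

# Positive differences are shortages to recruit or train for; skills where
# the current baseline already covers the projected workload are dropped.
gaps = {skill: needed[skill] - current.get(skill, 0)
        for skill in needed if needed[skill] > current.get(skill, 0)}
print(gaps)  # {'metallurgist': 2, 'fire protection engineer': 2, 'nuclear engineer': 3}
```

In practice each facility layers projected attrition, programmatic changes, and training lead times onto this comparison, but the output is the same kind of shortage list that drives the recruiting and development strategies described next.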
The production plants have also implemented strategies that directly address some critical skill needs identified by their workforce gap analyses. For example, to help meet its need for production technicians and to ensure that candidates under consideration for this position have the basic technical skills that will be transferable to various weapons programs once they are hired, Pantex requires that each candidate successfully complete a Pantex Job Skills Development Program available through the local community college. A similar strategy is under way at Y-12. To help address its need for nuclear engineers, Y-12 has been working closely with South Carolina State University to develop a nuclear engineering program that will enable the university to supply Y-12 with needed graduates in this critical area. Y-12 managers currently serve on the university’s advisory board and visit the campus several times each year to work closely with faculty in developing the program. The Nevada Test Site finds itself in a unique position regarding its workforce planning activities. The site’s primary mission was to be able to conduct underground nuclear tests to ensure the reliability of the nuclear stockpile. However, with the 1992 moratorium on testing and a Presidential directive that the United States must be able to resume nuclear testing with as little as 18 months’ notice, the Nevada Test Site is faced with the unique challenge of maintaining testing skills without being able to conduct actual nuclear tests. To help maintain the critical skills required for testing, the site has a workforce of engineers to support the three weapons laboratories in research and development engineering on advanced diagnostic tests and in their subcritical tests (tests that do not produce a nuclear reaction) associated with the Stockpile Stewardship Program and other special projects. 
This allows the Nevada Test Site to maintain critical diagnostic skills related to testing and evaluate the strengths and weaknesses in the testing methods it will need to apply in the event that the facility begins testing again. In addition to on-the-job training to maintain diagnostic skills among engineers, the Nevada Test Site is developing a set of training classes geared toward junior employees. These classes will be led by the engineers who used to develop diagnostics at the site for underground testing and will help keep critical knowledge about testing from being lost.

Each of the nuclear weapons facilities has increased its capabilities to support its critical skill workforce planning strategies by augmenting administrative support to implement these strategies, using technological planning tools, and expanding the use of educational and financial incentives that help recruit and retain critically skilled staff. For example, Sandia has established a Nuclear Weapons Strategic Management Unit to manage the work, products, processes, and people needed to accomplish the laboratory's mission. Similarly, the Planning, Scheduling, and Integration division at Pantex oversees all workforce planning tasks and has created a detailed flowchart defining how workforce planning takes place at the facility and at what stages different stakeholders, such as division managers or human resource officials, become involved to ensure that the critical skill needs are being met. Alternatively, Los Alamos is piloting a program in one of its divisions that is designed to enhance the retention of critically skilled staff. As part of this program, a human resource manager is deployed full time to a division at the laboratory to help ensure that the students participating in the internship or co-operative program are given a high quality experience, increasing the likelihood that they will want to stay on at the laboratory full time after graduating.
This specialized human resource manager also helps ensure that the student will be a good fit for the laboratory in the long run and worthy of continued investment and training in critical skill areas. Some facilities have also used technology to better link critical skill planning to hiring activity. For example, Sandia developed a Web-based application called the HR Graphalyzer that enables the human resource personnel to analyze human resource data graphically. One component of this application, the Enhanced Staff Planning Tool, helps the divisions design a hiring program that more accurately represents facility needs, factoring in the specific organization’s current headcount, current age and years of service distribution, and history of internal employee movement. The application is able to project separations (both retirement and nonretirement) on the basis of historical trends using the organization’s age and years of service distributions. It also equalizes hiring over 2 years in order to avoid swings in recruiting and hiring efforts from year to year. One of the main benefits of this system is that it helps the human resource department to more accurately identify the number of workers with specific skills who are needed. In addition, some facilities use human resource information databases to help them better manage the flow of the critical skills workforce. For example, Pantex uses a database that maintains information on individual skills of the current staff and whether the individual currently fills a critical skill position. This database is updated annually and critical skill positions are reviewed in light of the current mission and workload, helping the facility ensure that it is meeting those workforce and mission needs. Contractors have also used technology to assist with their recruiting efforts. 
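The planning logic attributed above to tools like Sandia's Enhanced Staff Planning Tool, projecting separations from age and years-of-service distributions and equalizing hiring over 2 years, can be illustrated with a minimal sketch. The age bands, headcounts, and attrition rates below are invented assumptions; the actual tool's data and model are not described in this report.

```python
def project_separations(headcount: dict, attrition: dict) -> float:
    """Expected separations in one year, from per-age-band attrition rates."""
    return sum(headcount[band] * attrition[band] for band in headcount)

# Hypothetical organization: headcount and historical attrition by age band.
headcount = {"<35": 40, "35-49": 70, "50+": 50}
rates_y1 = {"<35": 0.03, "35-49": 0.02, "50+": 0.10}
rates_y2 = {"<35": 0.03, "35-49": 0.02, "50+": 0.16}  # retirement wave expected

y1 = project_separations(headcount, rates_y1)  # about 7.6 separations
y2 = project_separations(headcount, rates_y2)  # about 10.6 separations
# Equalize hiring over the 2 years to avoid swings in recruiting effort.
hires_per_year = (y1 + y2) / 2                 # about 9.1 hires each year
```

Smoothing the hiring target this way trades a slight staffing mismatch in each individual year for a steadier recruiting workload, which is the benefit the text describes.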
For example, Kansas City implemented a Web-based resume-sourcing tool that allows the facility to post detailed job descriptions of open positions on the Internet, increasing exposure to prospective candidates. Some facilities have offered incentives to help recruit and retain critically skilled staff, including offering educational programs and providing workplace flexibilities. For example, Kansas City's Technical Fellowship Program is an internal program designed to train and develop associates for future critical skill positions. The facility has also identified a number of workplace flexibilities that have helped it remain competitive in attracting new staff, including signing bonuses, retention bonuses, and employee referral fees. Furthermore, the Kansas City plant has provided housing stipends to student interns who were relocating to the Kansas City area. The Y-12 facility has adopted similar strategies, focusing on employee development through companywide training, education, and job rotation programs that give new hires wider exposure and training across different aspects of the facility. Y-12 also recently modified its relocation policy and now provides new hires with an up-front sum of money to help with moving expenses. The intent of this change was to help the facility stay competitive with other industries in the area. Similarly, Lawrence Livermore has tried to stay abreast of market trends and has offered incentives to compete successfully for employees, including hiring bonuses, employee referral bonuses, relocation packages, and benefits and compensation packages. Furthermore, it has instituted work-life programs such as flexible schedules, expanded day care facilities, and other on-site services, including dry cleaning and a fitness center, to keep Lawrence Livermore an appealing place to work.

NNSA has continually monitored and evaluated contractor progress through annual, semiannual, and monthly reviews.
As part of their annual Performance Evaluation Plans in fiscal year 2004, each NNSA facility was responsible for meeting one or more performance measures related to critical skills management. Before the start of each new fiscal year, NNSA and the contractor negotiate these plans, which establish the expectations for the coming fiscal year and serve as the basis for evaluating how well the contractor has met the goals of the contract. At the end of the fiscal year, NNSA prepares a Performance Evaluation Review, evaluating the contractor's performance on the objectives set out in the Performance Evaluation Plan. This final overall assessment of how well the performance evaluation measures were met, including those dealing with the critical skill needs at the facility, provides the basis for any financial awards given to the contractor. (See app. II for a summary of the performance measures related to critical skills for each facility.) Semiannually, each facility reports to NNSA headquarters on a set of predefined metrics related to recruiting and retention. Among the metrics used to assess performance are the number of job offers and acceptances for critical skills positions, age statistics for the current critical skills population, and percentage of critical skill positions vacant. Retention indicators include attrition rates of critical skill employees as compared with other Defense Program employees and total number of departures of critical skills employees. According to NNSA Office of Defense Program officials, two of these performance metrics (the average age of the critical skills workforce and the percentage of critical skill vacancies) are good indicators of the overall success of the contractors in recruiting, developing, and retaining critical skills employees. On a monthly or quarterly basis, the contractors report metrics related to meeting critical skill needs to NNSA representatives.
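The two metrics NNSA officials single out, the percentage of critical skill positions vacant and the average age of the critical skills population, are straightforward to compute from roster data. The roster below is invented, with values chosen to echo figures reported earlier in this section (about 2 percent vacant, average age near 47).

```python
from statistics import mean

# Hypothetical roster data for one facility (not actual NNSA figures).
total_positions = 500                              # critical skill positions
vacant_positions = 10
incumbent_ages = [34, 41, 47, 52, 58, 61, 39, 45]  # sample of incumbents

pct_vacant = 100 * vacant_positions / total_positions
avg_age = mean(incumbent_ages)
print(f"{pct_vacant:.1f} percent vacant, average age {avg_age:.1f}")
# 2.0 percent vacant, average age 47.1
```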
NNSA site managers conduct monthly reviews with the contractor managers to check progress on meeting the performance objectives that have been laid out for the contractors and briefly discuss the status of critical skill positions. During these meetings, the contractors are also free to discuss any other issues adversely affecting their ability to reach their critical skills goals, such as concerns about clearance delays or salary competitiveness. While many of the metrics tracked each month may be based on the actual performance measures established under the contract, others may be tracked because of their close connection to maintaining a critically skilled workforce. For example, at Y-12, reports are issued monthly not only on an established set of performance metrics for hiring, retention, and turnover, but also on college recruiting plans, career fair activities, and information regarding involuntary reductions-in-force at other facilities and how these reductions affect hiring at Y-12. In addition to monitoring contractors' overall progress in meeting critical skill performance measures, some facilities also track progress on specific programs designed to help recruit and retain critically skilled workers. For example, all the proposals for projects initiated through the Laboratory Critical Skills Development Program include milestones, goals, objectives, success measures, and evaluation criteria. Follow-on funding for these projects is dependent on how well these criteria are being met. In addition, NNSA monitors how well money spent as part of the Plant Directed Research and Development program is helping retain critically skilled staff at production plants.

Of the 20 stockpile stewardship program managers we interviewed at the eight NNSA facilities, 15 believe that their critically skilled workforce is currently sufficient to fulfill their facility's mission.
Although this belief was widely held, the factors these managers cited as helping their facility achieve a sufficient critical skills workforce varied. Among the factors most commonly cited were the strength of the recruiting programs, the quality of work performed at the facility, and their facility's commitment to training and development. For example, of the managers who believe their critically skilled workforce is currently sufficient, some commented that the recruiting programs are effective in attracting highly qualified new and experienced workers to their facilities. They also mentioned that their facility performs work that is technically challenging, interesting, and of national importance, making it an appealing place to work. These managers also said that their facilities have made a commitment to training and development, ensuring the transfer of knowledge from experienced employees to new workers and allowing many staff to be trained in areas of critical importance to the mission. Some of the five managers who felt the current critical skills workforce was insufficient to meet their facilities' missions expressed concern that the current pool of qualified, technically trained candidates is inadequate to meet the facility's specific needs. At some sites, the candidates with specific skills and training are simply not readily available in the market, and managers commented that there are few students entering professions applicable to certain critical skill needs.

Stockpile stewardship managers we interviewed were less confident overall in their facility's ability to fulfill its future critical skills needs; however, the majority still felt the critical skills workforce would be sufficient over the next 10 years to fulfill the facility's mission. Twelve of the 20 program managers we spoke with believe their critical skills workforce will be sufficient in the future, 3 believe it will be insufficient, and 5 were unsure.
The 12 managers who felt their critical skills workforce would be sufficient over the next 10 years cited a number of contributing factors, including the exciting mission of the facility, the strength of the recruiting programs, and a focus on training and developing employees. In addition, a number of these managers also mentioned the facility’s workforce planning efforts as essential. For example, one manager said her facility’s efforts enabled the managers to identify, understand, and plan for future critical skill needs. Of the three managers who felt that the critical skills workforce would be insufficient to fulfill the mission over the next 10 years, two mentioned concerns about the budget and the likelihood that a substantial number of critically skilled workers would retire in the next 10 years. These managers said that budget shortfalls would make workforce planning difficult. Budget limitations can affect the number of staff who can be employed at any one time, limiting the amount of knowledge transfer that can occur between experienced staff and those new to the facility, both of whom may need to hold the same position while this training and development takes place. In addition, because it can take as long as 5 years for new staff members to receive clearances and be fully trained on the critical elements of their jobs, the impending retirements could disrupt the transfer of critical knowledge if these experienced workers retire before new staff are brought in and trained in these skills. The five remaining managers expressed uncertainty about whether their facility would be able to maintain the critical skills workforce needed in the future. Most expressed guarded optimism that their facility would be able to find the needed skilled workers; however, they mentioned a number of factors that could still hamper their ability to do so.
In addition to the budget uncertainty and impending retirements mentioned by other managers with concerns about future workforce preparedness, some of these managers cited uncertainty about their facility’s future mission and a shrinking pool of qualified candidates to fill future openings. One manager commented that shifts in the mission, such as a reduction in laboratory-directed research and development, could limit the amount of exciting work being performed at the facilities, making employment there less appealing to potential candidates and, consequently, planning for future skill needs more difficult. The five managers also expressed concern about the availability of technically trained workers to fill future critical skills positions. According to one manager, competition remains high for certain graduates with particular training and educational backgrounds, and whether this competition will continue in the future is uncertain.

NNSA contractors face ongoing challenges in recruiting and retaining a critically skilled workforce and are using a number of strategies to mitigate them. Additionally, organizations with comparable workforces are facing similar challenges and are using similar strategies to mitigate those challenges. Beyond these challenges, NNSA contractors face future uncertainties that could affect their ability to recruit and retain a critically skilled workforce in the years ahead. NNSA contractors most commonly cited four challenges to recruiting and retaining a critically skilled workforce: the amount of time it takes to obtain security clearances, a declining pool of potential employees, the undesirability of certain facilities because of the area’s high cost of living, and the undesirability of certain facilities because they are in locations that many potential new hires consider unattractive.
Table 3 shows the facilities that cited each of the four challenges and provides sample strategies that some NNSA contractors are using to mitigate them.

Regarding the time it takes to obtain security clearances, most of NNSA’s contractors said Q-level security clearances—the level needed for most critical skills positions—have been taking from 1 to 2 years to process, delaying new employees’ ability to obtain on-the-job training for the classified work for which they were hired. To obtain new employee clearances, contractors submit background paperwork to the Albuquerque Service Center for processing. The Service Center uses the Office of Personnel Management (OPM) to conduct background investigations. The Service Center reviews the investigation files provided by OPM, conducts follow-up interviews when necessary, and makes the final decisions on clearances. The manager of NNSA’s Personnel Security Division at the Albuquerque Service Center, who has responsibility for security clearance issues, said that Q-level clearances have been taking, on average, just under 1 year; however, he also acknowledged that in some cases these clearances are taking as long as 2 years to complete. Because an employee without a clearance cannot take part in classified work, the extended clearance reviews have delayed the start of the on-the-job classified training that employees need to be designated as fully critically skilled. Without this training, employees cannot begin doing the work for which they were hired. According to contractor human resource officials and stockpile stewardship managers, employees can become frustrated and discouraged in the face of these delays. While NNSA contractors are not able to directly address the time it takes to process security clearances because responsibility for the investigation and final determination lies elsewhere, they have developed strategies to mitigate the effects of these delays.
Several contractors stated that they try to reduce the negative effects of waiting for clearances by providing new employees with meaningful, unclassified work. For example, Y-12 offers a program in which new employees waiting for a clearance can rotate through areas of the plant that do not require a clearance, learning about different departments that relate to the job they were hired to perform. The Savannah River plant plans to place new employees waiting for a security clearance in the hydrogen technology laboratory being developed at a nearby site. Although the laboratory was not built specifically to address clearance delays, working there will benefit staff waiting for clearances by (1) providing an avenue for them to gain experience in areas relevant to the work they will perform once the clearance is obtained and (2) allowing them to work on cutting-edge projects in an unclassified setting. This opportunity should help reduce some of the frustration newly hired employees and the facility management associate with clearance delays. Some facilities have also taken advantage of downsizing at such DOE facilities as Rocky Flats. By hiring critically skilled employees who have Q-level clearances, the facilities can both avoid the delays associated with the security clearance process and retain critical skills already within the nuclear weapons complex. While hiring these experienced NNSA contractor employees has been useful, their numbers will decrease when Rocky Flats is closed and the downsizing of other NNSA facilities is completed.

In addition to the amount of time it takes to obtain security clearances, most of NNSA’s contractors also face the ongoing challenge of recruiting from a declining pool of technically skilled potential employees. First, contractors said this pool has shrunk because fewer students with U.S. citizenship are seeking advanced degrees or technical training in areas such as science and engineering.
Because most critical skill positions require a Q-level clearance and U.S. citizenship is a primary consideration for such a clearance, NNSA contractors must locate U.S. citizens with the needed critical skills. Second, some contractors said they face a lack of qualified technicians in specific skill areas. For example, Pantex mentioned having difficulty finding enough qualified production technicians, and Los Alamos cited difficulties in finding skilled technicians who have nuclear weapons manufacturing experience or who are trained in using radiological gloveboxes—sealed containers that feature built-in gloves for handling radiological material. To address the declining pool of technically skilled workers, most of the facilities have developed programs to attract U.S. citizen students earlier in their high school and undergraduate years and encourage them to pursue careers in technical fields. For example, under the Laboratory Critical Skills Development Program, the three weapons laboratories have established programs involving local high schools and universities that encourage students to pursue college and advanced degrees in technical fields and provide a pipeline of workers for future job opportunities at the laboratory. Additionally, some contractors have focused on improving their relationships with universities as a way to address the challenge of a declining pool. Sandia, for example, established its Campus Executive Program to develop a more coordinated and comprehensive recruiting effort at targeted universities. The program’s recruiting teams, composed of researchers, recruiters, program alumni, and affiliated faculty, use existing relationships at colleges and universities to attract technically trained U.S. citizens. Contractors have also found a source of critically skilled employees in other NNSA programs operating at the same site. 
For example, the Defense Program division of Savannah River has been able to hire critically skilled employees from the facility’s Environmental Management segment as that segment is shut down. The advantage of hiring these staff is that they are already trained in critical skill areas relevant to Defense Program work and have their security clearances, which allows them to begin work immediately. Finally, some contractors have developed their own training programs to meet their facilities’ specific skills needs as a way of mitigating the shortage of qualified technicians in needed skill areas. For example, Pantex established its Job Skills Development Program in partnership with Amarillo Community College and the Texas Workforce Center to help meet the facility’s need for production technicians. The program trains and qualifies a local workforce of production technicians from which Pantex can recruit potential employees. Like Pantex, Los Alamos has developed a program to address one of its specific needs. The Glovebox Technician Pipeline Program develops college-educated technicians with basic skills in radiological glovebox technology. Los Alamos began the program in 2003 and expects it to produce a small pool of technically skilled graduates available for full-time employment.

The third ongoing challenge cited by some NNSA contractors is that the high cost of living in the area where their facility is located makes recruiting and retaining critically skilled employees more difficult. This was the case with Lawrence Livermore, which is located in the San Francisco Bay area. To address cost of living issues, contractors have used such employee incentives as offering signing and relocation bonuses to assist with relocation expenses. Some contractors have also offered potential employees support by helping them find housing or learn about the community.
The fourth ongoing challenge cited by NNSA contractors is that their location hinders recruiting efforts because it is perceived as being an unattractive area to live in or as being remote. For example, contractor officials at the Pantex plant in Amarillo, Texas, report that they have trouble recruiting and retaining critically skilled workers. Pantex officials indicated that some workers are attracted to larger urban environments. To address concerns about locations that may be perceived as being unattractive, some facilities have also offered signing and relocation bonuses. Other facilities focus on recruiting individuals who are from the local area. For example, Y-12 targets universities in the surrounding geographic area because candidates are more likely to accept positions near where they live or attend school. Similarly, Pantex focuses on universities in west Texas, Oklahoma, and New Mexico to recruit engineering candidates.

As NNSA contractors have developed strategies to mitigate recruiting and retention challenges, they have used a variety of methods to share those strategies among themselves. One method for sharing information is to use the human resource specialists at the Albuquerque Service Center as a conduit. NNSA recently consolidated most of its contractor human resource staff, who were previously located at each of the NNSA facilities, in a central location at the Albuquerque Service Center. Six of the eight nuclear weapons complex facilities currently have human resource specialists located at the service center. Because the specialists are now centrally located, they are able to obtain a broader perspective by taking advantage of each other’s knowledge about the activities of different contractors. For example, Los Alamos adopted a tool from Lawrence Livermore—a “deliverables matrix”—that is used to help track the reports it submits periodically to NNSA on a number of subjects, including critical skill management.
The Los Alamos contractor learned of this tool from its human resource specialist, who learned of it from his Lawrence Livermore counterpart at the service center. In addition to using the service center as a conduit for sharing strategies, NNSA contractors are using a variety of other avenues. For example, contractors exchange ideas at periodic meetings such as the annual compensation managers meeting and DOE’s annual human resource conference, which features sessions dedicated to discussing critical skill recruitment and retention and sharing best practices. Moreover, NNSA’s plants participate in quarterly meetings, which allow them to discuss lessons learned in recruiting and retaining critically skilled employees. Partnerships among the facilities also promote strategy sharing. For example, Los Alamos and Lawrence Livermore coordinate their recruiting efforts under the Recruitment Coordination Cost Efficiency Initiative. In addition, the four production plants have developed a Senior Scientist Network for sharing information on nuclear weapons complex recruitment and retention problems and strategies. Additionally, Savannah River and Los Alamos engage in an employee exchange program that allows them to temporarily exchange staff with specific knowledge about tritium, a radioactive isotope of hydrogen that both facilities work with.

Human resource officials from organizations with comparable workforces identified challenges similar to those faced by NNSA contractors in recruiting and retaining a critically skilled workforce. For example, most human resource officials from these organizations cited the amount of time it takes newly hired staff to obtain security clearances as being a challenge.
These officials said that security clearances for new employees have been taking from 11 to 18 months, but they believe they have been able to lessen the impact of these delays by, for example, providing new employees with meaningful, unclassified work to do while awaiting clearances. Additionally, one of these organizations—the Applied Physics Laboratory—addresses the problems associated with clearance delays by seeking staff who already have clearances. Much as NNSA seeks already cleared and trained individuals from nuclear weapons complex facilities that are closing or downsizing, the Applied Physics Laboratory targets Web sites and job fairs that specialize in attracting individuals who already have security clearances.

Most human resource officials from organizations with comparable workforces also cited the declining pool of technically skilled workers as a challenge. Like NNSA, these organizations said they have a smaller group of candidates from which to recruit because there are fewer technically trained U.S. citizens available in the marketplace and fewer U.S. citizens working toward graduate degrees in engineering and science. To mitigate this challenge, most of these organizations have developed programs, such as internships, to encourage students to pursue careers in science and engineering. Some of these programs are designed to expose high school students to the opportunities that exist in these technical fields, while others are intended to encourage college students to pursue graduate degrees in these areas. In one such program, offered by the Naval Research Laboratory, students spend 8 weeks working full-time with scientists and engineers actively engaging in research and planning, participating in special program seminars, and writing and presenting a final research paper. Similarly, the Charles Stark Draper Laboratory offers a program that allows high school students to shadow its employees.
Finally, human resource managers at two organizations with comparable workforces cited the challenge of recruiting staff to work in an area that has a high cost of living, similar to the difficulty expressed by NNSA contractors with staffing facilities in the San Francisco Bay area. Some organizations with comparable workforces have implemented strategies similar to those used by NNSA contractors to mitigate this challenge as well. For example, Exelon offers signing bonuses to help offset the cost of relocation, and the Charles Stark Draper Laboratory offers its new employees support in finding a neighborhood in which to live and helps employees’ spouses find work.

In addition to facing ongoing challenges, NNSA contractors face a number of uncertainties whose outcomes could affect their ability to maintain a critically skilled workforce into the future. These outcomes hinge on events and decisions over which NNSA contractors generally have little control, making the contractors less able to develop strategies for addressing these uncertainties. For example, some NNSA contractors believe that they will face increased competition for science and engineering candidates, as well as other critically skilled employees, if the job market improves, since these workers will have more employment choices. Such increased competition would hinder the contractors’ ability to recruit and retain the critically skilled workforce needed to fulfill the facility’s mission, according to some contractors. Programs that give college students early exposure to NNSA facilities, such as the Laboratory Critical Skills Development Program, will help increase the chance that future candidates will be aware of and consider employment opportunities within the nuclear weapons complex.
Some NNSA contractors also stated that their ability to maintain their critically skilled workforce into the future could be affected by budget and funding limitations, which could hinder workforce planning. These contractors said that the budget process, specifically the timing of the budget cycle and the uncertainty of budget reauthorizations, makes it difficult to bring in new job candidates when they are needed. The Nevada Test Site, for example, said budget shortfalls in its Experimentation Support division resulted in the termination of seven or eight employees in 2003, making it difficult for the division to maintain the workforce needed. The contractors stated that budget uncertainty also hinders their ability to bring on new staff in time to be trained by, and gather essential knowledge from, experienced staff who are near retirement. Some facilities have implemented knowledge retention initiatives designed to archive weapons data by, among other means, interviewing experienced weapons subject matter experts, to mitigate the effects of retirement timing.

In addition, some NNSA contractors expressed concern about the number of their employees who are, or will soon be, eligible for retirement. If a large number of these employees choose to retire at one time, the facilities may not be able to ensure that critical knowledge is passed along to the newest generation of nuclear weapons workers. In general, the contractors felt that they were in a position to overcome the challenge imposed by anticipated future retirements, but some indicated that the uncertain outcome of future events could alter the impact of these retirements. According to contractor human resource officials, one issue that could influence the pace of future retirements is the contract rebidding process currently underway at Los Alamos.
DOE announced that it will place the Los Alamos contract up for bid in 2005 and the Lawrence Livermore contracts up for bid some time after September 30, 2007, for the first time since their establishment. The current contractor for both of these facilities is the University of California. One concern about rebidding the contract is that it could be awarded to a new contractor that may provide a less attractive pension benefit package or may not offer some of the education advantages workers receive as employees of the University of California. These concerns could result in multiple early retirements and affect a facility’s ability to perform its mission if the contract changes hands. NNSA may have mitigated some of these concerns when it issued an acquisition plan in September 2004 that required potential bidders on the contract to offer current workers at Los Alamos the same level of pension benefits as the current contractor. Furthermore, a different contractor may want to reassess the recruiting and retention strategies that will be used, such as the university recruiting program or fellowships offered, to ensure that they reflect any affiliations the new contractor may have. This could affect the contractor’s access to particular skill sets. Until this process is completed, it will be difficult to determine how Los Alamos’s critical skills capabilities will be affected or whether this same issue will arise with future contract rebids.

Finally, some NNSA contractors expressed concern that unexpected mission changes could affect their ability to recruit and retain individuals with needed critical skills. These facilities stated that unexpected changes in their long-term missions could make it difficult to plan for future skill needs and prevent them from obtaining the right mix of critical skills during recruiting.
For example, one manager said it was critical for his facility to be responsive to programmatic changes, but that maintaining this responsiveness requires a mix of critically skilled workers who can meet the needs of the current mission as well as the needs created by a shift in the mission. A manager at another facility said he finds it difficult to plan for future skill needs because the NNSA mission for his facility is not stable in the short term. Furthermore, in 2001, President Bush announced his intent to significantly reduce the nation’s total operationally deployed nuclear weapons force by 2012. This reduction could affect the types and numbers of critically skilled workers required to carry it out and to ensure the safety and reliability of the remaining weapons in the stockpile. NNSA is guarding against the effects of this mission shift by continuing an advanced concepts program to enable scientists and engineers at the nuclear weapons laboratories to retain critical skills and to provide the United States with means to respond to new, unexpected, or emerging threats in a timely manner.

While NNSA contractors have been generally effective in recruiting and retaining the critically skilled workforce needed currently, are well poised to maintain the critically skilled workforce that will be needed in the near future, and have successfully mitigated many of the challenges they have already faced, the future will almost certainly bring additional challenges and uncertainties that the contractors will need to anticipate and address. Although some of these challenges may be outside the contractors’ immediate control—such as changes in economic conditions or shifts in NNSA’s mission—the test that lies ahead for these contractors will be in identifying these new challenges early and developing strategies to mitigate them wherever possible.
In order for the nuclear weapons facilities to be able to locate and employ the critically skilled workforce needed to ensure the safety and reliability of the stockpile, NNSA and its contractors will need to remain vigilant and focused in their recruiting and retention efforts, as well as anticipate, and appropriately plan for, future critical skill needs and shortages.

We provided NNSA with a draft of this report for its review and comment. In oral comments, NNSA agreed with the report.

We are sending copies of this report to the Secretary of Energy; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions, please call me at (202) 512-3841. Key contributors to this report are listed in appendix III.

As part of our overall approach to examine the National Nuclear Security Administration (NNSA) contractors’ ability to recruit and retain the critically skilled workforce needed to maintain the safety and reliability of the nuclear weapons stockpile, we visited six of the eight nuclear weapons complex facilities—Lawrence Livermore National Laboratory, Los Alamos National Laboratory, Sandia National Laboratories, the Kansas City Plant, the Pantex Plant, and the Y-12 Plant. The remaining two sites—the Savannah River Site and the Nevada Test Site—have the smallest numbers of critically skilled workers, and we conducted extensive telephone interviews with human resource and workforce planning managers at these facilities. We also sent each facility a standard set of interview questions and received responses from each facility. As part of our review of the contractors’ efforts, we interviewed a nonprobability sample of 20 managers from all eight facilities.
We requested names of at least two managers in the Stockpile Stewardship Program from the human resource managers at each facility. We then conducted structured interviews with these managers, either in person or by telephone. In particular, we discussed the managers’ involvement in recruiting, retaining, and planning for workforce needs at the facility. We also gained their perspective on the ongoing recruiting and retention challenges their facilities face and whether they felt their facility would be able to maintain the critically skilled workforce needed to fulfill its mission.

To describe the approaches NNSA contractors are using to recruit and retain a critically skilled workforce, we spoke with human resource managers at each of the eight NNSA nuclear weapons complex facilities. Specifically, we discussed the approaches the facilities use to recruit and retain a critically skilled workforce and the ways in which NNSA has supported the contractors’ efforts. We also reviewed documentation of the recruitment and retention approaches used at each facility, including human resource managers’ responses to our written request for specific information. In addition, we interviewed NNSA officials at headquarters, the site offices for most of the facilities, and the NNSA Service Center in Albuquerque, New Mexico. We discussed with these officials the ways in which they have supported contractors’ efforts to recruit and retain their critically skilled workforce.

To assess the similarities of approaches used by organizations with comparable workforces, we spoke with human resource representatives from six research and advanced technology organizations with comparable workforces to determine the extent to which these industries use recruiting and retention approaches similar to those used by NNSA.
These organizations are as follows:

Applied Physics Laboratory in Laurel, Maryland: a division of the Johns Hopkins University that operates specialized research and test facilities;

Charles Stark Draper Laboratory in Cambridge, Massachusetts: an independent laboratory that contracts with a number of government agencies;

Exelon Corporation, headquartered in Chicago, Illinois: an energy company;

Jet Propulsion Laboratory in Pasadena, California: operated by the California Institute of Technology for the National Aeronautics and Space Administration;

Lockheed Martin Corporation, headquartered in Bethesda, Maryland: a major federal government contractor; and

Naval Research Laboratory in Washington, D.C.: operated by the Navy.

We also spoke with two industry associations representing the manufacturing and nuclear materials industries—the National Association of Manufacturers and the Institute of Nuclear Materials Management. We selected these eight organizations based on the following criteria: their selection by the Chiles Commission as benchmarking organizations; their geographic dispersal; and their representation of different high technology, laboratory, or manufacturing industry segments. We reviewed the Chiles Commission report and determined it was methodologically sound enough for the purposes of this report.

To assess the effectiveness of the approaches used to recruit and retain critically skilled workers, we collected a variety of workforce data from each facility, including total numbers of Defense Program and critically skilled workers and average ages of these workers broken out by job classification, hiring and attrition trends, average retirement ages, and forecasted needs for critically skilled workers.
To assess the reliability of these data, we reviewed relevant documentation, interviewed cognizant contractor officials, and obtained responses from key database officials to a series of data reliability questions covering issues such as data entry, access, quality control procedures, and the accuracy and completeness of the data. Follow-up questions were added whenever necessary. We determined that the data were sufficiently reliable for the purposes of this report. In addition, we obtained documentation of each facility’s workforce planning process and evaluated that process using our five principles of strategic workforce planning. These five principles are (1) involving management and employees in developing and implementing the strategic workforce plan, (2) determining critical skills needs through workforce gap analysis, (3) developing workforce strategies to fill gaps, (4) building needed capabilities to support workforce strategies, and (5) monitoring and evaluating progress in achieving goals. We also interviewed human resource managers at each facility to determine the kinds of recruiting and retention strategies they have implemented to support their workforce planning processes.

To determine the extent to which NNSA monitors and evaluates contractor progress, we interviewed NNSA site officials responsible for performance management, as well as each facility’s human resource managers. Finally, we analyzed the responses of stockpile stewardship managers to our structured interview to determine whether the managers believe their facility had and could maintain the critically skilled workforce needed to fulfill their mission, the reasons for these beliefs, and the extent to which the managers are involved in the workforce planning process.

Regarding the ongoing challenges that NNSA contractors face in recruiting and retaining a critically skilled workforce, we spoke with human resource, workforce planning, and stockpile stewardship program managers.
Specifically, we discussed ongoing recruitment and retention challenges, strategies used to mitigate those challenges, and future uncertainties that may affect the facilities’ abilities to recruit and retain the critically skilled workers needed. To further identify any remaining challenges and uncertainties, we reviewed the contractors’ responses to our written questions. To assess the extent to which the remaining challenges, and the strategies used to mitigate these challenges, are similar to those of organizations with comparable workforces, we spoke with human resource representatives from the six research and advanced technology organizations with comparable workforces and the two industry associations. We conducted our work from February 2004 through January 2005 in accordance with generally accepted government auditing standards. Examples of workforce-related performance measures in the facilities’ contracts include the following: Utilize University of California strengths to recruit, retain, and develop the workforce. Recruit and retain a skilled and diverse workforce that meets the laboratories’ long-range core and critical skills requirements by implementing a human resource strategy that leverages student programs and University of California relationships. Sandia management focuses on the renewal and retention of its workforce and the transfer of knowledge to ensure that the Nuclear Weapons Complex can continue to perform its mission for the nation in future years. Sandia implements a comprehensive program for workforce planning and diversity that includes the recruitment, training, and knowledge transfer necessary to meet long-range core and critical skills requirements. Demonstrate effective workforce planning to ensure that current and future workforce critical skills—including technical, program/project management, and administrative personnel—are adequate to meet future workforce skills needs and are consistent with contract performance. Develop and exercise critical skills, capabilities, and personnel. 
Fill planned critical skill vacancies calculated from the latest biannual report, “Maintenance of Nuclear Weapons Expertise Data for NNSA Performance Metrics.” Maintain planned staffing in critical skill personnel calculated from the latest biannual report, “Maintenance of Nuclear Weapons Expertise Data for NNSA Performance Metrics.” Complete required training and qualification of critical skill personnel with appropriate clearance and/or PAP. Focus area—technical capability: knowledge preservation, engineering qualifications, and filled critical skill positions. BWXT Y-12 will take measures to ensure that the critical skills needed to support the Y-12 workload are available and fully trained, or in a training program, to ensure the ability to perform duties as required in the future. The critical skills database is complete and updated on a quarterly basis to consistently provide accurate numbers of vacant critical skills positions. Programs are in place to continually replenish the pipeline of new critical skills employees and to ensure that the appropriate development programs are available to allow the new employees to perform critical duties. Demonstrate improvement in the following emphasis area selected from the Project Management Body of Knowledge—improve critical skills management: identify the critical skills of project managers and ensure that they possess the requisite skills to successfully perform defined tasks. In addition to those named above, Elizabeth Erdmann, Robert Sanchez, and Corrie Burtch made key contributions to this report. Also contributing to this report were Nancy Crothers, Judy Pagano, and Katherine Raheb.
Responsibility for ensuring the safety and reliability of the nuclear weapons stockpile rests with a cadre of workers at eight contractor-operated National Nuclear Security Administration (NNSA) weapons facilities. Many of these workers--including scientists, engineers, and technicians--have "critical" skills needed to maintain the stockpile. About 37 percent of these workers are at or near retirement age, raising concerns about whether these specialists will have time to pass on their knowledge and expertise to new recruits. In this context, Congress asked us to (1) describe the approaches that NNSA, its contractors, and organizations with similar workforces are using to recruit and retain critically skilled workers; (2) assess the extent to which these approaches have been effective; and (3) describe any remaining challenges, strategies to mitigate these challenges, and the similarity of these challenges and strategies to those of organizations with comparable workforces. NNSA contractors have each developed and implemented a multifaceted approach to recruiting and retaining critically skilled workers. These approaches are similar to those used by the six organizations with comparable workforces with whom GAO spoke and consist of combinations of activities tailored to meet the specific needs of each facility. These activities include offering internships and providing knowledge transfer opportunities. NNSA has supported the contractors' efforts by, for example, providing additional funding to help them recruit workers to fill critically skilled positions. The efforts of NNSA's contractors to recruit and retain a critically skilled workforce have been generally effective. The contractors' fiscal year 2000 through 2003 data show that all eight facilities have maintained the critically skilled workforces needed to fulfill their current missions. 
In addition, our review of the workforce planning processes of each facility shows that they have incorporated, to varying degrees, the five principles GAO has identified as essential to strategic workforce planning. Finally, most of the program managers GAO spoke with believe their facilities have, and are well poised to maintain, the critically skilled workforce needed to fulfill their missions. NNSA contractors and the six organizations with comparable workforces face ongoing challenges in recruiting and retaining a critically skilled workforce but are using a number of similar strategies to mitigate most of these challenges. These challenges include the amount of time it takes new staff to obtain security clearances and a shrinking pool of technically trained potential employees. Beyond such identifiable challenges, NNSA contractors also face future uncertainties--such as the possibility that a facility's contract might be awarded to a new contractor, or shifts in mission--that could affect their ability to recruit and retain a critically skilled workforce in the future.
Since 1975, the United States government has provided more than $25 billion in economic assistance to Egypt. The United States continues to support Egypt in part because of Egypt's political leadership in making peace with Israel and in fostering a broader peace between Israel, the Palestinians, and other Arab states, as well as because of Egypt's efforts in the war on terrorism. U.S. assistance to Egypt has three components. (1) The Cash Transfer Program provides assistance conditioned on the Egyptian government’s achievement of specific reform-related activities. (2) Traditional project assistance focuses on, among other things, economic reform, health and education, and the environment. (3) The Commodity Import Program supplies financing to Egyptian private sector importers of U.S. goods and provides funding that is not specifically conditioned on any reforms. In 1998, the United States and the Egyptian government agreed to reduce annual U.S. economic support by $40 million per year, resulting in a reduction of total funds from $815 million to $407 million by fiscal year 2009. Annual Cash Transfer Program appropriations are projected to remain constant at about $200 million annually until fiscal year 2007 and then decline to $150 million by fiscal year 2009. In the early 1990s, USAID focused on fostering economic reforms aimed at achieving a stable macroeconomic environment in Egypt; it then shifted its focus to encouraging economic growth and development in the mid-1990s. The goal of USAID’s program activities in Egypt is to create “a globally competitive economy benefiting Egyptians equitably.” Although Egypt has made progress at the macroeconomic level, including reducing inflation and unifying the exchange rate, USAID has stated that Egypt’s continued state ownership of companies and banks and its existing laws and regulations continue to hinder its transition to a market economy. 
The Cash Transfer Program began as the Sector Policy Reform (SPR) program and complemented Egypt’s economic reform and structural adjustment program, which began in 1991. In 1999, the program was renamed the Development Support Program (DSP) and focused on improving the trade and investment environment and increasing private sector employment. The Cash Transfer Program targeted eight general reform areas until the State and USAID review in 2002 recommended that it be narrowed to a single sector. These areas include the following: financial sector—reforming the banking and insurance industries; trade sector—reducing trade barriers; industrial sector—privatizing state-owned companies; business law and regulation—modernizing laws that affect business and trade, such as intellectual property rights laws; fiscal sector—establishing tax policy and improving public access to information; monetary policy—reforming foreign exchange rate and domestic credit policies; data dissemination—improving the standardization of data and statistical reporting; and environmental protection—improving environmental regulations and conservation. Funding for the Cash Transfer Program is provided in annual appropriations to the Economic Support Fund. The annual appropriation makes this funding available for the program with the understanding that Egypt will undertake economic reforms in addition to those it has undertaken in previous years. Grant agreements and memorandums of understanding (MOU) between the U.S. and Egyptian governments outline the ways that Egypt may use program funds and the types of reform-related activities that it must undertake to receive them. According to the grant agreements, the Egyptian government is authorized to use 75 percent or more of the funds to purchase U.S. commodities, such as wheat or equipment, and up to 25 percent to repay its debt to the United States. After purchasing commodities in U.S. 
dollars that the program provides, the Egyptian government must deposit the equivalent amount of Egyptian local currency into a special account. A separate MOU between USAID and the Egyptian government stipulates that the government of Egypt may use funds from this account for its general budget, sector support, or USAID activities. In addition to providing a source of budgetary funds for the Egyptian government, the Cash Transfer Program serves as an additional source of foreign exchange for Egypt. Figure 1 depicts the flow and use of Cash Transfer Program funds. According to USAID officials, if the Egyptian government does not meet the agreed-on criteria for a specific activity, USAID may withhold funding and redirect it to support other Cash Transfer Program activities. To determine the funding that Egypt should receive for completing each reform activity, USAID considers the activity’s significance, the cost to the Egyptian government associated with completing it, the Egyptian government’s willingness to implement it, the U.S. government’s interest in seeing it implemented, and the contribution of the completed activity to Egypt’s economic growth. Since fiscal year 1992, USAID’s Cash Transfer Program has provided about $1.8 billion in financial assistance to the Egyptian government. USAID has also provided about $70 million in technical assistance to the Egyptian government to support Cash Transfer-related reform activities. The Egyptian government has completed 136 of the 196 activities targeted for reform, including financial and industrial sector reforms. The Egyptian government did not complete 60 of the targeted activities and consequently received no money for them, in part because of a change in the program in 1999 that allowed Egypt to choose from a menu of reform activities. Although Egypt completed many activities, the results of some of the targeted reform areas were mixed. 
USAID’s disbursements per activity ranged from $0.1 million to $150 million, with a median disbursement of $10 million. However, Egypt received about $730 million, or 40 percent of total funding, for 20 of the completed activities. USAID disbursed about $1.8 billion in Cash Transfer Program funds to the government of Egypt as it completed agreed-on economic reform activities, primarily in the areas of finance and trade. Figure 2 shows the distribution of program funding among the reform areas since 1992. In general, the targeted activities represented the steps associated with a major reform in agreed-on areas. For example, privatizing state-owned companies, an aspect of industrial sector reform, required initial steps such as conducting a study of the social and economic costs and the benefits of opening an industry to private investment, intermediate steps such as developing a national privatization plan, and final steps such as privatizing a number of companies. In addition, USAID provided approximately $70 million in technical assistance, in part to support the Egyptian government’s completion of agreed-on activities. For example, USAID’s technical assistance funding supported the government of Egypt in formulating and implementing policies, training government personnel, and purchasing office equipment. USAID also used technical assistance funds to hire contractors to monitor and verify the government of Egypt’s completion of agreed-on activities. According to Egyptian officials, USAID’s technical assistance helped the Egyptian government complete intellectual property rights (IPR) reform activities targeted under the Cash Transfer Program, such as drafting an IPR code, and provided technical information on the World Trade Organization’s (WTO) Trade-Related Aspects of Intellectual Property Rights requirements. Technical assistance funds were also used to promote public awareness of the importance of IPR and modernize the patent and copyright offices. 
Beginning in 1992, USAID and the Egyptian government identified 196 activities for reform. Egypt completed and received program funds for 136 of those activities (about 70 percent) and did not complete, or receive money for, 60 of them. Egypt had the option of not completing some of them because, under a new program approach initiated in 1999, USAID and the Egyptian government agreed to a broad range of activities that allowed Egypt to select those that best fit its reform agenda. The 60 uncompleted activities included 3 of 25 activities targeted for reform in a 2001 MOU. Egypt did not complete these 3 activities because the U.S. government unilaterally withdrew from the program following the 2002 review by State and USAID. Table 1 shows some of the 60 activities that the government of Egypt did not complete or receive program funds for under the terms of the program. We found that roughly 10 percent of the completed activities involved conducting studies and reviews, 10 percent involved drafting laws and issuing decrees, and about 20 percent involved adopting plans or implementing procedures. The remaining 60 percent involved various sector-specific activities, such as reducing the number of tariffs and removing price controls from hotels, restaurants, and other businesses associated with tourism. To understand the range of activities USAID targeted and the extent to which Egypt completed them, we analyzed reform activities in three subsets of the program’s general reform areas—privatization (industrial sector), banking (financial sector), and IPR (business law and regulation)—and found that the results of these targeted activities were mixed. For example, although Egypt privatized some of its state-owned companies and its joint venture banks, less than half of its state-owned companies and none of its public sector banks had been privatized as of May 2005. 
Likewise, according to USAID, although Egypt has improved its IPR legal framework, the country still does not fully comply with international standards for pharmaceutical protection. USAID disbursed $169 million for the completion of 12 privatization activities in fiscal years 1992-2004. Six of the activities supported the privatization of companies—for example, developing a national privatization plan, reducing the indebtedness of public companies, and improving the process for valuing them. The other six activities consisted of actually privatizing public and joint venture companies. In 1991, the Egyptian government identified 314 companies to be privatized. During fiscal years 1993-2002, it privatized 118 of these companies, 3 of which were joint ventures, according to Cash Transfer Program disbursement justifications (see fig. 3). Privatizations slowly increased through fiscal year 1996, peaked in fiscal year 1998, and then declined through fiscal year 2002. Of the 196 remaining public companies, some are profitable but others are unprofitable, including indebted textile companies. According to Egyptian government and USAID officials, as well as documents we reviewed, three factors contributed to the decline in privatizations after 1998. (1) The companies remaining on the list were difficult to sell because of unprofitability or were difficult to liquidate because of concerns about mass layoffs. (2) The Egyptian government did not fully support privatization, as demonstrated by its unwillingness to realistically value companies. (3) The global economic environment became less conducive to privatization, particularly in the Middle East after September 11, 2001. According to a USAID contractor responsible for tracking privatization progress, the best companies were sold initially, and many of the remaining companies required large investments to stay operational. 
In fiscal years 1992-2004, USAID disbursed $260 million for the completion of 11 activities associated with banking reforms in Egypt. According to USAID documents, banking activities focused on improving regulations and strengthening banks’ capital structure, including the following: allowing foreign banks to establish branches or gain full ownership of local banks and to deal in local currency; removing controls on banking service fees; establishing new accounting and auditing procedures for all sectors, including insurance, based on the International Accounting Standards; revitalizing capital markets and improving the transparency of financial markets; ratifying an anti-money-laundering law, satisfying the Organization for Economic Cooperation and Development’s Financial Action Task Force on Money Laundering; and reducing Egyptian government shares in 22 of 25 joint venture banks in Egypt. Additionally, under the Cash Transfer Program, the Egyptian government agreed to transfer majority ownership of at least one of its state-owned banks in 1995 and to sell its shares in the four public sector banks in 2000. Although the Egyptian government identified the first bank for privatization in 1997, none of the four state-owned banks had been privatized as of May 2005. In March 2005, the Egyptian government signed a new MOU with USAID that once again made the privatization of a state bank a priority. According to U.S. and Egyptian officials, a public sector bank is expected to be privatized by the end of 2005, owing in part to increased political will in Egypt’s cabinet and more experienced management in the Central Bank of Egypt. However, a Central Bank senior official stated that the sale of the public sector bank depends on its handling of nonperforming loans and on finding an investor, which could prove difficult. 
Business Law/Regulation: Intellectual Property Rights. USAID disbursed about $75 million for the completion of six IPR-related reform activities undertaken to support Egypt’s commitments under the WTO’s agreement on Trade-Related Aspects of Intellectual Property Rights. According to a USAID contractor, although the reforms that Egypt completed moved it closer to international IPR standards, more reforms are needed for Egypt to achieve compliance with international standards related to data dissemination and data exclusivity for pharmaceuticals. The activities that USAID supported included the following: conducting a study of the benefits of consolidating the patent, trademark, and international design offices; joining the Patent Cooperation Treaty in June 2003; and enacting a new IPR law in May 2002 that incorporated both the patent and industrial design laws. Since 1992, USAID has disbursed between $0.1 million and $150 million to Egypt for completed activities, with a median disbursement of $10 million per activity; however, the Egyptian government received large disbursements, totaling about $730 million, or 40 percent of total program funds, for 20 of the 136 activities that it completed. Our analysis showed that the largest disbursements included $150 million for passing an anti-money-laundering law and $100 million for developing a macroeconomic reform plan approved by the government. Anti-money-laundering law. In 2002, Egypt received a disbursement of $150 million for passing a law against money laundering. According to USAID, as Egypt’s economy and financial systems have opened, they have become more susceptible to use as conduits for laundering proceeds from criminal activities such as drug trafficking and terrorism. USAID first targeted anti-money-laundering reform in a 2001 MOU and formally specified the requirements for passage of a law in June 2002, the same month Egypt received a disbursement for completing the activity. 
According to USAID documentation, Egypt received a large disbursement because the law addressed the concern of the Organization for Economic Cooperation and Development’s Financial Action Task Force that Egypt was noncooperative in addressing money-laundering problems. Macroeconomic reform plan. In 2001, Egypt received a disbursement of $100 million for developing and approving a macroeconomic reform plan. According to USAID, this activity supported major Egyptian reforms in the face of the economic crisis after September 11, 2001, which included Egypt’s slowed growth and loss of revenues. Among the areas the plan addressed were Egypt’s exchange rate; monetary and fiscal policy; and legislation for issues such as IPR, labor and mortgage laws, and banking reform. According to USAID documents, the plan was added to the program in December 2001 and was approved by the U.S. Ambassador to Egypt, the Egyptian Prime Minister, and several of Egypt’s economic ministers before the disbursement in January 2002. Our review of program activities showed that USAID generally paid the Egyptian government once for each activity. However, we identified one instance in which USAID made disbursements twice to Egypt for the privatization of three companies. In 1998, USAID disbursed $20 million to the Egyptian government for meeting criteria to privatize 25 companies. Egypt’s privatization of three of these companies—United Poultry Production, Ramsis Agriculture, and the Egyptian Company for Meat and Dairy Production—was listed in subsequent documents as justification for additional disbursements of $2.4 million in 2000 and $1.2 million in 2001. In commenting on a draft of this report, USAID officials agreed that these disbursements had occurred. 
Although the Cash Transfer Program provided financial and technical assistance to support Egypt’s completion of reform-related activities, several factors have limited the program’s ability to influence Egypt to undertake certain reforms. These factors include the following: Financial costs versus benefits of reform-related activities. Although the reforms are expected to generate financial benefits by correcting inefficiencies in the economy, financial costs are also associated with the reforms. For example, the authors of a USAID-sponsored study estimated that the financial benefit to the Egyptian government of privatizing the remaining state-owned companies and banks would be over $17 billion. However, the financial benefits are not guaranteed; privatized companies may not operate more efficiently or produce additional tax revenues. For example, according to USAID’s privatization study, some privatized Egyptian companies failed to undertake significant restructuring and therefore did not produce many of the expected benefits, including improved financial performance. Size of program funding relative to Egypt’s overall foreign exchange earnings and revenue. Although the Cash Transfer payment provides U.S. dollars for the purchase of U.S. commodities and the repayment of debt to the United States, the payment represents a small portion of Egypt’s foreign exchange earnings. For example, in fiscal year 2003, the program’s funds accounted for about 1 percent of Egypt’s overall foreign exchange earnings and less than 2 percent of its foreign reserves. Additionally, the Egyptian pounds generated by the program and used by the Egyptian government for budget support are a small portion of its revenue; in fiscal year 2003, the program funds accounted for about 1 percent of the government of Egypt’s annual revenues and grants. 
According to a State Department official, the $200 million that the program provides annually is not sufficient by itself to persuade the Egyptian government to undertake an unpopular reform such as privatization. Reforms’ potential effects on domestic stability. Since IMF-sponsored economic reforms triggered protests and domestic unrest among Egypt’s populace in the 1970s, the government of Egypt has been cautious about reforms such as liberalizing prices and lifting subsidies because of the potential negative impact on certain groups. For example, in January 2003, the Egyptian government introduced a more flexible and market-oriented exchange rate regime, as outlined in its macroeconomic reform plan. However, as a result, the value of the Egyptian pound fell by about 30 percent in the initial months, raising the prices that Egyptians paid for imported goods, including food. According to the IMF, the Ministry of Finance partially negated this change when, in September 2003, concerned about rising food prices, it introduced a special exchange rate for imported items such as grains. Cash Transfer Program deadlines. The Cash Transfer Program was designed to provide funding to Egypt based on its compliance with agreed-on conditions, including meeting deadlines. Since fiscal year 1992, the government of Egypt requested, and USAID granted, 19 extensions to the deadlines originally agreed to in MOUs. The extensions were generally for an additional 3 to 6 months; however, certain performance periods had multiple extensions that allowed the government of Egypt 2 or more years from the original deadline to complete the activities. Although USAID documents justified the extensions, the number and length of the extensions may have weakened the conditions tied to the funding disbursements by ensuring that Egypt continued to receive funds, in some cases well beyond the established deadlines. 
For example, in March 1997, USAID extended the Egyptian government’s deadline to complete agreed-on activities for an additional 6 months and then granted five other extensions, resulting in a final deadline of September 1999. According to USAID documents we reviewed, the 1997 extensions were granted to give the government of Egypt time to negotiate a new IMF agreement, prepare for the Cairo economic summit, and redefine reform priorities after a shift in Cabinet members—factors that contributed to a delay in Egypt’s completion of Cash Transfer Program reform activities. USAID approved additional deadline extensions for a separate set of activities in September 1997, subsequently extending the deadline four more times, to June 2000. The need to tighten the conditions tied to the program’s funding disbursements was highlighted in the 2002 review by State and USAID. Demonstrating the impact of policy reforms is challenging, according to USAID officials and academics studying such reforms; however, USAID conducted two evaluations related to Cash Transfer Program activities, as well as a series of opinion surveys that attempted to assess the impact of economic reform activities supported by the program. These evaluations and opinion surveys reported that the program’s activities had some positive results, but we found limitations in the two studies. We also reviewed USAID’s performance management plan (PMP) and found that some of its measures had limitations. USAID officials and academics studying policy reform pointed out the difficulties of demonstrating the impact of policy reforms. USAID officials stated that it is nearly impossible to isolate the impact of the Cash Transfer Program from other factors that have influenced Egypt’s trade and investment environment. In addition, collecting reliable data is problematic in Egypt. 
For example, according to a USAID contractor responsible for tracking Egypt’s privatization efforts, basic data, such as the number of privatizations completed, were not readily available from the Egyptian government. Furthermore, the nature of policy reform often results in delayed impacts; thus, evaluations cannot take place until sufficient time has passed. For example, according to USAID’s privatization evaluation, it was difficult to draw definitive conclusions regarding financial performance for some of the companies because of inadequate time between the privatizations and the evaluation. Although measuring the impact of reforms is difficult, USAID continues to fund various assessments. To evaluate the impact of Cash Transfer Program activities in Egypt, USAID conducted two agency-funded studies and twelve opinion surveys of private sector business leaders and linked some activities to its PMP. However, we found that some measures that USAID used in its performance management system had limitations. A study of USAID-supported privatization activities published in 2002 pointed out some positive impacts, such as helping to reduce Egypt’s fiscal deficit, facilitating the entrance of new companies into Egypt’s market, expanding product varieties and availability, and improving some firms’ financial performance. However, the study also found that privatization did not increase Egypt’s foreign direct investment relative to other developing countries, although this reform was related to USAID’s strategic objective of enhancing Egyptian business opportunities by attracting private sector investment. In addition, the study acknowledged some challenges in evaluating privatization’s impact because access to some of the Egyptian government’s data was limited and sufficient time had not passed to assess the financial performance of some privatized companies. 
A study of USAID technical assistance for Egypt’s IPR reforms published in 2004 found that this assistance, among other factors, motivated the Egyptian government to implement reforms. The study found that these reforms led to a legal framework that was more compliant with WTO IPR requirements; modernized IPR-related facilities; a reduction in the time required to obtain a patent, from 6 years in 1996 to less than 3 years in 2003; and increased public awareness of the benefits of intellectual property rights. However, the study focused on the outcomes of USAID’s technical assistance to support the completion of IPR-related activities rather than on the impact of these activities on the Egyptian economy, such as increasing confidence among foreign investors with regard to doing business in Egypt. Opinion surveys. USAID conducted a series of periodic questionnaires and roundtables with Egyptian private sector leaders and academics to gauge the progress and impact of the Egyptian government’s economic reform and structural adjustment program. Every 6 months for a 6-year period, respondents were asked to score and give opinions on 24 policy areas in three main categories: stabilization policies, structural adjustment policies, and social policies. The surveys found that, in general, business leaders agreed that the Egyptian government had taken modest steps forward in stabilization policies. However, the surveys also found that business leaders were concerned about slow progress in many reform areas, such as banking and privatization, exchange rates, and the growth of small and midsize businesses. USAID recognized that the survey scores and opinions are subjective. However, USAID pointed out that respondents’ perceptions of the behavior of the domestic and international marketplace served as a proxy for the larger business community’s opinion of Egypt’s progress with economic reform. 
As a result, USAID said the survey could be a useful tool for evaluating policy initiatives.

USAID’s performance management plan

USAID uses its PMP to measure progress toward strategic goals and objectives. Although the PMP does not directly evaluate the impact of the Cash Transfer Program, USAID pointed us to it as a measure of the program’s progress and impact on the Egyptian economy during our fieldwork. However, we found limitations in some of the indicators used in the PMP in that they primarily assess outputs and do not link the outputs to the activities’ impact on Egyptian economic reform. For example, two privatization indicators—the value of sale proceeds from privatized state-owned and joint venture companies and the cumulative number of qualified joint venture companies and banks divested—measured the quantitative results of privatization rather than the activities’ effects on the companies’ efficiency. Other PMP indicators were influenced by factors outside the program. For example, USAID used the indicator “trade-weighted average tariffs” to show progress in reducing trade barriers; however, other factors, such as a shift to imports with lower tariffs, could have contributed to a reduction in trade barriers, and thus the change could not be attributed only to the reform’s impact. USAID has taken several steps to address the 2002 State and USAID review’s recommendations that the agency narrow the focus of the Cash Transfer Program, reprogram funds if deadlines for reform-related activities are not met, and improve the performance monitoring system of the USAID mission in Egypt. 1. To narrow the program’s focus, USAID signed an MOU on March 20, 2005, focused on reforming the financial sector. 
The new MOU aims to support financial sector modernization by strengthening the management of the Central Bank of Egypt; creating a government securities market consistent with international standards; increasing the private sector’s share of the banking system; strengthening the legal and regulatory framework of the overall financial sector; and implementing a code of corporate governance. USAID and the Egyptian government agreed to 19 reform-related activities that support these goals, including privatizing one of four state-owned banks before the end of December 2005. According to USAID officials, if the Egyptian government completes all 19 reform-related activities by the agreed-on time frames, it will receive disbursements of $800 million, or 67 percent of the $1.2 billion that USAID expects to obligate for the Cash Transfer Program through fiscal year 2009. USAID has not yet determined the conditions for disbursement of the remaining funds, but a State official stated that the agency will likely target trade reform. 2. To respond to the review’s recommendation to reprogram funds if reform-related activity deadlines are not met, USAID changed its process for obligating Cash Transfer funds. Beginning in 2005, USAID will obligate Cash Transfer funds only after it is certain that the Egyptian government will complete agreed-on activities. This change is intended to ensure that obligated funds do not accumulate and to strengthen USAID’s ability to encourage the Egyptian government to satisfy activity requirements. 3. To improve its measurement of the mission’s programs’ contribution to meeting USAID strategic objectives, the agency contracted to revise its performance monitoring system by updating its PMP. The review by State and USAID indicated that the previous system did not allow USAID to reprogram resources from programs that were not producing desired results. According to a USAID official, the system’s measures also did not provide timely information for management purposes. 
However, at the time of our review, details of PMP revisions were not available; therefore, we were unable to determine how USAID will address the issues raised by the review and whether the new system will improve evaluation of Cash Transfer Program activities. Although the Cash Transfer Program has supported reform-related activities since 1992, several factors have constrained its influence and potential to be a more effective force for change in Egypt. However, the recently signed MOU, which makes deadlines explicit for the first time, and Egypt’s renewed political support to undertake certain reforms, such as bank privatization, offer a new opportunity to better leverage this program. For example, by not making funds available until it is clear that Egypt will complete program activities, USAID is strengthening conditionality. Given the recent program changes and Egypt’s regional importance, it is critical that policymakers continue to monitor Egypt’s progress in achieving economic reform and the Cash Transfer Program’s contribution to those reforms, especially in light of the broader political environment in which the program operates. The Acting Assistant Administrator for USAID, Bureau of Management, provided written comments on a draft of this report, which are reproduced in appendix II. He stated that the draft was fair and clear, but that the Egyptian government’s completion of 70 percent of the 196 agreed-on activities related substantially to the Cash Transfer Program’s structure rather than to shortcomings in Egypt’s policy reforms. Additionally, USAID stated that granting deadline extensions to permit Egypt to complete activities increased the U.S. government’s influence in accomplishing reforms. USAID also provided technical comments, as did the Department of State, which we incorporated where appropriate. 
As arranged with your office, we plan no further distribution of this report for 30 days from the date of the report unless you publicly announce its contents earlier. At that time, we will send copies to interested congressional committees and to the Administrator of USAID and the Secretary of State. We will make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at 202-512-3149 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. This review, conducted for the Chairman of the House Committee on International Relations, focused on (1) the Cash Transfer Program’s disbursement of funds and Egypt’s completion of agreed-on activities since fiscal year 1992, (2) factors affecting the program’s influence on Egypt’s economic reform activities, (3) the U.S. Agency for International Development’s (USAID) efforts to evaluate the program’s impact on Egypt’s economic reform, and (4) USAID’s changes to the program in response to the 2002 Department of State and USAID review. To identify the requirements of the Cash Transfer Program and total funding disbursed since fiscal year 1992, we reviewed legislation and USAID’s regulations, grant agreements, and memorandums of understanding. We reviewed USAID’s program documents and interviewed USAID and State officials to learn how USAID identified reform areas and assigned values for reform-related activities and to determine the number and types of activities that the government of Egypt completed. 
We also developed a database of all the activities targeted by USAID to identify the number of completed and uncompleted activities, the various types of activities, any duplication in the activities or payments, and any changes to the originally targeted and agreed-on activities. We developed the database using attachments to the program’s memorandum of understanding (MOU), which included activities that USAID and the Egyptian government agreed to undertake in support of economic reforms. To determine the number of completed activities, we used USAID disbursement memorandums. We also included activities for which USAID disbursed program funds although the entire activity was not completed (in 20 instances, USAID disbursed partial payments based on the percentage of the activity that it determined Egypt had completed). We did not include activities that Egypt completed outside the terms of the program because there were no disbursement memorandums for these activities. We corroborated testimonial evidence from USAID, State, and Egyptian officials with our analysis of program activities by reviewing and comparing information in USAID’s program documents. We reviewed internal controls, including reports by USAID’s Office of Inspector General, as well as the potential for fraud and abuse and compliance with laws and regulations, and found no significant issues. We did not independently evaluate Egyptian laws and regulations, and our discussion of them is based on secondary sources. To better understand the range of activities targeted and completed under the program, we selected three subsets of the reform areas. The three subsets for which we conducted a more in-depth analysis were privatization, banking, and intellectual property rights (IPR); these subsets covered 29 of the 196 targeted activities (about 15 percent). They did not constitute a representative sample of all the activities; however, we consulted USAID and identified areas with varying results. 
For example, USAID characterized IPR reforms as successful, whereas banking and privatization had varied levels of success. To determine factors affecting the program’s influence on economic reform in Egypt, we reviewed documents and interviewed U.S. and Egyptian officials with knowledge of the Cash Transfer Program. We also met with International Monetary Fund (IMF), World Bank, and European Union officials, as well as an expert at the Egyptian Center for Economic Studies. We reviewed studies on the benefits and the costs of reforms that were published in refereed journals or completed by USAID contractors; we also interviewed the authors of one of these studies and discussed their methodology. To determine the monetary significance of the Cash Transfer Program, we calculated the program’s annual share of Egypt’s foreign exchange earnings and government revenue using relevant data provided by the Egyptian government to the IMF. Although we were unable to fully assess the reliability of the Egyptian government data, we noted that Egypt recently subscribed to the IMF's special data dissemination standards, which were created for nations that already meet high data-quality standards. Although these data probably have some limitations, we determined that it is unlikely that potential errors would materially impact our use of this information in our report. To identify other factors affecting the program’s influence, we analyzed program documents to calculate the number of times that the Egyptian government requested, and USAID granted, deadline extensions to complete program activities. To assess USAID’s evaluation of the program’s impact on economic reform, we reviewed studies that addressed the methodological challenges of conducting impact evaluations. We also met with, and reviewed studies by, contractors hired by USAID to assess the impact of two program areas, IPR and privatization. 
In addition, we reviewed USAID’s opinion survey reports and performance management plan. To determine the steps that USAID has taken in response to the 2002 review by State and USAID, we interviewed USAID and State officials who participated in or had knowledge of the review, focusing only on those recommendations that referred to the Cash Transfer Program. We interviewed USAID, State, Treasury, and Egyptian officials who were involved in negotiating the new financial sector MOU, and we reviewed funding data to determine the amount and proportion of program funds allocated to financial sector reform. To corroborate testimonial evidence provided by USAID, State, and Treasury officials, we also reviewed the USAID mission in Egypt’s updated strategic plan, revised in March 2004. Further, we consulted USAID officials regarding any legal impediments to reprogramming Cash Transfer Program funds, and we reviewed documents showing USAID’s funding process. To understand how the USAID mission in Egypt plans to improve its performance monitoring system, we interviewed USAID officials and contractors who are responsible for developing it. We performed our work between August 2004 and May 2005 in accordance with generally accepted government auditing standards. The following are GAO’s comments on the U.S. Agency for International Development letter dated June 15, 2005. 1. In our report we acknowledge the collaborative process between USAID and the government of Egypt to identify and agree on reforms (see pages 1, 2, 11). However, in reviewing program agreements and discussing the program with USAID officials, we found only one agreement that used the “menu approach”—that is, targeting activities that were worth more than the available cash transfer funds during the 12-year period of our review (1992-2004). 
We explain that the scope of activities we reviewed was limited to those targeted under the USAID program (see pages 2 and 26) and that Egypt completed some reform activities for which it did not receive program funds (see pages 3, 11). 2. USAID states that our finding that the Cash Transfer Program completed 70 percent of the targeted activities “is related in substantial part to the structure of the Cash Transfer Program, and not entirely to shortcomings in policy reforms.” As we note in comment 1 and on pages 3, 9, and 11 of the report, the structure that USAID refers to was in effect only from 1999 to 2003. Regarding USAID’s concern that presenting the percentage of agreed-on activities completed by Egypt suggests a shortcoming in Egypt’s progress in policy reform, our findings reflect the fact that USAID and the Egyptian government agreed on 196 reform activities during the period covered by our review and the Egyptian government completed 136 of those activities. 3. USAID states that our findings show that the revisions of target dates increased the U.S. government’s influence in accomplishing reforms. Although USAID’s provision of extensions may have allowed the Egyptian government to complete the reforms, we disagree that our findings show that revising the target dates increased the U.S. government’s influence, as USAID asserts. Rather, we believe that the practice weakened one of USAID’s tools of conditionality—deadlines. Additionally, the 2002 review by State and USAID pointed out the need to focus the conditionality of the Cash Transfer Program more tightly and indicated that “in the event that outcomes, benchmarks and timelines agreed with the Egyptian government are not met within a reasonable time of the originally agreed target dates the team agreed that DSP II funds will be reprogrammed to fund other USAID projects in Egypt.”
Since 1992, the U.S. Agency for International Development (USAID) has focused the Cash Transfer Program in Egypt on supporting economic reform activities to move Egypt toward a more liberal and market-oriented economy. USAID has provided funds to Egypt's government as it completed agreed-on economic reform activities. In fiscal year 2002, the Department of State and USAID conducted a review of U.S. economic assistance in Egypt that led USAID to renegotiate the program's terms. USAID and Egypt signed a new agreement in March 2005. GAO's review of the Cash Transfer Program focused on the program's disbursement of funds and Egypt's completion of agreed-on activities, factors affecting the program's influence on Egypt's economic reform, USAID's efforts to evaluate the program's impact, and USAID's changes to the program in response to the 2002 review by the Department of State and USAID. GAO received comments on a draft of this report from USAID. USAID stated that the draft was fair and clear but that Egypt's completion of about 70 percent of the activities resulted from the program's structure rather than shortcomings in Egypt's policy reforms. USAID also stated that extending the target dates for completing reforms increased U.S. influence in accomplishing reforms. Since fiscal year 1992, USAID's Cash Transfer Program has provided about $1.8 billion in economic assistance to the Egyptian government for completing reform-related activities, such as privatizing state-owned companies. USAID and Egypt have identified 196 reform-related activities, and Egypt has completed 136 of them (about 70 percent), primarily in the areas of finance and trade. Although the Cash Transfer Program supported Egypt's completion of reform activities, several factors have limited its ability to influence the Egyptian government to undertake certain reforms. 
First, the financial costs of certain reforms affected the Egyptian government's willingness to undertake them despite their potential benefits; although the Cash Transfer Program offsets some of those costs, its contribution to Egypt's overall budget is small. In addition, Egypt is cautious about undertaking reforms that may lead to domestic instability. Finally, USAID granted numerous extensions that allowed Egypt additional time to complete agreed-on activities, thus weakening the conditions tied to funding disbursement. Despite the difficulty of determining the impact of policy reform, USAID conducted two evaluations of Cash Transfer activities as well as a series of opinion surveys on the impact of certain activities supported by the program. Although these studies reported some positive results, GAO found limitations with some of the measures used to evaluate the activities' impact on Egypt's economy. In response to recommendations in the 2002 Department of State and USAID review, USAID (1) narrowed the Cash Transfer Program's focus to reforms in the financial sector, (2) will obligate funds when it is certain that the Egyptian government will complete activities rather than when the government agrees to undertake them, and (3) is improving its monitoring and evaluation system.
The NDAA for fiscal year 2009 initially authorized CIPP as a pilot program through December 31, 2012, establishing basic eligibility criteria for participants, providing guidelines for implementing the program, and establishing congressional reporting requirements. Specifically: For each calendar year from 2009 through 2012, up to 20 officers and 20 enlisted servicemembers per military service are authorized to leave active duty for a period not to exceed 3 years. For each month of sabbatical taken, servicemembers must complete two months of obligated service upon their return to active duty. Servicemembers who have completed their initial active duty service agreement and are not currently receiving a critical skills retention bonus are eligible to participate. During their sabbatical, all servicemembers are required to serve in the Individual Ready Reserve and to undergo such inactive duty training as shall be required by the Secretary involved in order to ensure that the servicemember retains sufficient proficiency in military skills, professional qualifications, and physical readiness. During sabbaticals, servicemembers receive two-thirtieths of their salary (i.e., 2 days of pay per month) and maintain full health benefits for themselves and their dependents. In addition, DOD provides participants and their dependents with a paid relocation within the United States. For example, if servicemembers are taking a sabbatical to attend school, DOD will pay for them to move to the location of their educational program. At the end of the servicemember’s sabbatical, DOD will pay the costs to relocate the servicemember to his or her next assignment. The NDAA for fiscal year 2015 kept these NDAA fiscal year 2009 guidelines and extended the program, allowing for servicemembers to start sabbaticals through December 31, 2019, returning to active duty no later than December 31, 2022. 
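The pay and service-obligation terms described above reduce to simple arithmetic. As an illustrative sketch only (the function names and the $6,000 monthly base pay are hypothetical, not drawn from the report):

```python
# Illustrative only: arithmetic implied by the CIPP terms described above.
# Function names and the example base pay are hypothetical.

def obligated_service_months(sabbatical_months: int) -> int:
    """Each month of sabbatical incurs two months of obligated service."""
    return 2 * sabbatical_months

def sabbatical_monthly_pay(monthly_base_pay: float) -> float:
    """Participants receive two-thirtieths of pay (2 days' pay) per month."""
    return monthly_base_pay * 2 / 30

# A maximum 3-year (36-month) sabbatical incurs 72 months (6 years) of
# obligated service, consistent with the Marine Corps example later in
# this report; on a hypothetical $6,000 monthly base pay, the sabbatical
# stipend would be $400 per month.
print(obligated_service_months(36))
print(sabbatical_monthly_pay(6000))
```

This matches the timeline cited by Marine Corps officials: a 3-year sabbatical plus the resulting 6-year obligation could take up to 9 years to resolve into a retention outcome.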
Appendix I shows when each military service implemented CIPP and the number of participants approved by each service as of July 2015. The fiscal year 2015 NDAA also extended the date for DOD to provide a final report to Congress—from March 1, 2016, to March 1, 2023—and it added additional reporting elements. DOD is now required to report the following: A description of the number of applicants for the pilot program and the criteria used to select individuals for participation in the pilot program. An evaluation of whether the authorities of the pilot programs provided an effective means to enhance the retention of members of the armed forces possessing critical skills, talents, and leadership abilities. An evaluation of whether the career progression in the armed forces of individuals who participate in the pilot program has been or will be adversely affected; and the usefulness of the pilot program in responding to the personal and professional needs of individual members of the armed forces. A description of reasons why servicemembers choose to participate in the pilot. A description of the servicemembers, if any, who did not return to active duty at the conclusion of their sabbatical, and a statement of the reasons why these servicemembers did not return. A statement about whether servicemembers were required to perform training as part of their participation in the pilot program, and if so, a description of the servicemembers who were required to perform training, the reasons they were required to perform training, and how often they were required to perform training. A description of the costs to each military department of each pilot program. Recommendations for legislative or administrative action as the Secretary concerned considers appropriate for the modification or continuation of the pilot programs. 
Participation in CIPP has remained below statutorily authorized limits, and officials have identified factors that could be affecting CIPP participation, but DOD has not established a plan for evaluating whether CIPP is an effective means to retain servicemembers. The rate of DOD-wide participation in CIPP has been at less than half the authorized limit of 160 participants per calendar year, and officials from each of the services stated that factors including statutory requirements, service-specific limitations, military culture, and personal financial constraints could be affecting participation. Additionally, although DOD officials stated that they would like to make CIPP a permanent program, and the services are required to provide a final report to Congress on its effectiveness, costs, and retention not later than March 1, 2023, DOD has not established a plan for evaluating the effect of the pilot program on retention of servicemembers. Since Congress authorized CIPP in fiscal year 2009, participation has remained below authorized limits. As shown in figure 1, DOD is authorized to enroll up to 160 servicemembers per year in the program (up to 40 participants for each of the four services); but DOD-wide, the highest number of participants approved for CIPP was 76, in calendar year 2014. From 2009 through 2012, only Navy personnel were participating in CIPP, but in 2013, the Marine Corps approved its first applicant, and in 2014, personnel from all four services were participating in the pilot. Some of the services have had participation levels closer to the authorized limits. For example, in 2014, of the 76 participants approved, 30 were Navy and 35 were Air Force. However, the Army and Marine Corps were below authorized limits, with 9 servicemembers approved from the Army and 2 from the Marine Corps. 
Service officials identified four factors that may affect participation in CIPP—statutory requirements, service-specific limitations, military culture, and financial constraints. Statutory Requirements—According to the CIPP authorizing statute, servicemembers are not eligible to participate in the program during the period of their initial active duty service agreement or if they are currently receiving a critical skills retention bonus. These eligibility criteria reduce the population eligible to apply for CIPP. For example, according to Navy officials, as of July 2015, almost 134,000 Navy servicemembers were ineligible to participate in CIPP because they were in their initial active duty service agreement period. According to a DOD budget analysis document, the initial service agreement for a Navy sailor typically occurs from 18 to 33 years of age, when professional goals compete most strongly with personal goals such as family planning. For example, one participant who responded to our questionnaire stated that she used CIPP after completing her initial service obligation to start her family. However, she would have preferred to take a sabbatical during her initial service obligation period, when she was younger. According to a DOD budget analysis document, for the Navy, retention at a servicemember’s first career reenlistment point is the most difficult to achieve. However, if servicemembers elected to participate in CIPP during their first service obligation period, they in effect would be electing retention during this critical timeframe. According to Navy officials, if these servicemembers were able to participate in CIPP, the CIPP-obligated service requirement would extend each servicemember’s existing period of obligated service, which could enhance retention. Another statutory requirement caps the annual number of participants at 40 servicemembers (20 officers and 20 enlisted) per service. 
An Army official and some Navy officials were of the opinion that the limitations on the number of participants may reduce participation, stating that servicemembers may be hesitant to apply since so few people were selected annually. Proposed language in the fiscal year 2016 NDAA, if enacted, would repeal the prohibition on participation in CIPP by servicemembers who are in their initial obligated service period or who are receiving a critical military skills retention bonus, and it would eliminate program participation caps. Service-specific limitations—Each military service has established selection processes and eligibility requirements that supplement the statutory requirements established by the NDAA for fiscal year 2009. For example, the Air Force rates applicants in various categories— such as job performance, leadership, experience, job responsibility, and education. As a result, according to Air Force officials, the most competitive applicants were prioritized for participation in CIPP, and less competitive applicants were disapproved for participation in the program. Further, service-specific guidance includes limitations on participation by servicemembers in certain career fields, such as Army medical personnel and some officers in the Navy Chaplain Corps and Judge Advocate General’s (JAG) Corps, as well as certain enlisted nuclear personnel. According to Navy and Air Force officials, additional career fields that require sustained proficiency (such as operating weapons systems or piloting aircraft), while not restricted from participation in CIPP, may have restrictions on breaks in service. For example, Navy officials stated that officers in the submarine community must receive a waiver to go longer than 3 years without a sea tour, and if officers exceed 5 years without a sea tour they can no longer work in the submarine community—this could occur if an officer took a 3-year sabbatical followed by a 3-year shore tour. 
According to Navy officials, if individuals in these communities participate in CIPP, measures are taken to ensure that they do not exceed timeframes that would result in the loss of their ability to serve in their community. Additionally, Navy and Air Force officials stated that pilots who do not have a minimum number of flight hours within a certain time period are no longer certified to operate their aircraft, and are required to complete additional training to be recertified. A Navy official stated that pilots are not disqualified from their position; however, additional training further extends the officer’s time out of operational service, which may affect the officer’s promotion potential. Military culture—Officials from each service also stated that participation may be influenced by military culture, and that servicemembers have the perception that a break in service may have a negative effect on upward advancement. Specifically, officials from all the services stated that servicemembers may not trust assurances as to how a break in service would be viewed by promotion boards. For example, one participant was concerned that “a break in service would be viewed as taking an off-ramp, an easy path, taking out of the fast lane,” but upon returning from sabbatical has been reassured by knowledge of other participants who have returned from their sabbaticals and received promotions. 
Another participant reported being “told explicitly by my chain of command [before entering the program] that my career would suffer”; and another reported that upon returning from sabbatical the servicemember would “meet people, sadly even some senior leaders, who are not familiar with the program and assume I have decided to prioritize family over career or assume I do not want to [be] competitive for advancement.” CIPP authorizing language includes provisions designed to mitigate any potential negative effect of a sabbatical on career advancement, but according to Army and Navy officials, until more CIPP participants return from sabbaticals and demonstrate career advancement, servicemembers may be hesitant to participate. Financial constraints—The salary that servicemembers receive during the sabbatical period is equivalent to approximately 2 days of pay per month. Additionally, according to DOD policy, servicemembers may not receive special or incentive pay or bonus payments while on sabbatical. Officials from the Army and the Navy stated that participation in CIPP likely will remain limited because servicemembers need financial resources to support themselves and their families during the sabbatical. One of the CIPP participants who responded to our questionnaire emphasized the need to have another source of income while participating. Another participant reported the opinion that CIPP “gives [options] that are not available in any other program. However, the deal is not that great for the member—mainly because of the monetary hit. Since a member is coming back, I think it is possible to allow a person to receive some pay while participating in CIPP.” In February 2009, OUSD(P&R) issued a directive-type memorandum that authorized—but did not require—the Secretary of each military department to implement CIPP. According to the memorandum, if the services did implement CIPP, they were required to develop a method to evaluate the program. 
Specifically, the memorandum stated that the services should “have the appropriate oversight, analytical rigor, and proper evaluation methodologies” to evaluate the pilot. In September 2015, OUSD(P&R) reissued the memorandum and, among other things, included a requirement for each service to report to OUSD(P&R) annually on the status and effectiveness of the program. This report is to include information on the demographics of CIPP applicants, criteria used for selecting applicants, an assessment of the effectiveness of the program, and recommendations for legislative or administrative actions for the modification or continuation of the CIPP. However, neither DOD nor the services have developed a plan for evaluating the extent to which the pilot program is an effective means to retain servicemembers. The updated memorandum also clarifies DOD’s policy on servicemember benefits while on sabbatical and includes a requirement for each service to report to OUSD(P&R) on June 1 of each year on the program’s progression. More specifically, based on the revised guidance, beginning June 1, 2016, the services will be required to provide OUSD(P&R) an evaluation of whether: the authorities for CIPP provide an effective means to enhance the retention of participant servicemembers possessing critical skills, talents, and leadership; the career progression of participant servicemembers has been or will be adversely affected; and CIPP is useful in responding to the personal and professional needs of individual servicemembers. These reporting elements are also required in the services’ final report to Congress, due March 2023. Interim reports on the implementation and current status of the pilot programs are due in 2017 and 2019. DOD has proposed expansion of the pilot, and the proposed fiscal year 2016 NDAA includes language that will remove the pilot’s participation cap and some restrictions on participation. 
Additionally, DOD officials stated that CIPP should be made available permanently; however, without an evaluation of the program, the basis for DOD’s proposed changes to the program is unclear. We have identified key features that should be included in pilot program evaluation plans, and along with private professional auditing and evaluation organizations, we have found that a well-developed and documented evaluation plan can help ensure that agency evaluations generate performance information needed to make effective program and policy decisions. Well-developed evaluation plans include key features such as:

- well-defined, clear, and measurable objectives;
- criteria or standards for determining pilot-program performance;
- clearly articulated methodology, including sound sampling methods, determination of appropriate sample size for the evaluation design, and a strategy for comparing the pilot results with other efforts;
- a clear plan that details the type and source of data necessary to evaluate the pilot, methods for data collection, and the timing and frequency of data collection; and
- a detailed data-analysis plan to track the program’s performance and evaluate the final results of the project.

Although the services are required to evaluate the effectiveness of CIPP, currently they do not have any plans for evaluating the program. Without a plan for evaluating the pilot that includes these key features, there will be limited assurance that the evaluations conducted will provide the information needed to make decisions about the future of CIPP. Moreover, the establishment of a plan including key features such as well-defined, clear, and measurable objectives and standards for determining pilot-program performance may aid in addressing some of the challenges posed by the pilot’s timeline.
Prior to the establishment of the June 2016 OUSD(P&R) reporting requirement, officials from all four services raised concerns about their ability to evaluate the effectiveness of the program so soon after implementation. Specifically, Marine Corps and Army officials stated that it is too early to determine the program’s effect on retention, and that it can take several years after a participant starts a sabbatical to determine whether the program contributed to retention. Marine Corps officials stated that if a participant took the maximum 3-year sabbatical followed by a 6-year obligated service period, it could take up to 9 years to determine whether the individual would decide to stay in the armed services beyond his or her period of obligated service. As of July 2015, of the 133 program participants, 5 have completed the obligated service period. Putting plans in place for how the pilot will be evaluated can guide the services on the data they need to collect as the pilot progresses, and can better position them to assess the pilot’s performance. According to Navy officials, CIPP has provided an option for the Navy to respond to the personal needs of servicemembers, and they believe the program has helped to retain servicemembers who otherwise might have left the military. Additionally, a DOD budget analysis document states that the Navy will retain a servicemember for a longer time period by using a combination of monetary and non-monetary incentives than would have been possible using only a single incentive. According to this document, in the Navy’s experience, financial incentives alone have not been adequate to retain certain categories of servicemembers, such as nuclear-trained surface warfare officers and senior nuclear-trained enlisted sailors serving on submarines and aircraft carriers. Navy CIPP participants have come from a range of career fields, including aviators, engineers, medical personnel, nuclear-trained surface warfare officers, and others. 
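The retention-assessment timeline the Marine Corps officials describe follows directly from CIPP's payback ratio: each month of sabbatical incurs 2 months of obligated service. A minimal sketch of that arithmetic (illustrative only; the function name and structure are ours, not DOD's):

```python
# Sketch (not from the report): CIPP obligated-service arithmetic.
# Each month of sabbatical incurs 2 months of obligated service, so the
# earliest point at which retention beyond the obligation can be observed
# is sabbatical + 2 * sabbatical = 3 * sabbatical.

def cipp_timeline_years(sabbatical_years: float) -> dict:
    """Return the obligated-service period and total evaluation horizon."""
    obligated = 2 * sabbatical_years  # 2-for-1 payback ratio
    return {
        "sabbatical": sabbatical_years,
        "obligated_service": obligated,
        "total_horizon": sabbatical_years + obligated,
    }

# Maximum 3-year sabbatical -> 6 years of obligated service -> 9-year
# horizon, matching the Marine Corps officials' estimate.
print(cipp_timeline_years(3))
```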
Navy officials stated that they are not using CIPP to address any specific critical skills, but that a servicemember’s occupation is given consideration during the CIPP approval process. According to the Navy’s 2011 interim report to Congress, CIPP applicants need certain qualifications, including a record of demonstrating strong and sustained performance in challenging positions, leadership, professional skills, resourcefulness, ability or potential to contribute to and succeed in the Navy, and exemplary personal behavior and integrity. For example, according to a Navy CIPP document, a Petty Officer Second Class was identified by the JAG Corps as a servicemember who displayed the aptitude, work ethic, and talent needed to serve as an attorney. This individual was encouraged to take a sabbatical to complete her degree, earn a Juris Doctorate, and apply for a commission in the JAG Corps upon return from the sabbatical. After a 36-month sabbatical, this servicemember earned a commission in the JAG Corps and became an attorney in the Navy. In addition, officials stated that a career sabbatical may help to address the work-life balance that cannot be achieved through other human-capital programs. For example, one participant who responded to our questionnaire reported: “[I] believe [CIPP] provides a suitable option for work/life balance that helps offset goals/issues that cannot be addressed while on active duty and gives sailors an option besides getting out entirely.” In particular, officials stated that they are concerned that the Navy’s recurring sea-tour requirement may result in the loss of servicemembers with short-term personal needs or skill sets that are in demand in the private sector. For example, another respondent reported: “[CIPP] is a great option for sailors who need to take a break from the arduous duty and demands of the Navy.
Additionally, it can give sailors who are thinking about leaving the Navy the experience of what it is like to be in the civilian sector.” The Navy collects information from participants, both when they start their sabbatical and when they return, about the extent to which CIPP was a factor in the participants’ choice to stay in the Navy; whether participants intend to make the Navy their career; whether participants would recommend CIPP to other servicemembers; and whether CIPP has negatively affected their career. Also, a Navy CIPP document provided examples of participants who fared well with their career milestones following their return to active duty. For example, according to CIPP program managers, one officer was selected for promotion following sabbatical, and two other officers were selected for administrative screening boards upon their return. Our questionnaire asked CIPP participants if, since returning to active duty, they have been told or otherwise experienced something specific that indicated CIPP participation might affect their career advancement. The responses were mixed. We received examples expressing the view that use of a sabbatical for educational purposes was positive because the education received while on sabbatical was beneficial for career advancement. Conversely, there were negative examples reporting that the Navy chain of command views the break in service as a “lack of commitment,” or “leaving [the] community while others continued to work.”

Congress authorized CIPP as a pilot program to help the services offer greater flexibility in career paths for servicemembers with the hope of increasing the retention of personnel with critical skills. All of the military services have implemented CIPP, and DOD officials have stated that the program should become permanent. Beginning in June 2016, the services will be required to evaluate and report annually on the effectiveness of the pilot.
However, they do not have a plan to guide these evaluation efforts and help determine the extent to which the pilot program is an effective means to retain servicemembers. Without a plan that includes key features for evaluating CIPP’s value as a retention tool, DOD will be unlikely to determine the extent to which CIPP is achieving its intended purpose and thereby inform decision makers as to whether it should become a permanent program.

To assist DOD in determining whether CIPP is meeting its intended purpose of enhancing retention and providing greater flexibility in the career path of servicemembers, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness, in collaboration with the service secretaries, to develop and implement a plan to evaluate the pilot that includes key features such as well-defined, clear, and measurable objectives and standards for determining pilot-program performance.

We provided a draft of this report to DOD for review and comment. In written comments, which are reprinted in their entirety in appendix II, DOD concurred with our recommendation. DOD noted that it recognizes the importance of developing well-defined measures to evaluate the effectiveness and utility of CIPP. DOD also provided technical comments, which we have incorporated in the report where appropriate.

We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Under Secretary of Defense for Personnel and Readiness, the Chairman of the Joint Chiefs of Staff, and the Secretaries of the military departments. The report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made key contributions to this report are listed in appendix III.

The Navy implemented CIPP in 2009, followed by the Marine Corps in 2013, and the Air Force and Army in 2014; as of July 2015, the services had approved 161 servicemembers to participate in CIPP. The Navy has approved the highest number of participants, and as of July 2015, 37 participants have completed sabbaticals and returned to active duty. Table 1 shows the number and demographics of CIPP participants for each military service.

Navy—From 2009 to July 2015, 130 Navy servicemembers applied to participate in CIPP: 111 were approved, 11 were disapproved, 6 withdrew their applications before a final decision had been made, and 2 applications are pending. Of the 111 approved, 18 declined the offer. As of July 2015, 37 had completed sabbaticals. Of these 37, 1 separated before completing obligated service, and 5 have completed their CIPP-related obligated service. Of these 5, 1 has since left active duty for the Navy Reserves, and 1 has since separated from the Navy. Participants used the program for several purposes, including pursuing higher education, supporting family (caring for ailing parents or young children), and staggering career timelines for dual-military spouses.

Air Force—In 2014, 46 Air Force servicemembers applied to participate in CIPP and 35 applicants were approved (1 was removed from the program for quality reasons that arose after selection). Of the remaining 34 selected, 4 declined the offer, and 30 accepted. As of July 2015, 23 participants had begun a sabbatical. The Air Force disapproved 11 applicants because they did not meet basic eligibility requirements or, according to Air Force officials, did not have competitive performance ratings.
Participants plan to use their sabbaticals to, among other things, pursue education, care for a family member or start a family, and realign assignment timing or date of rank with an active-duty spouse to facilitate joint spouse assignment.

Army—In 2014, 10 Army servicemembers applied to participate in CIPP; 1 was determined to be ineligible due to a remaining service obligation, and 9 were approved. Of the 9 selected, 3 declined the offer in favor of other personnel actions, and 6 accepted. The 6 participants were expected to begin sabbaticals in summer 2015. Participants plan to use their sabbaticals to pursue higher education, address family and medical issues, travel, and align assignment cycles with an active-duty spouse.

Marine Corps—In 2013, 3 Marines applied and were approved, but 1 subsequently withdrew the application. In 2014, 2 applied and were approved, but 1 withdrew. In 2015, 2 applied; 1 was accepted, and 1 was determined to be ineligible. As of July 2015, 3 of the 4 total participants were on sabbatical. Applicants requested the sabbaticals to move with a spouse and attend graduate school, to focus on family and children, or to attend seminary.

In addition to the contact named above, Kimberly Seay (Assistant Director), Vijay Barnabas, Tim Carr, Amie Lesser, Felicia Lopez, Richard Powelson, Tida Reveley, and Michael Silver made major contributions to this report.
Congress authorized CIPP in 2009 to provide greater flexibility in career paths for servicemembers and to enhance retention. CIPP allows servicemembers to take sabbaticals of up to 3 years in exchange for 2 months of obligated service for each month of sabbatical taken. The Navy is the only service to have participants who have completed sabbaticals. Senate Report 113-211 included a provision for GAO to examine CIPP, and particularly the Navy's experience with it. This report (1) evaluates the extent to which participation in CIPP has reached authorized participation limits and DOD has developed a plan for evaluating whether the program is an effective means to retain servicemembers; and (2) describes the Navy's reported experience with CIPP as a tool for aiding retention by providing career flexibility. GAO reviewed CIPP legislation and implementation guidance, interviewed DOD and service officials responsible for CIPP, and compared the information obtained against key features of pilot evaluation plans such as clear, measurable objectives and standards for determining pilot-program performance. GAO also reviewed Navy efforts to implement CIPP and, using a GAO-developed questionnaire, collected information from Navy CIPP participants who had completed their sabbaticals. Participation in the Department of Defense's (DOD) Career Intermission Pilot Program (CIPP)—a pilot program expiring in 2019 that allows servicemembers to take up to a 3-year break in service in exchange for a period of obligated service when they return—has remained below statutorily authorized limits, and officials have identified factors that could be affecting CIPP participation, but DOD has not developed a plan for evaluating whether CIPP is an effective means to retain servicemembers. DOD-wide participation in CIPP has been at less than half the authorized limit of 160 participants—up to 40 participants for each of the four services—per calendar year (see figure below). 
Service officials stated that factors affecting participation include statutory requirements, such as eligibility criteria, and military culture, among others. CIPP-authorizing legislation and DOD guidance require the services to report on the effectiveness of the pilot, including its effect on retention and program costs; however, neither DOD nor the services have developed a plan for evaluating the pilot program. GAO has reported that a pilot program should have a well-developed and documented evaluation plan, including key features such as well-defined, clear, and measurable objectives and standards for determining pilot-program performance. Moreover, DOD has proposed expansion of the pilot, and officials stated that CIPP should be made available permanently. However, the basis for these proposals is unclear, and without a well-developed plan for evaluating the pilot, there will be limited assurance that the evaluations conducted will provide the information needed to make decisions about the future of CIPP.

Figure: Total Number of Participants Approved to Participate by All Military Services for Calendar Years 2009 through July 2015

According to Navy officials, CIPP has provided an option for the Navy to respond to the personal needs of servicemembers, and they believe it has helped to retain servicemembers who otherwise might have left the military. CIPP participants also provided GAO with examples of how the program allowed them to address work-life balance challenges, such as managing deployment schedules and caring for family, that could not be achieved using other options.

GAO recommends that DOD develop and implement a plan to evaluate whether CIPP is enhancing retention. DOD concurred with GAO's recommendation.
The purpose of SORNA is to protect the public from convicted sex offenders and offenders against children by providing a comprehensive set of sex offender registration and notification standards. These standards require convicted sex offenders to register prior to their release from imprisonment or within 3 days of their sentencing, if the sentence does not involve imprisonment. Further, the standards require these offenders to register and keep the registration current in the jurisdictions in which they live, work, and attend school. SORNA implementing jurisdictions are to maintain a jurisdiction-wide sex offender registry and website and adopt registration requirements that are at least as strict as those SORNA established. Convicted sex offenders are required to continue to update or verify their registration information, the duration and frequency of which depends on the seriousness of the crimes they committed. For example, SORNA requires a lifetime registration for convicted offenses in the most serious class, such as aggravated sexual abuse (tier III); a 25-year registration for many felony sex offenses or sexual exploitation crimes with minors (tier II); and a 15-year registration for convicted offenses that do not support a higher classification, such as possession of child pornography (tier I). SORNA requires in-person appearances, depending on the convicted sex offender’s tier classification (tier I annually, tier II semiannually, and tier III quarterly), at established registration locations to update or verify registration information. Under the act, implementing jurisdictions are to submit basic identifier information for each convicted sex offender, such as the offender’s name, Social Security number, and address, to the FBI’s National Sex Offender Registry (NSOR), which is a subfile of the National Crime Information Center (NCIC). In addition, these jurisdictions are to submit the sex
offender’s fingerprints to the Integrated Automated Fingerprint Identification System (IAFIS), the sex offender’s palm prints to the National Palm Print System (NPPS), and a DNA sample to the Combined DNA Index System (CODIS). According to FBI officials, the FBI’s Criminal Justice Information Services (CJIS) Division has authorized state CJIS Systems Agencies (CSA) to determine which entities within their states, including tribes, can access these federal criminal databases. The implementing jurisdictions are also to provide information from the registry about the sex offender to (1) appropriate law enforcement agencies (including probation agencies), and each school and public housing agency, in each area in which the individual resides, is an employee, or is a student; (2) each jurisdiction where the sex offender resides, is an employee, or is a student; (3) each jurisdiction from or to which a change of residence, employment, or student status occurs; and (4) any organization, company, or individual who requests such notification, among others.

Under SORNA, 353 of the 566 federally recognized tribes are ineligible to implement the act, while 214 tribes are eligible. SORNA based tribes’ eligibility on whether the tribe is subject to the criminal jurisdiction of a state under section 1162 of title 18, United States Code, making those tribes that are subject to such jurisdiction ineligible to implement the act. Section 1162 gives states criminal jurisdiction over offenses committed by or against Indians in the areas of Indian country within 6 states—Alaska, California, Minnesota, Nebraska, Oregon, and Wisconsin. As a result, tribes are generally ineligible to implement SORNA for tribal lands located within these states. The states are responsible for incorporating these tribal lands in state-wide SORNA implementation efforts.
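SORNA's tier-based registration durations and in-person verification frequencies, summarized earlier, can be expressed as a simple lookup. This sketch is purely illustrative; the data structure and function names are ours, not part of SORNA or this report:

```python
# Sketch (illustrative only): SORNA's tier-based registration schedule
# as summarized in this report. Durations are in years; None = lifetime.
SORNA_TIERS = {
    "I":   {"duration_years": 15,   "in_person_verification": "annually"},
    "II":  {"duration_years": 25,   "in_person_verification": "semiannually"},
    "III": {"duration_years": None, "in_person_verification": "quarterly"},
}

def registration_requirements(tier: str) -> str:
    """Summarize the registration duration and verification frequency for a tier."""
    req = SORNA_TIERS[tier]
    duration = ("lifetime" if req["duration_years"] is None
                else f"{req['duration_years']}-year")
    return f"Tier {tier}: {duration} registration, verified {req['in_person_verification']}"

print(registration_requirements("III"))
```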
SORNA provided tribes 1 year from its July 27, 2006, enactment to choose—by resolution or other enactment of the tribal council or comparable governmental body—to either retain the tribe’s SORNA implementation authority and act as a registration jurisdiction or delegate the tribe’s registration and notification functions to the state. Delegation to the state, however, is automatic for a tribe that did not affirmatively elect to implement the act itself prior to July 27, 2007. SORNA authorizes tribes that retained their implementation authority to enter into cooperative agreements with states and decide which functions to maintain or delegate to the state. The Attorney General can also delegate a tribe’s SORNA registration and notification functions to a state if the Attorney General determines that the tribe has not substantially implemented the requirements of SORNA and is not likely to become capable of doing so within a reasonable amount of time. 42 U.S.C. § 16927(a). Within DOJ, the Office of Sex Offender Sentencing, Monitoring, Apprehending, Registering, and Tracking (SMART Office) is responsible for providing jurisdictions with technical assistance and grant funds, and administering the standards for determining whether jurisdictions have implemented the law. Eligible jurisdictions submit a substantial implementation package that outlines their implementation efforts for SMART Office review. The package can include tribal laws pertaining to sex offender registration, administrative policies and procedures, and the jurisdiction’s public sex offender website, among other things. The SMART Office developed the SORNA Substantial Implementation Checklist tool (described in app. II) that jurisdictions can use to prepare the package. After reviewing the package, the SMART Office determines whether the jurisdiction has “substantially implemented” or “not substantially implemented” the minimum requirements of SORNA.
To do so, the SMART Office must follow the standards set forth in the (1) act; (2) SORNA National Guidelines, issued in July 2008; and (3) Supplemental Guidelines for Sex Offender Registration and Notification (Supplemental Guidelines), issued in January 2011. The substantial implementation standard does allow for some latitude, and accordingly, the National Guidelines require the SMART Office to consider, on a case-by-case basis, whether jurisdictions’ rules or procedures substantially implement SORNA. Pursuant to SORNA, DOJ has designated the U.S. Marshals Service (USMS) as the lead federal agency in three key missions: to assist state, local, tribal, and territorial authorities in the location and apprehension of noncompliant convicted sex offenders; to investigate violations of the criminal provisions of the act; and to identify and locate convicted sex offenders displaced as a result of a major disaster. USMS has designated a senior inspector in each of its 94 district offices to carry out the agency’s SORNA responsibilities. Within the Department of the Interior (Interior), the Bureau of Indian Affairs (BIA) is responsible for supporting tribes in their efforts to ensure public safety and administer justice within their reservations, as well as to provide related services directly or to enter into contracts or compacts with federally recognized tribes to administer the law enforcement program. Within BIA, the Office of Justice Services (OJS) investigates major federal crimes, including sexual offenses, committed on, or involving, Indian country. OJS provides oversight and technical assistance to tribal law enforcement programs.

As of August 2014, 164 of the 214 (nearly 77 percent) eligible tribes had retained their authority to implement SORNA, while the remainder did not retain their authority because they either elected to delegate their authority to a state (24 tribes) or the SMART Office delegated their authority to a state (26 tribes), as shown in figure 1.
According to SMART Office representatives, the agency delegated the SORNA implementation authority of 26 tribes primarily because the tribes (1) chose not to delegate their authority although they did not plan to implement the act, because they thought delegating their authority was equivalent to relinquishing their sovereignty; (2) indicated that they lacked the necessary resources to implement the act in a reasonable amount of time; or (3) were nonresponsive to the SMART Office’s repeated inquiries to gauge the tribes’ interest in implementing the act. Our interviews with tribal council and law enforcement officials from 2 of the 24 tribes that elected to delegate their authority to a state similarly revealed that these tribes chose to delegate their authority because they did not have the necessary resources to implement the act. Furthermore, officials from 1 tribe stated that implementation did not seem worth the expense for their tribe considering the small number of tribal members, the fact that none of the tribal members live on tribal land year-round, and the remote location of their tribal land. According to the SMART Office, 43 percent (71 of 164) of tribes that retained their authority to implement SORNA have substantially implemented the act; nearly 43 percent (70 of 164) have submitted an implementation package, but the SMART Office has not yet made a determination; and 13 percent (22 of 164) have not submitted a complete package. The SMART Office has determined, to date, that 1 tribe has not substantially implemented SORNA, as shown in figure 2 and appendix III. For the implementation status and location of all eligible tribes, see figure 3. According to the SMART Office, the number of tribes that have submitted implementation packages for review as well as the number of tribes that have substantially implemented SORNA continue to increase. 
Representatives of two of the three tribal and law enforcement associations we interviewed in January 2014 said that the number of tribes that had substantially implemented SORNA—at the time, 49 tribes—was far greater than they had expected, given the magnitude of the implementation challenges that tribes face, such as limitations with information sharing, high staff turnover, and a lack of cooperation between tribes and local law enforcement. We discuss such challenges and steps to address them in detail later in this report. According to the tribes that have submitted an implementation package for review and that responded to our survey, it took a majority of these tribes 2 years to submit a complete implementation package to the SMART Office for review. SMART Office representatives said that it takes the office, on average, 6 months from the time it receives a tribe’s package to begin reviewing it. This is because the office’s review depends on a number of factors, such as the availability of SMART Office staff to review implementation packages. At the time of our review, SMART Office representatives indicated that there were three policy advisers on staff to review implementation packages. Additionally, the time it takes for the SMART Office to determine a tribe’s implementation status depends on how quickly the tribe can address any issues the SMART Office identifies in its review, and can be affected by factors such as the frequency with which a tribal council meets and can thereby address the office’s questions.
Specifically, 7 of the 19 tribes did not submit a package because they are in the process of amending their tribal codes, 4 tribes indicated that they needed additional time to complete and submit their packages, and 3 others either currently have agreements or are in the process of entering into agreements with state and local law enforcement agencies to assist with SORNA implementation. Fourteen of the 19 tribes plan to submit a package by the end of calendar year 2014, 2 tribes expect to submit a package in calendar year 2015 or later, and 3 tribes are uncertain about when they will submit a package. SMART Office representatives stated that the office has granted each of these 22 tribes additional time to submit its package. SMART Office representatives said that they have not set a final deadline for tribes to substantially implement SORNA and that the office will continue to work with every tribe that shows interest and is working on implementing the act. About 76 percent of tribes that retained their authority to implement SORNA reported experiencing at least one major or minor challenge to implementing the act. These tribes most frequently reported inability to submit convicted sex offender information to NCIC and NSOR as a major challenge. DOJ and BIA, as well as state and local law enforcement agencies, have taken actions to help address these challenges. However, additional steps to ensure that states notify tribes about registered sex offenders who plan to live, work, or attend school on tribal lands upon release from state prison, and to identify what, if any, assistance tribes may need in order to implement SORNA, would help further address tribes’ challenges. 
Tribal officials from 98 of the 129 tribes (76 percent) that retained their authority to implement SORNA and responded to our survey questions on challenges to implementing SORNA reported that their tribes experienced at least one major or minor challenge to implementing the act. As shown in figure 4, the major challenges tribes most frequently reported included inability to submit convicted sex offender information to NCIC or NSOR; lack of notification from state prisons about sex offenders who indicate that they plan to live, work, or attend school on tribal lands; insufficient staff; and inability to cover costs to implement SORNA or maintain the tribe’s registry.

Under SORNA, as well as DOJ’s National Guidelines for SORNA Implementation, jurisdictions are to provide convicted sex offender information for inclusion in NSOR, which is part of NCIC. However, 39 of the 129 (about 30 percent) tribes that responded to our survey reported difficulties meeting this requirement as a major or minor challenge to SORNA implementation. These tribes, as well as the federal, state, and tribal sex offender registry and law enforcement officials we interviewed, cited reasons why some tribes do not submit convicted sex offender information to NCIC or are able to do so only with the assistance of other state and local law enforcement agencies.

Cost of NCIC access. Ten tribes that responded to our survey reported that they cannot cover the associated costs for NCIC access. According to federal, state, and tribal officials, these costs can include purchasing servers or NCIC terminals, the cost of the required high-speed Internet connection, or the fees required to obtain NCIC access.

Requirements for submitting information to NCIC. Thirteen tribes reported that they do not meet certain requirements for submitting information to NCIC.
CJIS has established requirements that all law enforcement agencies seeking to submit information to federal criminal justice databases must meet in order to ensure the quality and the security of criminal justice data entered into these databases. For example, some tribes do not meet CJIS requirements because they do not have a criminal justice agency (police force, court, etc.) or an officer available 24 hours, 7 days a week to respond to any inquiries about persons or property included in NCIC. State statutes or policies. Twelve tribes reported that state statutes or policies prevent them from submitting to NCIC through the states’ criminal justice information systems. According to FBI officials, CJIS Systems Agencies determine which entities within their states, including tribes, can access these federal criminal databases through state systems. However, some states, such as New York, have statutes or policies prohibiting CSAs from granting tribal law enforcement entities access to their state switches and, therefore, to NCIC through such switches. According to DOJ officials, the public safety benefit of including convicted sex offender information in NSOR is that if a law enforcement officer comes into contact with an individual, such as through a traffic stop or an arrest, the officer can query NCIC and learn that the individual is a convicted sex offender and potentially determine if the offender is in compliance with registration requirements. The officer can also enforce any requirements that the officer’s jurisdiction may have with regard to convicted sex offenders. For example, according to tribal officials, some tribal jurisdictions banish convicted sex offenders from their lands or prohibit offenders from living or working within proximity to schools or day care centers. Federal and state, as well as local, law enforcement agencies have undertaken efforts to help address such barriers, although the efforts have some limitations, as discussed below. 
Memorandums of agreement (MOA). Federal, state, and local law enforcement agencies have entered into MOAs with tribes, whereby these law enforcement agencies generally agree to enter convicted sex offender information into NSOR on behalf of the tribes. For example, 28 of 129 (about 22 percent) tribes that responded to our survey reported that county or local law enforcement enters information in NSOR for their tribe, while another 7 tribes indicated that BIA enters this information for them. However, DOJ and BIA officials and representatives from the National Criminal Justice Association and the National Congress of American Indians whom we interviewed noted that not all tribes have good relationships with state or local law enforcement agencies, and that these agencies may not be willing to assist the tribes. In addition, according to DOJ, tribal, and local law enforcement officials, when another law enforcement agency enters information into NSOR on behalf of a tribe, the information is often not associated with the tribe because state or local law enforcement agencies frequently use their own Originating Agency Identifier (ORI) number, as opposed to the tribe’s number, in NSOR. As a result, the convicted sex offender appears in NSOR as having registered with the entity that submitted the information rather than with the tribe. Representatives we interviewed from the SMART Office and CJIS, as well as local and tribal law enforcement, stated that these data entry practices can also lead to imprecise data in NSOR. For example, at our request, CJIS identified 22 tribes as having a total of 247 registered sex offenders in NSOR. However, 120 of 129 tribes that responded to our survey reported that they had a total of 2,167 registered sex offenders on their lands.
According to CJIS officials, this discrepancy should not pose a public safety threat because an officer querying an individual in NCIC will know immediately if the individual is a convicted sex offender. Further, CJIS and USMS officials said that the FBI and USMS generally do not use NSOR as their primary source for informing decisions, such as funding or resource allocation for sex offender related operations.

Collaboration to remove state barriers to tribes’ NCIC access. The SMART Office has collaborated with state and tribal officials in three states to successfully address state policies preventing tribes in these states from submitting information to NSOR through the state criminal justice database. SMART Office representatives told us that until recently, tribes in Arizona and Washington were unable to submit convicted sex offender information to NSOR because of various state statutes or policies. SMART Office representatives reportedly worked closely with state and tribal officials to provide alternative conduits that will allow tribes to submit convicted sex offender information directly to NSOR. SMART Office representatives also reported that they have tried to persuade officials in at least 2 other states to implement similar solutions, but without success.

Justice Telecommunications System (JUST). DOJ, with the assistance of the Office of Tribal Justice (OTJ) and the Justice Management Division (JMD), established JUST in 2010 as a pilot project to provide tribes that meet FBI requirements with NCIC access. OTJ representatives said that DOJ provided $1 million for the pilot under the Community Oriented Policing Services (COPS) Office and has used approximately $250,000 to provide 21 tribes with NCIC access—6 of which are eligible to implement SORNA. According to OTJ, all tribes that have requested NCIC access through JUST thus far have received access.
OTJ representatives said that the JUST funding covers tribes’ costs for access fees as well as other licensing and transaction fees; however, tribes will have to pay these fees on their own once the remaining $750,000 for JUST runs out. According to OTJ and JMD representatives, for JUST to become a viable and long-term solution for tribes’ lack of NCIC access, DOJ will need to identify and secure a more sustainable source of funding. Identifying sustainable sources of funding, according to these officials, is one of the goals of the Tribal Public Safety Working Group, discussed below. The Office on Violence Against Women, a component of DOJ, is responsible for providing federal leadership in developing the national capacity to reduce violence against women and administer justice for and strengthen services to victims of domestic violence, dating violence, sexual assault, and stalking. The Office of Justice Programs, within DOJ, provides leadership to federal, state, local, and tribal justice systems by disseminating state-of-the-art knowledge and practices on crime-fighting strategies across America and providing grants for the implementation of these strategies. The Tribal Public Safety Working Group first met on May 12, 2014, and has established both short-term and long-term goals. In the short term, OTJ representatives said that by using information from outreach efforts and other existing sources, the working group plans to determine the extent to which each tribe has or wants access to all federal criminal databases, and to identify a solution based on the circumstances of each tribe.
OTJ representatives reported that the working group’s long-term goals include identifying sustainable funding sources; identifying best practices for addressing specific barriers; evaluating potential systematic solutions, such as establishing a federal CSA for tribes; and reviewing federal rules and regulations to identify those that should be revised to remove any regulatory barriers to tribes’ access to federal criminal databases. The Tribal Public Safety Working Group’s plans and initial activities are in line with addressing the types of barriers to NCIC access that tribes identified. However, given that the Tribal Public Safety Working Group is in its early planning phase, it is too early to evaluate how its efforts will help provide tribes with access to NCIC. Of the 129 tribes that responded to our survey, 41 (approximately 32 percent) reported that their state prisons had not notified them when the prisons released a sex offender to the tribes’ lands. In addition, about half of these 41 tribes reported that lack of notification when offenders are released from state prisons was a major challenge to SORNA implementation, with some consequences. Tribal officials whom we interviewed and who responded to our survey reported instances of offenders on tribal lands who may not be monitored or registered as required because the tribes had not been notified of the offenders’ release from prison. For example, one tribal law enforcement official told us that he learned of a convicted sex offender with an extensive criminal record who was released from prison and lived on the tribe’s land only from anonymous letters the official received. He said that because his tribe received no notification from state prison officials about this sex offender, the sex offender had lived on the tribe’s territory unmonitored for over a year.

Tribal officials also highlighted a consequence of not being notified about registered sex offenders who plan to live, work, or attend school on tribal land upon release from state prison: tribes may not be able to enforce their own laws governing the extent to which convicted sex offenders can live, work, or attend school in their communities. For example, a state tribal liaison officer told us that 9 of the 29 tribes in his state have laws that ban all sex offenders or restrict the types of sex offenders allowed to live on the tribes’ lands. Similarly, officials from 2 tribes we interviewed said that their tribes either did not allow convicted sex offenders to live on the tribes’ lands or had more stringent laws than states regarding the proximity of a convicted sex offender’s residence to a school. Absent notification from states about registered sex offenders who plan to live, work, or attend school on tribal lands upon release from state prison, tribal authorities would not be aware of the presence of convicted sex offenders on their lands and therefore would be unable to enforce tribal law. Under SORNA, initial registration of a convicted sex offender is to take place prior to the offender being released from prison, and the registration jurisdiction must immediately forward the offender’s registration information to any other jurisdiction in which the sex offender is required to register. However, according to the SMART Office, in many states, the state prison is not a SORNA registering entity and therefore does not register the sex offender or notify other relevant registration jurisdictions prior to the offender’s release. In these instances, the sex offender’s initial registration may not take place until after the offender is released from prison and through a state registering entity, such as a county sheriff’s office.
In instances when a sex offender informs a state registering entity—whether that be the state prison, a county sheriff’s office, or another agency—that he or she plans to live, work, or attend school on tribal land, under SORNA, notification must be sent to tribes that retained their SORNA implementation authority. State prison and registry officials who responded to our requests for information provided two reasons why their states do not notify tribes when a registered offender indicates that he or she will be living, working, or attending school on the tribe’s land upon release from prison. In some cases, resolving these barriers may be difficult, but in other cases, there are promising practices that could help states to address barriers to notifying tribes. First, 10 of the 20 states that responded to our inquiry said that they do not have laws or policies that require that tribes be notified. Of the 41 tribes that reported that state prisons had not notified them, 22 are located in these states. In the absence of state laws or policies regarding notification to tribes, states may leave it up to the discretion of local law enforcement agencies, which may conduct the initial registration of the sex offender, to decide whether they will notify tribes. However, as discussed earlier, several federal, state, and tribal officials we interviewed acknowledged that not all tribes have cordial relationships with their local law enforcement organizations. As a result, this method of notification may not be effective for all tribes. Second, it can be difficult for state prisons, in particular, to determine if the address that the offender has identified as that person’s residence is located on tribal lands. According to state, local, and tribal officials we interviewed, unlike other jurisdictions, such as cities or counties, tribal lands are not always clearly identifiable by a ZIP code or even a city name.
However, the Federal Bureau of Prisons manages this challenge by asking the offender whether the address is located on tribal lands as part of the prison release procedures. In regard to federal notification requirements, officials from the Bureau of Prisons said that they have removed law-enforcement-sensitive information from their notifications. As of May 2014, they can now send prisoner release notifications to both tribal law enforcement and tribal sex offender registry officials to ensure that all tribes are notified when a sex offender is released from federal prison, as required by federal law. To assist jurisdictions to develop, organize, and submit their substantial implementation package materials, the SMART Office developed the SORNA Substantial Implementation Checklist tool. To complete the checklist, states must identify the specific statute or regulation that is to fulfill each SORNA requirement, including forwarding sex offender information to other relevant SORNA jurisdictions. The SMART Office also requires that states that have a tribe or tribes eligible to implement SORNA located within their boundaries provide MOAs or other information-sharing agreements to facilitate notification of sex offender release. For states that have substantially implemented SORNA, the SMART Office uses this information as the basis for an additional section titled “Tribal Considerations” in these states’ Substantial Implementation Review reports. In some cases, states provided the SMART Office with plans for how they would notify these tribes until the tribes implemented SORNA themselves. Also, according to SMART Office representatives, under the flexibility built into SORNA, the SMART Office may determine that a state has substantially implemented the law if lack of notification to tribes was the only area of noncompliance.
This is because even though the tribe may not be aware of a convicted sex offender living on its land, it is likely that state or local law enforcement officials have registered and are aware of the sex offender, and can take steps to monitor the offender. However, the SMART Office still expects substantially implemented states to notify tribes following registration of a sex offender who plans to live, work, or attend school on tribal land. The SMART Office has opportunities to continue to work with substantially implemented states to ensure that they meet this notification requirement. According to SMART Office representatives, the office conducts annual compliance reviews to evaluate the extent to which substantially implemented jurisdictions remain in continued compliance. The office issued its substantial implementation assurance guidance that details the types of information that a SMART Office evaluator is to request when conducting these annual reviews, such as whether the state has signed any new MOAs with tribes. However, the guidance does not require that the evaluator request information on whether and how the state, or a designated agency within the state, notifies tribes following registration of a convicted sex offender who will be, or has recently been, released from state prison and indicates that he or she plans to live, work, or attend school on tribal land. By amending its substantial implementation assurance guidance, the SMART Office could better ensure that substantially implemented states are in compliance with SORNA requirements to notify tribes that retained their authority, thereby enabling these tribes to enforce their own laws pertaining to convicted sex offenders. The SMART Office is also in a position to encourage states that have not yet substantially implemented SORNA to make efforts to implement certain provisions of the law, such as the requirement to notify tribes. 
This is particularly important considering that we found that 7 of 12 nonimplementing states that have tribes that retained their SORNA authority and that responded to our inquiry do not notify tribes. Furthermore, 22 of the 41 tribes that reported state prisons had not notified them are located within these 7 states. The SMART Office has provided guidance and conducted outreach to encourage tribes, once they take responsibility for registering convicted sex offenders, to coordinate with their respective states to ensure that the tribes are notified about sex offenders who plan to live, work, or attend school on their lands upon release from state prison. According to SMART Office representatives, the agency has issued a policies and procedures manual that calls for tribes to, among other things, establish procedures that ensure that the tribe receives notification about a sex offender who is released from a state, county, or local jail, and has indicated that he or she will be living, working, or attending school on tribal land. SMART Office representatives also reported that they have widely encouraged tribes, as a best practice, to contact relevant state, county, and local officials to ensure that these officials are aware that the tribe is a registering jurisdiction and the officials have the tribe’s current information on who should receive notifications. SMART Office representatives reported conducting some outreach to states as part of the substantial review process, but acknowledged that more could be done to encourage states to provide notification to tribes. SMART Office representatives said that, prior to our review, they were not aware of tribes’ concerns that states were not notifying tribes about sex offenders who were released from prison and planned to live, work, or attend school on tribal lands. Therefore, these representatives did not think it was necessary at the time to provide any additional guidance to the states. 
Nevertheless, given our survey results, it may be beneficial for the SMART Office to expand upon its existing outreach efforts to encourage states to develop mechanisms to notify tribes that retained their SORNA implementation authority. SMART Office representatives told us that they often reach out to law enforcement and sex offender registry officials located in both implementing and nonimplementing states and that officials from these states attend various SORNA training events and conferences that the SMART Office sponsors. The representatives agreed that they could use these existing outreach efforts to educate states on the importance of, and best practices for, notifying tribes about sex offenders who have indicated that they plan to live, work, or attend school on tribal lands upon release from state prison. In doing so, the SMART Office will help ensure that implementing, as well as nonimplementing, states have the information they need to effectively notify tribes, and that tribes have the information they need to identify and monitor convicted sex offenders who live, work, or attend school on their lands, and to enforce their laws governing the extent to which convicted sex offenders can live, work, or attend school in their communities. Insufficient staff and difficulty covering the costs of SORNA implementation were each cited by 21 of 129 (16 percent) tribes as major challenges. Several tribal and federal officials, as well as subject matter experts that we interviewed, also reported that insufficient staff and the costs of SORNA implementation were major challenges that tribes faced when implementing SORNA. 
For example, BIA and SMART Office representatives and representatives from the National Congress of American Indians told us that because the majority of tribes did not have sex offender registries prior to SORNA, many tribes have not had the necessary resources for conducting sex offender registration, such as office space, staff, and equipment, and therefore would have had to incur significant costs to acquire these resources. In addition, the sex offender registry official from a tribe that has substantially implemented SORNA reported that his tribe spent approximately $173,000 over a period of 2 years to fund his salary, purchase the information technology infrastructure required to set up the tribe’s sex offender registry, and acquire a fingerprint scanner. Furthermore, according to BIA officials and subject matter experts, staff turnover can result in large lapses in some tribes’ SORNA implementation programs, because staff often leave their positions without providing documentation of past efforts toward implementing the law, requiring incoming staff to start the tribe’s implementation efforts anew. For example, according to representatives from the National Criminal Justice Association, 1 tribe has experienced up to six turnovers for its SORNA coordinator position. DOJ and BIA reported the following as ways in which their agencies have provided tribes with financial assistance, equipment, and staff to help address challenges related to limited resources:

The SMART Office provided $23.69 million in Adam Walsh Act Implementation Grant Program (AWA) funds to 125 tribes from fiscal years 2008 to 2013 and $1.92 million in Edward Byrne Justice Memorial Discretionary Grants Program funds to 17 tribes in fiscal year 2008 to assist with SORNA implementation.
The SMART Office also has provided tribes with training and technical assistance related to SORNA implementation as well as numerous tools, including a customizable web-based tool—the Tribe and Territory Sex Offender Registry System (TTSORS)—for tribes to use to set up their sex offender registries and public websites.

USMS has provided manpower and equipment to assist tribes with conducting operations to ensure convicted sex offenders are complying with their sex offender registration requirements.

BIA OJS assisted 19 tribes with SORNA implementation tasks, such as providing input on the tribes’ draft sex offender codes, assisting tribes with installing or configuring the tribes’ registry systems (i.e., TTSORS), and assisting tribes with identifying and implementing alternative means of submitting information to federal criminal databases.

In our survey, 7 tribes reported receiving assistance from BIA, 33 from USMS, and 69 from the SMART Office, and almost all of the tribes that reported they received assistance from these three agencies characterized the agencies’ assistance as very useful or moderately useful. In addition, tribal leaders we interviewed reported that Byrne and AWA grants have assisted tribes with offsetting the cost of SORNA implementation. Tribal officials responsible for implementing SORNA for 5 tribes reported that their tribes received AWA grant funds and had used the funds to cover various implementation expenses, including hiring additional staff, purchasing necessary equipment, and covering fees to query or submit information on convicted sex offenders to federal criminal databases (20 of the 26 tribes that reported challenges with covering SORNA implementation costs and insufficient staff had received AWA grant funds). A number of tribes that responded to our survey reported that they would like additional assistance from each of the three federal agencies.
Specifically, 40 tribes reported that they would like additional assistance from the SMART Office, 30 tribes reported that they would like additional assistance from BIA, and 22 tribes reported that they would like additional assistance from USMS. However, most of the tribes that reported wanting additional assistance from SMART and USMS had already received assistance from these two agencies and found the assistance to be useful. On the other hand, most of the tribes that reported wanting additional assistance from BIA reported that they had not received any prior assistance from BIA and raised concerns about this. Specifically, 26 tribes that responded to our survey indicated that BIA did not offer assistance to them at all (14 tribes), that they were unsure of the assistance BIA could offer (6 tribes), or that BIA did not provide the assistance they requested (6 tribes). In contrast, no tribes reported that the SMART Office or USMS had failed to offer them assistance or had refused their requests for assistance. Unlike the SMART Office and USMS, BIA does not have an explicit role in SORNA implementation. However, BIA is statutorily responsible for enforcing federal law and, with the consent of the tribe, tribal law in Indian country. Further, the agency is responsible for developing and providing training and technical assistance to tribal law enforcement, and for consulting with tribal leaders to develop regulatory policies and other actions that affect public safety in Indian country—responsibilities that are related to tribes’ implementation of SORNA. Moreover, BIA serves as the law enforcement agency for the 40 direct service tribes, 33 of which have retained their authority to implement SORNA. According to BIA OJS officials, BIA OJS districts have reached out to select tribes that retained their SORNA implementation authority.
However, these officials acknowledged that BIA OJS has not made a concerted effort to offer assistance to all tribes that retained their authority. According to a senior BIA OJS official, this is because BIA’s initial outreach was limited to tribes on a list the agency had received from the SMART Office of tribes that had elected to implement SORNA at the time, and therefore did not include all tribes that subsequently retained their authority to implement. As of July 2013, BIA reported that it had contacted each of the 33 direct service tribes that retained their SORNA implementation authority to determine the tribes’ implementation status and what, if any, assistance the tribes would like to have from BIA. BIA OJS reported that the agency provided some form of assistance to 19 of the 33 tribes that the agency contacted. Specifically, BIA OJS reported that it had provided office space to 2 tribes, purchased a fingerprint scanner for another tribe, assigned officers to 5 tribes to implement or perform some or all aspects of the tribes’ sex offender registry and notification systems, and provided technical assistance to the remaining 14 tribes. BIA OJS officials stated that some tribes refused OJS’s offer and others did not respond at all to the offer. Subsequently, BIA OJS officials reported that the agency conducted another round of outreach to tribes in preparation for a meeting with GAO in August 2014. However, BIA reached out only to the 33 direct service tribes that had retained their authority but had not yet implemented SORNA and did not contact the remaining 131 tribes that have retained their authority to implement SORNA—71 of which have not yet implemented the law.
Furthermore, BIA OJS officials reported that they asked each of the 33 tribes where the tribes were in the implementation process, what challenges they were currently facing, and what, if any, assistance BIA had previously offered the tribes, but did not ask what additional help these tribes would like to receive from BIA OJS. SMART Office representatives said that in addition to tribes’ lack of access to NCIC, lack of BIA assistance is the second most frequently reported SORNA implementation challenge that tribes have communicated to the SMART Office. A BIA OJS senior official stated that the agency could better support tribes that choose to retain their SORNA implementation authority, and attributed the agency’s challenges in providing this assistance to several factors. First, according to the senior BIA official, tribes experience high staff turnover, and incoming staff are often unaware of assistance that BIA may have offered in the past. Therefore, the BIA OJS senior official stated, it is important for BIA to reach out to tribes more than once. Second, the BIA senior official said that not all BIA field officers have received training on SORNA or on how to assist tribes with SORNA implementation. BIA officials said that the agency is working on providing training to its officers so that they have the base knowledge needed to assist tribes with SORNA implementation. This training, BIA officials added, will be open to all tribal police chiefs and mandatory for BIA police chiefs. Third, because BIA conducted outreach to 33 and not all 164 tribes that retained their authority to implement SORNA, BIA may not be aware of the needs of the tribes it has not contacted. BIA officials acknowledged that they could expand their outreach to include all tribes that retained their authority to implement SORNA, including tribes for which the agency does not provide direct services.
Taking steps to ascertain what, if any, resource or other needs all tribes that have not implemented SORNA may have could better support the tribes’ efforts to substantially implement the act, and thereby better ensure monitoring of convicted sex offenders on tribal lands. Those states with tribes that are not implementing SORNA—that is, tribes that are not eligible to implement SORNA and tribes whose implementation authority was delegated to a state—reported that the states are including convicted sex offenders on these tribes’ lands in the states’ own sex offender registration and notification systems. However, states are not consistently notifying these tribes about registered sex offenders who plan to live, work, or attend school on tribal lands upon release from state prison—similar to the problem we discussed earlier that tribes that retained their implementation authority experienced. As a result, some ineligible and delegated tribes are unaware of convicted sex offenders on their lands, in which case they are not able to take actions they deem appropriate to ensure public safety, such as banning certain sex offenders from living on their land. Under SORNA, states are responsible for registering convicted sex offenders who live, work, or attend school on the territory of tribes that have not retained SORNA implementation authority, which are ineligible tribes subject to state criminal jurisdiction under 18 U.S.C. § 1162, as well as tribes whose SORNA functions have been delegated to a state. We found that states have incorporated ineligible and delegated tribes into state sex offender registries, as required. With respect to ineligible tribes, we interviewed sex offender registry and SORNA officials in the 6 states where these tribes are located, and they told us that they include convicted sex offenders who live, work, or attend school on tribal lands in the state sex offender registry and had mostly experienced no challenges in doing so. 
Tribal officials from 7 of 8 ineligible tribes we interviewed confirmed that the states had incorporated convicted sex offenders who live on their lands into the state sex offender registry. An official from the remaining ineligible tribe was not aware of any efforts by the state to include sex offenders who live, work, or attend school on the tribe’s land in the state’s sex offender registration system. Officials from 5 of the 6 states where we conducted interviews also reported that their states had experienced no challenges with incorporating convicted sex offenders from ineligible tribes into the state sex offender registry. These state officials reported that for registration purposes, convicted sex offenders on tribal land are required to report to the tribe, a nearby county, or a city to register. Officials from 3 of the 5 states added that they treat tribes like any other local jurisdiction. Sex offender registry officials we interviewed from the sixth state said that many tribes in their state are located in remote areas, which poses a challenge for convicted sex offenders located on tribal lands who must travel to the nearest county to register. The remote location of these tribes also makes it difficult for county law enforcement to monitor convicted sex offenders to ensure that they are complying with the terms of their registration. According to tribal and sex offender registry officials from this state, the state has tried to address the location challenge by giving offenders who live in remote areas the choice to register in person, by mail, or by fax, and by having the tribal community complete forms to verify the addresses of tribal members who are registered sex offenders.
With respect to delegated tribes, SORNA and sex offender registry officials from 10 of the 12 states with these tribes that provided written responses to our questions reported that the states incorporate or expect to incorporate convicted sex offenders from delegated tribes into their state sex offender registry, as required. However, officials from 5 of the 12 states reported difficulties doing so. These included contacting but not receiving a response from tribes and resolving differing requirements between state laws and SORNA regarding registration of sex offenders and community notification. Nevertheless, we found that states were able to overcome these challenges and register and monitor offenders, as required. We reported earlier in this report that some states are not notifying tribes that retained their SORNA implementation authority following registration of a sex offender who plans to live, work, or attend school on tribal land upon release from state prison. Similarly, sex offender registry officials we interviewed from 3 of the 6 states where ineligible tribes are located told us that they do not notify ineligible tribes, and officials from 7 of the 12 states with delegated tribes that responded to our inquiry reported that they also do not notify tribes. Officials from the remaining 3 states where ineligible tribes are located were unsure or did not state whether they notified ineligible tribes, while officials from the remaining 5 states with delegated tribes said that they provide tribes with this information. The specific SORNA provision that requires such notification to ineligible and delegated tribes is section 121(b)(2).
It states that immediately after a convicted sex offender registers or updates a registration, an appropriate official in the registration jurisdiction—in this instance, the state—shall provide the information in the registry (other than information exempted from disclosure by the Attorney General) about that offender to all appropriate law enforcement agencies, as well as schools and public housing agencies, in each area where the offender resides, works, or attends school. Therefore, states should notify all ineligible and delegated tribes that have a law enforcement agency, a school, or a public housing agency when a convicted sex offender registers with the state and plans to live, work, or attend school on tribal land. Of the 50 delegated tribes, 22 have a police department, at least 10 have schools, and 38 have public housing. Similarly, of the 353 ineligible tribes, 17 have their own police department, at least 165 have schools, and 323 have public housing. Yet, states are not consistently notifying these tribes about sex offenders who were released from state prison and registered or updated a registration with the state indicating that they will be living, working, or attending school on tribal lands, as SORNA requires. Without such notification, tribes may be unaware, as noted earlier in this report, of the presence of convicted sex offenders on their lands and may be unable to enforce tribal laws and ordinances related to sex offenders. States are not providing notification to ineligible and delegated tribes that have police departments for a couple of reasons. First, 1 state has determined that tribal law enforcement agencies do not meet the state’s definition of a law enforcement agency, in which case the state would not provide this notice. 
Second, in the absence of state laws or policies to notify tribes, some states let the counties decide on their own whether or not to notify tribal law enforcement, but as we reported earlier in this report, this could be a concern if the tribal and county law enforcement officials do not have a good working relationship. With respect to the requirement that states notify schools and public housing agencies, the National Guidelines indicate that it is potentially challenging for states to proactively notify schools and public housing agencies because of problems such as identifying comprehensive lists of recipients and maintaining up-to-date contact information for each school and public housing agency. For example, Oklahoma has approximately 1,800 public schools. SMART officials stated that California and Alaska, which have the vast majority of ineligible tribes, might find it particularly challenging. Therefore, the National Guidelines allow states to satisfy the requirement to immediately notify schools and public housing agencies, as well as other community and social service organizations, by posting information on the state’s sex offender public website within 3 days of the offender’s registration or update, provided that the website includes an automatic notification function whereby requesters may receive e-mails when new information pertaining to a ZIP code or geographic radius area is available. However, this poses challenges. First, tribes can only search by name, ZIP code, county, city, or geographic radius, and as we discussed previously, it would be difficult for the tribe to know whether a convicted sex offender who lives within the same ZIP code as the tribe actually lives on tribal land. Second, some states have opted not to include all convicted sex offenders on their public websites, as SORNA allows. For example, jurisdictions can exclude information about tier I offenders whose victims were not minors. 
Also, with regard to tribes, an alternative to directly notifying the school or public housing agency located on tribal land could be for the state to notify the tribal council or another entity that the tribe deems appropriate. Given the relatively small number of tribes in most states, the number of tribes is likely to be much lower than the number of public housing agencies and schools, making it easier to obtain contact information for tribes and notify them of released offenders. For example, Nevada, the state with the greatest number of delegated tribes, has 673 public schools but only 11 delegated tribes. States with ineligible tribes, however, may have a larger number of tribes and find it more difficult to contact each tribe. For example, Alaska and California in particular could find it difficult, considering that 226 and 108 ineligible tribes, respectively, have territory in these states. According to the SMART Office, it has taken steps to inform both implementing and nonimplementing states about tribes whose authority has been delegated to the state, and has provided states with points of contact for these tribes. However, the SMART Office has not explicitly advised states, as part of these efforts, to notify delegated tribes about registered sex offenders who intend to live, work, or attend school on tribal land upon release from state prison. Given that the SMART Office conducts annual compliance reviews of implementing states and conducts ongoing outreach to implementing as well as nonimplementing states on a range of SORNA issues, the office is well positioned to provide explicit guidance to states regarding the necessary steps to implement SORNA’s notification requirement for ineligible and delegated tribes. Also, through its outreach and compliance reviews, the SMART Office has the opportunity to consult with states on steps they can take to overcome some of the barriers that make it difficult for states to provide these notifications. 
Such efforts by the SMART Office would enable ineligible and delegated tribes to take actions they deem necessary to ensure public safety when a convicted sex offender is released from state prison and has registered or updated a registration indicating that he or she will be living, working, or attending school on their lands. SORNA sought to enhance public safety, in part, by requiring eligible tribes to develop their own sex offender registration and notification systems to help address concerns about convicted sex offenders avoiding registration by moving to tribal lands. Considering that many tribes did not have their own sex offender registration systems prior to SORNA, tribes faced a number of challenges in implementing the act, such as lack of access to federal criminal justice databases and difficulty covering implementation costs. Several agencies provided assistance to help tribes address these challenges, but according to tribes that responded to our survey, additional assistance and communication, particularly from BIA, would be beneficial. Also, as part of the SMART Office’s annual compliance review of implementing states, and its ongoing outreach with implementing and nonimplementing states, determining whether states are notifying tribes that retained their authority about convicted sex offenders who initially register or update a registration indicating that they will be living, working, or attending school on tribal land upon release from state prison could help ensure that tribes are aware of and can monitor convicted sex offenders in their communities. Further, taking additional actions to ensure that states notify all tribes, including ineligible and delegated tribes, would better position tribes to enforce their own laws and policies regarding sex offenders who live, work, or attend school on their land. 
Further, these actions would enhance DOJ’s initiative to increase public safety in tribal communities by providing the federal support these communities need to overcome the unique law enforcement challenges they face, which, as noted by the Attorney General when he launched the initiative in 2009, had resulted in unacceptable rates of violence against women and children in tribal communities. To help ensure that tribes that retained their SORNA implementation authority, as well as ineligible and delegated tribes, are notified when convicted sex offenders who intend to live, work, or attend school on tribal land initially register or update a registration, we recommend that the Assistant Attorney General for the Office of Justice Programs direct the SMART Office to take the following two actions:

Amend its substantial implementation assurance guidance to require its evaluators to determine whether substantially implemented states are notifying tribes that retained their authority to implement SORNA, as well as ineligible and delegated tribes, as required, and use this information in the office’s ongoing outreach to the states.

Encourage implementing as well as nonimplementing states, as part of the office’s ongoing outreach to these states, to develop policies and procedures to notify tribes that retained their authority to implement SORNA, as well as ineligible and delegated tribes, about convicted sex offenders who registered or updated a registration with the state—either prior or subsequent to their release from state prison—and indicated that they will be living, working, or attending school on tribal lands. As part of this outreach, and to help states overcome any barriers to notifying tribes, educate state officials on best practices that the SMART Office has identified for overcoming those barriers. 
To help determine what additional assistance tribes may need to help them toward SORNA implementation, we recommend that the Director of the Bureau of Indian Affairs direct each Office of Justice Services district office to contact the tribes within its district that have retained their implementation authority to determine if the tribes would like BIA to assist them with SORNA implementation efforts. For tribes that would like assistance from BIA, where possible, BIA should provide this assistance or refer the tribe to other resources. We provided a draft of this report to the Department of the Interior (DOI) and DOJ for review and comment. We received written comments from DOI via e-mail and from DOJ in a formal letter, which is reproduced in full in appendix IV. DOI and DOJ agreed with our recommendations in their comments. We also received technical comments from DOJ, which are incorporated throughout our report as appropriate. In comments that DOI provided via e-mail on October 21, 2014, the department agreed with our recommendation that BIA direct each OJS district office to contact the tribes within its district that have retained their implementation authority to determine if the tribes would like BIA to assist them with SORNA implementation efforts. DOI noted that the department plans to send correspondence to each of these tribes to determine if the tribes need BIA assistance with SORNA implementation and will provide the necessary assistance to these tribes, considering the department’s authority, access, and resources. Further, DOI stated that it will reach out to states and other entities outside of OJS’s control that may be posing challenges to tribes’ ability to comply with SORNA. We believe that these are beneficial steps that, once fully implemented, would address our recommendation and help address resource or other needs that have made it difficult for some tribes to substantially implement the act. 
In addition to taking actions to address our recommendation, DOI also identified additional efforts that BIA OJS has under way to help ensure tribes are able to implement SORNA. For example, BIA OJS has renewed its relationship with the SMART Office to receive internal policy and procedure updates, as well as training opportunities for OJS employees on SORNA. DOJ agreed with our recommendation that the SMART Office amend its substantial implementation assurance guidance to require its evaluators to determine whether substantially implemented states are notifying tribes that retained their authority to implement SORNA, as well as ineligible and delegated tribes. The department also noted in its response that the SMART Office will continue to propose individualized solutions for jurisdictions with legislation or public policy practices that inhibit or prohibit working with tribal law enforcement. DOJ also agreed with our recommendation to encourage implementing as well as nonimplementing states to develop policies and procedures to notify all tribes about sex offenders who would be living, working, or attending school on tribal lands, either prior or subsequent to their release from state prison. In its letter, DOJ noted that the SMART Office will request in future substantial implementation reviews that states provide information on how state incarceration facilities notify tribes when sex offenders are released and have indicated that they will be living, working, or attending school on tribal land. Accordingly, the SMART Office will use the information solicited from these reviews in its ongoing outreach to implementing as well as nonimplementing states to help improve their communication with tribes. 
DOJ stated that to the extent possible, the department will encourage the sharing of sex offender information through the SORNA Exchange Portal—the tool that facilitates the exchange of information about sex offenders moving to other SORNA jurisdictions—as well as through extensive outreach and education to states on how they can use public notification to share information with ineligible and delegated tribes that do not have access to this portal. We believe that, once implemented, these actions would address our recommendations and enhance tribes’ ability to receive the notification they need in order to enforce their laws pertaining to sex offenders. Although DOJ agreed with our recommendations, the department expressed concerns that our report is not responsive to the primary challenge that tribes that responded to our survey identified—that is, an inability to submit sex offender information to federal criminal databases. However, we noted in this report that at the time of our review, DOJ and BIA had established a Tribal Public Safety Working Group to identify which tribes lack access to federal databases and to work on the appropriate solution based on the unique circumstances of each tribe. We believe that it is too soon to evaluate the effectiveness of the working group’s current efforts, given that the group is in its early planning phase. We are sending copies of this report to the appropriate congressional committees, the Attorney General, the Secretary of Interior, and other interested parties. This report is also available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff acknowledgments are provided in appendix IV. 
Our objectives for this report were to address the following questions: To what extent have eligible tribes retained their authority to implement the Sex Offender Registration and Notification Act (SORNA), and for those that did, what is their implementation status? What implementation challenges, if any, have tribes that retained their authority to implement SORNA reported, and what steps have federal agencies and others taken or could take to address these challenges? To what extent are states incorporating ineligible and delegated tribes into their state sex offender registration and notification systems? To address our first two objectives, we surveyed tribal officials whom the Office of Sex Offender Sentencing, Monitoring, Apprehending, Registering, and Tracking (SMART Office) identified as being responsible for implementing the act in the 164 tribes that retained their authority to implement SORNA. These officials included tribal chiefs as well as representatives of tribal police departments and court systems. We used the survey to identify the implementation status of tribes that retained their authority to implement SORNA; the challenges they face with implementation; and the steps that tribes, as well as state, local, and federal government entities, are taking or could take to address implementation challenges. Additionally, we used the survey to obtain the tribal officials’ perspectives on the SMART Office’s guidance and the criteria it used to determine whether or not a jurisdiction has substantially implemented SORNA. We collaborated closely with a GAO social science survey specialist and conducted an expert review as well as pretests with 4 tribes that retained their authority to implement SORNA; doing so helped us to develop and refine our survey questions, clarify any ambiguous portions of the survey, and identify any potentially biased questions. We launched our web-based survey on April 1, 2014, and closed out the survey on June 30, 2014. Login information for the web-based survey was e-mailed to all participants, and we sent three follow-up e-mail messages to all nonrespondents and contacted the remaining nonrespondents by telephone. We received responses from 80 percent (129 of 161) of all tribes surveyed. Not all survey respondents provided answers to all survey questions. Because the survey was conducted with all tribes that retained their authority to implement SORNA, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce nonsampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, or the types of people who do not respond can introduce unwanted variability into the survey results. We included steps in both the data collection and data analysis stages to minimize such nonsampling errors. We also made multiple contact attempts with nonrespondents during the survey by e-mail and telephone. Since this was a web-based survey, respondents entered their answers directly into the electronic questionnaire, eliminating the need to key data into a database and minimizing error. We examined the survey results and performed computer analyses to identify inconsistencies and other indications of error. A second independent analyst checked the accuracy of all computer analyses. To address our first objective, we also reviewed the SMART Office’s data on eligible tribes’ implementation status to identify (1) tribes that have retained their authority to implement SORNA, (2) tribes that have delegated their implementation authority to a state, and (3) tribes whose implementation authority was delegated to a state by the SMART Office. 
To assess the reliability of the SMART Office’s data on tribes’ implementation status, we (1) obtained written responses to our questions on how SMART Office representatives who use and maintain the data ensure the data’s reliability; (2) checked the data for missing information and obvious errors; and (3) compared the data against tribes’ responses to our survey questions regarding their implementation status and, where applicable, interviewed SMART Office officials to determine the reason(s) for and resolve any differences. We found the data to be sufficiently reliable for the purpose of identifying tribes’ SORNA implementation status. We also used our survey to determine how long it took tribes to prepare and submit a complete implementation package for SMART Office review, the reasons why some tribes have not submitted a complete package, and when the latter anticipate submitting a complete package. To address our second objective, we interviewed Department of Justice (DOJ) headquarters officials from the SMART Office, the Office of Tribal Justice (OTJ), the U.S. Marshals Service (USMS), and the Federal Bureau of Investigation (FBI), as well as Bureau of Indian Affairs (BIA) officials within the Department of the Interior, to determine the implementation challenges tribes have reported and the steps the agencies, the tribes, and others have taken to address the challenges. In addition, we interviewed FBI Criminal Justice Information Services Division (CJIS) officials to determine CJIS policies and procedures for granting tribes access to federal criminal justice databases, which are required for SORNA implementation. We also interviewed state, tribal, and local law enforcement officials from 5 states—Florida, Michigan, Nevada, New York, and Oklahoma. 
We selected these 5 states because each contains territory of tribes that retained their authority to implement SORNA and because the states vary with regard to their SORNA implementation status and geographic diversity. In each of the 5 states, we interviewed CJIS System Agency (CSA) officials, the designated state SORNA contact, or state sex offender registry officials to obtain information about state laws, policies, and procedures for granting tribes access to federal criminal justice databases. We also interviewed tribal leaders from 9 eligible tribes in these 5 states—5 of these tribes retained their authority to implement SORNA. We selected the 9 tribes based on factors such as the tribes’ implementation status, whether the tribes are direct service tribes, and whether the tribes have agreements for SORNA implementation with any local law enforcement agencies, as well as the challenges that the SMART Office and others reported that these tribes face with implementation. In addition to the 5 states, we contacted officials in 20 states to determine whether these states require their state departments of corrections or registries to notify tribes that have retained their authority to implement SORNA upon releasing a sex offender who plans to live, work, or attend school on tribal lands. Officials from 15 of the 20 states responded to our request. These 15 states plus the 5 we visited included states identified by tribes that responded to our survey as those that do not notify tribes upon releasing sex offenders who plan to live, work, or attend school on tribal lands, as well as states that have substantially implemented SORNA and also have tribes that have retained their authority to implement SORNA. 
Finally, we interviewed officials from four local law enforcement agencies in proximity to the selected tribes and with whom the tribes had an agreement for SORNA implementation, as well as officials from the two BIA Office of Justice Services (OJS) regional districts with the most direct service tribes that are implementing SORNA, to determine what, if any, assistance they have provided the tribes with SORNA implementation. Although the perspectives we obtained from our interviews with these state, BIA, and tribal officials are not generalizable, they provided insights regarding the challenges that tribes face with implementing SORNA and actions that have been or could be taken to address the challenges. We also included questions in our survey of tribes that retained their authority to implement SORNA about the types and extent of challenges the tribes experienced with SORNA implementation; steps the tribes are taking to address the challenges; and the funding and other assistance the tribes have received or could receive from the SMART Office, USMS, BIA, and state or local law enforcement agencies to assist them with implementing the act. In addition, we obtained information from our survey that enabled us to identify tribes that have the required access to the National Crime Information Center (NCIC) and the National Sex Offender Registry (NSOR), and the reasons why some tribes do not have this access. To determine if any improvements are needed to address any of the challenges tribes identified with SORNA implementation, we compared our survey results against SORNA provisions as well as the National Guidelines for Sex Offender Registration and Notification. To address our third objective, we interviewed state SORNA and sex offender registry officials from all 6 states where ineligible tribes are located as well as 8 ineligible tribes—at least 1 each from these 6 states. 
The information from these interviews enabled us to determine if these states include the ineligible tribes in their state SORNA registration and notification systems, as SORNA requires, as well as to identify the challenges, if any, the states and tribes face with having the states implement SORNA on the tribes’ behalf. We selected the tribes based on their SORNA implementation status, whether they submitted a resolution to implement SORNA in spite of their ineligibility, and the challenges the tribes reportedly faced with SORNA implementation. We also interviewed tribal leaders from 4 tribes that opted out of SORNA and delegated their registration and notification functions to the state, and solicited responses to written questions from sex offender registry officials in the 15 states to which tribes’ authority has been delegated, to determine how states are incorporating delegated tribes into their state sex offender registration and notification systems, as SORNA requires. To supplement the information we received from our interviews as well as our survey of tribes that retained their SORNA authority, we interviewed officials from three relevant national associations—the National Congress of American Indians, the International Association of Chiefs of Police, and the National Criminal Justice Association—to obtain their perspectives on SORNA eligibility criteria, the SMART Office’s delegation criteria and process, the challenges that tribes face with implementation, and the actions that the SMART Office and others have taken to address these challenges. We selected these associations because they represent the interests of tribal communities or state and local law enforcement agencies that assist jurisdictions, including tribes, with SORNA implementation. We conducted this performance audit from September 2013 to November 2014 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Office of Sex Offender Sentencing, Monitoring, Apprehending, Registering, and Tracking (SMART Office) has developed the Sex Offender Registration and Notification Act (SORNA) Substantial Implementation Checklist tool to be used by jurisdictions in developing, organizing, and submitting a substantial implementation package for review. While not intended to be a definitive guide to SORNA’s full implementation requirements, the SORNA checklist is organized into 14 sections covering the major requirements of the law, as shown in table 1.

Appendix III: Extent to Which Eligible Tribes Have Implemented the Sex Offender Registration and Notification Act (SORNA)

According to the Office of Sex Offender Sentencing, Monitoring, Apprehending, Registering, and Tracking (SMART Office), the 214 tribes that are eligible to implement SORNA are located in 33 states; of these states, 11 have substantially implemented SORNA. As of August 2014, the SMART Office has determined that 164 tribes have retained their authority to implement SORNA, while the remainder did not retain their authority because they either elected to delegate their authority to a state (24 tribes) or the SMART Office delegated their authority to a state (26 tribes). Of the tribes that have retained their authority to implement the act, 71 tribes have substantially implemented SORNA; 70 tribes have submitted a complete package, but the SMART Office has not yet made a determination; 22 tribes have not submitted a complete package; and 1 tribe has not substantially implemented the act. 
In addition to the contact named above, Kristy Love, Assistant Director, and Edith Sohna, Analyst-in-Charge, managed this engagement. Orlando Copeland, Michael Lenington, Alicia Loucks, and Leah Marshall made significant contributions to the report. Frances Cook, Katherine Davis, Michele Fejfar, Eric Hauswirth, Kirsten Lauber, Sasan J. Najmi, and Jerome Sandau also provided valuable assistance.
According to DOJ, tribal nations are disproportionately affected by violent crimes and sex offenses in particular. In 2006, Congress passed SORNA, which introduced new sex offender registration and notification standards for states, territories, and eligible tribes. The act made special provisions for eligible tribes to elect either to act as registration jurisdictions or to delegate SORNA functions to the states in which they are located. GAO was asked to assess the status of tribes' efforts to implement SORNA and the challenges they face in doing so. This report addresses, among other things, (1) the extent to which eligible tribes have retained their authority to implement SORNA and, for those that did, their implementation status; and (2) the implementation challenges that tribes that retained their authority reported, and the steps federal agencies have taken or could take to address these challenges. GAO reviewed data on eligible tribes' implementation status; conducted a survey of tribes that retained their authority; and interviewed federal, state, and local officials. Most eligible tribes have retained their Sex Offender Registration and Notification Act (SORNA) implementation authority and have either substantially implemented the act or are in the process of doing so. As of August 2014, 77 percent (164 of the 214) of eligible tribes had retained their implementation authority. Tribes that lacked the resources, among other factors, to implement SORNA either delegated their own authority, or the SMART Office delegated the tribe's authority, to a state. 
According to the Office of Sex Offender Sentencing, Monitoring, Apprehending, Registering, and Tracking (SMART Office)—the office SORNA established within the Department of Justice (DOJ) to administer and assist jurisdictions with implementing the law—43 percent (71 of 164) of tribes that retained their authority to implement SORNA have substantially implemented the act; the SMART Office has not yet made a final determination on 43 percent (70 of 164); and 13 percent (22 of 164) have not submitted complete packages. The SMART Office determined that 1 tribe has not yet substantially implemented SORNA. In GAO's survey of tribes that retained their authority, the four most frequently reported implementation challenges included inability to submit convicted sex offender information to federal databases, lack of notification from state prisons upon the release of sex offenders, lack of staff, and inability to cover the costs of SORNA implementation. Federal agencies have taken steps to address these challenges, but more could be done. For example, DOJ and the Bureau of Indian Affairs (BIA) within the Department of the Interior (Interior) have formed a working group to better coordinate federal efforts to address tribes' difficulties submitting convicted sex offender information to federal databases. However, some states have not notified tribes—those that retained their SORNA authority, as well as ineligible and delegated tribes—when sex offenders who will be or have been released from state prison register with the state and indicate that they intend to live, work, or attend school on tribal land, as SORNA requires; and while the SMART Office has taken some actions, more could be done to encourage states to provide notification to tribes. Such notification would help tribes identify and monitor sex offenders who live on their lands and enforce tribal laws pertaining to sex offenders. The SMART Office, U.S. 
Marshals Service, and BIA provided financial assistance, equipment, and staff to help tribes address their resource needs. However, BIA offered assistance only to tribes for which BIA provides direct law enforcement services, which account for only 20 percent of the tribes that retained their SORNA implementation authority, even though BIA is responsible for assisting and advising all federally recognized tribes regarding their law enforcement and public safety needs. Taking steps to ascertain what, if any, resource or other needs all tribes that retained their authority may have could better position BIA to support the tribes' efforts to implement the act. GAO recommends that, among other things, the SMART Office encourage states to notify tribes about offenders who plan to live, work, or attend school on tribal land upon release from prison. GAO also recommends that BIA reach out to all tribes that retained their authority to determine what, if any, assistance they would like from BIA. DOJ and Interior concurred.
Since DHS began operations in March 2003, it has developed and implemented key policies, programs, and activities for implementing its homeland security missions and functions that have created and strengthened a foundation for achieving its potential as it continues to mature. We reported in our assessment of DHS’s progress and challenges 10 years after the September 11 attacks, as well as in our more recent work, that the department has implemented key homeland security operations and achieved important goals in many areas. These included developing strategic and operational plans across its range of missions; hiring, deploying, and training workforces; establishing new, or expanding existing, offices and programs; and developing and issuing policies, procedures, and regulations to govern its homeland security operations. For example: DHS successfully hired, trained, and deployed workforces, including the federal screening workforce to assume screening responsibilities at airports nationwide, and about 20,000 agents to patrol U.S. land borders. DHS also created new programs and offices, or expanded existing ones, to implement key homeland security responsibilities, such as establishing the National Cybersecurity and Communications Integration Center to, among other things, coordinate the nation’s efforts to prepare for, prevent, and respond to cyber threats to systems and communications networks. DHS issued policies and procedures addressing, among other things, the screening of passengers at airport checkpoints, inspecting travelers seeking entry into the United States, and assessing immigration benefit applications and processes for detecting possible fraud. DHS issued the National Response Framework, which outlines disaster response guiding principles, including major roles and responsibilities of government, nongovernmental organizations, and private sector entities for response to disasters of all sizes and causes. 
After initial difficulty in fielding the program, DHS developed and implemented Secure Flight, a passenger prescreening program through which the federal government now screens all passengers on all commercial flights to, from, and within the United States. In fiscal year 2011, DHS reported data indicating it had met its interim goal to secure the land border with a decrease in apprehensions. Our data analysis showed that apprehensions decreased within each southwest border sector and by 68 percent in the Tucson sector from fiscal years 2006 through 2011. Border Patrol officials attributed this decrease in part to changes in the U.S. economy and achievement of Border Patrol strategic objectives. We reported in September 2012 that DHS, through its component agencies, particularly the Coast Guard and U.S. Customs and Border Protection (CBP), has made substantial progress in implementing various programs that, collectively, have improved maritime security. For example, in November 2011, we reported that the Coast Guard's risk assessment model generally met DHS criteria for being complete, reproducible, documented, and defensible. Coast Guard units throughout the country use this risk model to improve maritime domain awareness and better assess security risks to key maritime infrastructure. DHS has taken important actions to conduct voluntary critical infrastructure and key resources (CIKR) security surveys and vulnerability assessments, provide information to CIKR stakeholders, and assess the effectiveness of security surveys and vulnerability assessments. DHS also developed the Arizona Border Surveillance Technology Plan (the Plan), with estimated costs of $1.5 billion. To develop the Plan, DHS conducted an analysis of alternatives and outreach to potential vendors, and took other steps to test the viability of the current system. 
However, DHS has not documented the analysis justifying the specific types, quantities, and deployment locations of border surveillance technologies proposed in the Plan, or defined the mission benefits or developed performance metrics to assess its implementation of the Plan. We are reviewing DHS's efforts to implement the Plan, and we expect to report on the results of our work later this year. DHS spent more than $200 million on advanced spectroscopic portals, used to detect smuggled nuclear or radiological materials, without issuing an accurate analysis of both the benefits and the costs—which we later estimated at over $2 billion—and a determination of whether additional detection capabilities were worth the additional costs. DHS subsequently canceled the advanced spectroscopic portals program as originally conceived. Each year DHS processes millions of applications and petitions for more than 50 types of immigrant- and nonimmigrant-related benefits for persons seeking to study, work, visit, or live in the United States, and for persons seeking to become U.S. citizens. DHS embarked on a major initiative in 2005 to transform its current paper-based system into an electronic account-based system that is to use electronic adjudication and account-based case management tools, including tools that are to allow applicants to apply online for benefits. However, DHS did not consistently follow the acquisition management approach outlined in its management directives in developing and managing the program. The lack of defined requirements, acquisition strategy, and associated cost parameters contributed to program deployment delays of over 2 years. In addition, DHS estimates that through fiscal year 2011, it spent about $703 million, about $292 million more than the original program baseline estimate. 
We found that DHS could reduce the costs to the federal government related to major disasters declared by the President by updating the principal indicator on which disaster funding decisions are based and better measuring a state’s capacity to respond without federal assistance. From fiscal years 2004 through 2011, the President approved 539 major disaster declarations at a cost of $78.7 billion. Our work on DHS’s mission functions and crosscutting issues has identified three key themes—leading and coordinating the homeland security enterprise, implementing and integrating management functions for results, and strategically managing risks and assessing homeland security efforts—that have impacted the department’s progress since it began operations. As these themes have contributed to challenges in the department’s management and operations, addressing them can result in increased efficiencies and effectiveness. For example, DHS can help reduce cost overruns and performance shortfalls by strengthening the management of its acquisitions, and reduce inefficiencies and costs for homeland security by improving its research and development (R&D) management. These themes provide insights that can inform DHS’s efforts as it works to implement its missions within a dynamic and evolving homeland security environment. DHS made progress and has had successes in all of these areas, but our work found that these themes have been at the foundation of DHS’s implementation challenges, and need to be addressed from a department-wide perspective to effectively and efficiently position the department for the future. DHS is one of a number of entities with a role in securing the homeland and has significant leadership and coordination responsibilities for managing efforts across the homeland security enterprise. 
To satisfy these responsibilities, it is critically important that DHS develop, maintain, and leverage effective partnerships with its stakeholders while at the same time addressing DHS-specific responsibilities in satisfying its missions. DHS has made important strides in providing leadership and coordinating efforts across the homeland security enterprise, but needs to take additional actions to forge effective partnerships and strengthen the sharing and utilization of information. For example, DHS has improved coordination and clarified roles with state and local governments for emergency management. DHS also strengthened its partnerships and collaboration with foreign governments to coordinate and standardize security practices for aviation security. The department has further demonstrated leadership by establishing a governance board to serve as the decision-making body for DHS information-sharing issues. The board has enhanced collaboration among DHS components and identified a list of key information-sharing initiatives. Although DHS has made important progress, more work remains. We designated terrorism-related information sharing as high risk in 2005 because the government faces significant challenges in analyzing and disseminating this information in a timely, accurate, and useful manner. In our most recent high-risk update, we reported that the federal government’s leadership structure is committed to enhancing the sharing and management of terrorism-related information and has made significant progress defining a governance structure to implement the Information Sharing Environment—an approach that is intended to serve as an overarching solution to strengthening sharing. 
However, we also reported that the key departments and agencies responsible for information-sharing activities, including DHS, need to continue their efforts to share and manage terrorism-related information by, among other things, identifying technological capabilities and services that can be shared across departments and developing metrics that measure the performance of, and results achieved by, projects and activities. DHS officials explained that its information-sharing initiatives are integral to its mission activities and are funded through its components’ respective budgets. However, in September 2012 we reported that five of DHS’s top eight priority information-sharing initiatives faced funding shortfalls, and DHS had to delay or scale back at least four of them. Following its establishment, DHS focused its efforts primarily on implementing its various missions to meet pressing homeland security needs and threats, and less on creating and integrating a fully and effectively functioning department. As the department matured, it has put into place management policies and processes and made a range of other enhancements to its management functions, which include acquisition, information technology, financial, and human capital management. However, DHS has not always effectively executed or integrated these functions. While challenges remain for DHS to address across its range of missions, the department has made considerable progress in transforming its original component agencies into a single cabinet-level department and positioning itself to achieve its full potential. Important strides have also been made in strengthening the department’s management functions and in integrating those functions across the department, particularly in recent years. However, continued progress is needed in order to mitigate the risks that management weaknesses pose to mission accomplishment and the efficient and effective use of the department’s resources. 
In particular, the department needs to demonstrate continued progress in implementing and strengthening key management initiatives and addressing the corrective actions and outcomes that GAO identified and DHS committed to taking to address this high-risk area. For example: Acquisition management: Although DHS has made progress in strengthening its acquisition function, most of the department's major acquisition programs continue to cost more than expected, take longer to deploy than planned, or deliver less capability than promised. We identified 42 programs that experienced cost growth, schedule slips, or both, with 16 of the programs' costs increasing from a total of $19.7 billion in 2008 to $52.2 billion in 2011—an aggregate increase of 166 percent. We reported in September 2012 that DHS leadership has authorized and continued to invest in major acquisition programs even though the vast majority of those programs lack foundational documents demonstrating the knowledge needed to help manage risks and measure performance. We recommended that DHS modify acquisition policy to better reflect key program and portfolio management practices and ensure acquisition programs fully comply with DHS acquisition policy. DHS concurred with our recommendations and reported taking actions to address some of them. Information technology management: DHS has defined and begun to implement a vision for a tiered governance structure intended to improve information technology (IT) program and portfolio management, which is generally consistent with best practices. However, the governance structure covers less than 20 percent (about 16 of 80) of DHS's major IT investments and 3 of its 13 portfolios, and the department has not yet finalized the policies and procedures associated with this structure. In July 2012, we recommended that DHS finalize the policies and procedures and continue to implement the structure. 
DHS agreed with these recommendations and estimated it would address them by September 2013. Financial management: DHS has, among other things, received a qualified audit opinion on its fiscal year 2012 financial statements. DHS is working to resolve the audit qualification to obtain an unqualified opinion for fiscal year 2013. However, DHS components are currently in the early planning stages of their financial systems modernization efforts, and until these efforts are complete, their current systems will continue to inadequately support effective financial management, in part because of their lack of substantial compliance with key federal financial management requirements. Without sound controls and systems, DHS faces challenges in obtaining and sustaining audit opinions on its financial statements and internal controls over financial reporting, as well as ensuring its financial management systems generate reliable, useful, and timely information for day-to-day decision making. Human capital management: In December 2012, we identified several factors that have hampered DHS's strategic workforce planning efforts and recommended, among other things, that DHS identify and document additional performance measures to assess workforce planning efforts. DHS agreed with these recommendations and stated that it plans to take actions to address them. In addition, DHS has made efforts to improve employee morale, such as taking actions to determine the root causes of morale problems. Despite these efforts, however, federal surveys have consistently found that DHS employees are less satisfied with their jobs than the government-wide average. In September 2012, we recommended, among other things, that DHS improve its root cause analysis efforts of morale issues. DHS agreed with these recommendations and noted actions it plans to take to address them. 
Forming a new department while working to implement statutorily mandated and department-initiated programs and responding to evolving threats was, and is, a significant challenge facing DHS. Key threats, such as attempted attacks against the aviation sector, have impacted and altered DHS's approaches and investments, such as changes DHS made to its processes and technology investments for screening passengers and baggage at airports. It is understandable that these threats had to be addressed immediately as they arose. However, limited strategic and program planning by DHS, as well as assessment to inform approaches and investment decisions, has contributed to programs not meeting strategic needs or not doing so in an efficient manner. Further, DHS has made important progress in analyzing risk across sectors, but it has more work to do in using this information to inform planning and resource-allocation decisions. Risk management has been widely supported by Congress and DHS as a management approach for homeland security, enhancing the department's ability to make informed decisions and prioritize resource investments. Since DHS does not have unlimited resources and cannot protect the nation from every conceivable threat, it must make risk-informed decisions regarding its homeland security approaches and strategies. We reported in September 2011 that using existing risk assessment tools could assist DHS in prioritizing its Quadrennial Homeland Security Review (QHSR) implementation mechanisms. For example, examining the extent to which risk information could be used to help prioritize implementation mechanisms for the next QHSR could help DHS determine how to incorporate and use such information to strengthen prioritization and resource allocation decisions. DHS officials plan to implement a national risk assessment in advance of the next QHSR, which DHS anticipates conducting in fiscal year 2013. 
Our work has also found that DHS continues to miss opportunities to optimize performance across its missions due to a lack of reliable performance information or assessment of existing information; evaluation among possible alternatives; and, as appropriate, adjustment of programs or operations that are not meeting mission needs. For example, we reported in February 2013 that the government’s strategy documents related to Information Systems and the Nation’s Cyber Critical Infrastructure Protection included few milestones or performance measures, making it difficult to track progress in accomplishing stated goals and objectives. In addition, in September 2012, we reported that DHS had approved a third generation of BioWatch technology—to further enhance detection of certain pathogens in the air—without fully evaluating viable alternatives based on risk, costs, and benefits. As the department further matures and seeks to optimize its operations, DHS will need to look beyond immediate requirements; assess programs’ sustainability across the long term, particularly in light of constrained budgets; and evaluate trade-offs within and among programs across the homeland security enterprise. Doing so should better equip DHS to adapt and respond to new threats in a sustainable manner as it works to address existing ones. Given DHS’s role and leadership responsibilities in securing the homeland, it is critical that the department’s programs and activities are operating as efficiently and effectively as possible; are sustainable; and continue to mature, evolve, and adapt to address pressing security needs. Since it began operations in 2003, DHS has implemented key homeland security operations and achieved important goals and milestones in many areas. DHS has also made important progress in strengthening partnerships with stakeholders, improving its management processes and sharing of information, and enhancing its risk management and performance measurement efforts. 
Important strides have also been made in strengthening the department’s management functions and in integrating those functions across the department, particularly in recent years. Senior leaders at the department have also continued to demonstrate strong commitment to addressing the department’s management challenges across the management functions. These accomplishments are especially noteworthy given that the department has had to work to transform itself into a fully functioning cabinet department while implementing its missions—a difficult undertaking for any organization and one that can take years to achieve even under less daunting circumstances. Impacting the department’s efforts have been a variety of factors and events, such as attempted terrorist attacks and natural disasters, as well as new responsibilities and authorities provided by Congress and the administration. These events collectively have forced DHS to continually reassess its priorities and reallocate resources as needed, and have impacted its continued integration and transformation. Given the nature of DHS’s mission, the need to remain nimble and adaptable to respond to evolving threats, as well as to work to anticipate new ones, will not change and may become even more complex and challenging as domestic and world events unfold, particularly in light of reduced budgets and constrained resources. Our work has shown that to better position itself to address these challenges, DHS should place an increased emphasis on and take additional action in supporting and leveraging the homeland security enterprise; managing its operations to achieve needed results; and strategically planning for the future while assessing and adjusting, as needed, what exists today. DHS also needs to continue its efforts to address the associated high-risk areas that we have identified which have affected its implementation efforts. 
Addressing these issues will be critically important for the department to strengthen its homeland security programs and operations. DHS has indeed made significant strides in protecting the homeland, but has yet to reach its full potential. Chairman Duncan, Ranking Member Barber, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions you may have at this time. For further information regarding this testimony, please contact Cathleen A. Berrick at (202) 512-3404 or [email protected]. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony are Scott Behen, Adam Hoffman, and David Maurer. Key contributors for the previous work that this testimony is based on are listed within each individual product. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since the Department of Homeland Security (DHS) began operations in 2003, it has implemented key homeland security operations and achieved important goals and milestones in many areas to create and strengthen a foundation to reach its potential. As it continues to mature, however, more work remains for DHS to address gaps and weaknesses in its current operational and implementation efforts, and to strengthen the efficiency and effectiveness of those efforts. In its assessment of DHS's progress and challenges 10 years after the terrorist attacks of September 11, 2001, as well as its more recent work, GAO reported that DHS had, among other things, developed strategic and operational plans across its range of missions; established new, or expanded existing, offices and programs; and developed and issued policies, procedures, and regulations to govern its homeland security operations. However, GAO also identified that challenges remained for DHS to address across its missions. Examples of progress made and work remaining include the following: Aviation security. DHS developed and implemented Secure Flight, a program through which the federal government now prescreens all passengers on all commercial flights to, from, and within the United States. However, DHS did not validate the science supporting its behavior detection program before deploying behavior detection officers at airports, including determining whether such techniques could be successfully used to detect threats. Border security/immigration enforcement. DHS reported data indicating it had met its goal to secure the land border because of a decrease in apprehensions, attributed in part to changes in the U.S. economy and achievement of DHS strategic objectives. 
However, DHS has not developed a process to identify and analyze program risks, such as a process to evaluate prior and suspected cases of fraud, in its Student and Exchange Visitor Program, a program intended to, among other things, ensure that foreign students studying in the United States comply with the terms of their admission into the country. Emergency preparedness and response. DHS issued the National Response Framework, which outlines disaster response guiding principles. However, GAO reported that DHS could reduce the costs to the federal government related to major disasters declared by the President by updating the principal indicator on which disaster funding decisions are based and better measuring a state's capacity to respond without federal assistance. GAO has identified three key themes--leading and coordinating the homeland security enterprise, implementing and integrating management functions for results, and strategically managing risks and assessing homeland security efforts--that DHS needs to address from a departmentwide perspective to effectively and efficiently position the department for the future. DHS has made progress in all three areas by, among other things, providing leadership and coordination. However, DHS has continued to face challenges in all of these areas. For example, GAO reported that improving research and development could help DHS reduce, among other things, cost overruns and performance shortfalls by reducing inefficiencies and costs for homeland security. While this testimony contains no new recommendations, GAO has previously made about 1,800 recommendations to DHS designed to strengthen its programs and operations. DHS has addressed more than 60 percent of them, has efforts underway to address others, and has taken additional action to strengthen the department.
In the past year, Treasury has implemented a range of TARP programs to stabilize the financial system. As of September 11, 2009, it had disbursed just over $363 billion for TARP loans and equity investments (table 1). In addition to disbursements, participating institutions have paid Treasury billions of dollars in repurchases of preferred shares and warrants, dividend payments, and loan repayments. In particular, Treasury has received almost $7 billion in dividend payments, about $2.9 billion in warrant liquidations, and over $70 billion in repurchases from institutions participating in CPP, as of August 31, 2009.

Table 1. TARP Program Description and Total Disbursements, as of September 11, 2009 (dollars in billions)

Capital Purchase Program. To provide capital to viable banks through the purchase of preferred shares and subordinated debentures.

Targeted Investment Program. To foster market stability and thereby strengthen the economy by making case-by-case investments in institutions that Treasury deems are critical to the functioning of the financial system.

Capital Assistance Program. To restore confidence throughout the financial system that the nation's largest banking institutions have sufficient capital to cushion themselves against larger-than-expected future losses, and to support lending to creditworthy borrowers.

Systemically Significant Failing Institutions. To provide stability in financial markets and avoid disruptions to the markets from the failure of a systemically significant institution. Treasury determines participation in this program on a case-by-case basis.

Asset Guarantee Program. To provide government assurances for assets held by financial institutions that are viewed as critical to the functioning of the nation's financial system.

Automotive Industry Financing Program. To prevent a significant disruption of the American automotive industry.

Home Affordable Modification Program. To offer assistance to an estimated 3 to 4 million homeowners through a cost-sharing arrangement with mortgage holders and investors to reduce the monthly mortgage payment amounts of homeowners at risk of foreclosure to affordable levels.

Consumer & Business Lending Initiative. To support consumer and business credit markets by providing financing to private investors to issue new securitizations to help unfreeze and lower interest rates for auto, student, and small business loans; credit cards; commercial mortgages; and other consumer and business credit.

Public-Private Investment Program. To address the challenge of "legacy assets" as part of Treasury's efforts to repair balance sheets throughout the financial system and increase the availability of credit to households and businesses.

CPP continues to be the largest and most widely used program under Treasury's TARP authority for stabilizing the financial system. Over the last year, CPP has made significant capital investments in financial institutions, and although Treasury has made progress in monitoring the activities of CPP participants, challenges remain in ensuring that participants comply with program requirements. As of September 11, 2009, CPP had provided more than $204 billion in capital to more than 670 institutions, about 56 percent of total TARP disbursements. The amount of disbursements has slowed significantly, in part, because the institutions receiving CPP capital in recent months are generally smaller than those that received capital in the beginning of the program. Also, many CPP applicants have withdrawn their applications from consideration because of uncertainties about program requirements and improving economic conditions. Consistent with our recommendations, Treasury began to collect detailed information for the largest institutions in February 2009 and basic information through monthly lending surveys from all CPP participating institutions later in June. 
These monthly surveys are an important step toward greater transparency and accountability for institutions of all sizes. We have also made recommendations that Treasury strengthen its oversight of participants' compliance with the act's program requirements (e.g., restrictions on executive compensation, dividend payments, and stock repurchases), and Treasury continues to make progress in these areas. For example, Treasury has hired three asset management firms to provide market advice about its portfolio of investments and to oversee compliance with the terms of CPP agreements. However, Treasury has yet to finalize the specific guidance and performance measures for the asset managers' oversight responsibilities and has not established a process for monitoring asset managers' performance. Early in the implementation of TARP, Treasury provided what it now refers to as "exceptional assistance" to three institutions—AIG, Citigroup, and Bank of America. For example, Treasury, along with the Board of Governors of the Federal Reserve System (Federal Reserve), and the Federal Reserve Bank of New York (FRBNY), provided assistance to AIG, the sole participant in TARP's Systemically Significant Failing Institutions (SSFI) program. As discussed in our recently issued report, Treasury committed $70 billion in TARP funds to AIG and, together with the Federal Reserve, had made over $182 billion available to assist the company between September 2008 and April 2009. As of September 2, 2009, the outstanding balance of federal government assistance used by AIG was $120.7 billion. In providing the assistance, Treasury and the Federal Reserve have taken several steps intended to protect the federal government's interest. These include making loans that are secured with collateral, instituting certain controls over management, and obtaining compensation for risks such as charging interest, requiring dividend payments, and obtaining warrants. 
Moreover, Treasury and the FRBNY staff routinely monitor AIG's operations and receive reports on AIG's condition and restructuring. While these efforts are being made, however, the government remains exposed to risks, including credit and investment risk. As a result, Treasury and FRBNY may not be repaid in full. We recently reported that, as of September 21, 2009, AIG had not declared and paid the three scheduled dividend payments since the inception of the preferred equity investments. According to Treasury, if AIG fails to make its next dividend payment due on November 1, Treasury will be able to directly elect at least two board members. GAO-developed indicators of AIG's repayment of federal assistance show some progress in AIG's ability to repay the federal assistance; however, any improvement in the stability of AIG's business depends on the long-term health of the company, market conditions, and continued federal government support. For this reason, the ultimate success of federal efforts to aid AIG's restructuring and the scope of possible repayments remain unclear at this point. Also during the early phase of TARP (December 2008), Treasury established the Automotive Industry Financing Program (AIFP) to help stabilize the U.S. automotive industry and avoid disruptions that would pose systemic risk to the nation's economy. Under this program, Treasury has committed a total of about $82.6 billion to help support automakers, automotive suppliers, consumers, and auto finance companies. Chrysler and GM have received a sizeable amount of funding to support their reorganization. In exchange, Treasury received a substantial ownership interest in the companies and debt obligations. Over the last year, Chrysler and GM filed for bankruptcy and streamlined their operations by closing factories and reducing the number of dealerships. However, whether the new Chrysler and new GM will achieve long-term financial viability remains unclear. 
As we have previously reported, Treasury should have a plan for ending its financial involvement with Chrysler and GM that indicates how it will divest itself of its ownership shares. In developing and implementing such a plan, Treasury should weigh the objective of expeditiously ending the federal government's financial involvement in the companies against the objective of recovering an acceptable amount of the funding provided to them. We will report later this fall on Treasury's approach to managing its ownership interests in the companies, how it plans to divest itself of these interests, and the progress the companies have made in restructuring since receiving federal government assistance. We also plan to report this winter on how Chrysler and GM's restructuring efforts have affected their pension plan assets and what the federal government's potential exposure will be should the companies terminate their plans. Following the early months of TARP implementation, which largely focused on capital investments, and amid concerns about the overall strategic direction of the program and a lack of transparency, the new administration has attempted to provide a more strategic direction for using the remaining funds and has created a number of programs aimed at stabilizing the securitization markets and preserving homeownership. For example, TALF, a program launched by Treasury and the Federal Reserve, has been used mostly for credit card and auto loan securitizations and was extended through March 2010 to cover more asset classes. As of September 17, 2009, TALF loan requests are only about a quarter of the $200 billion maximum that Treasury currently anticipates being made by FRBNY, which is much less than the $1 trillion potential expansion that the Federal Reserve and Treasury initially announced.
The relatively low loan volume could be attributed to recent improvements in securitization and credit markets that make the financing terms of TALF loans less attractive, according to agency officials and certain market participants. Because the Banking Agency Audit Act (31 U.S.C. § 714) prohibits GAO from auditing certain Federal Reserve activities, we are limited in our ability to review the Federal Reserve's actions with respect to TALF. In May 2009, legislation was passed that gave GAO authority to audit Federal Reserve actions taken with respect to three entities also assisted under TARP—AIG, Citigroup, and Bank of America—but not TALF. To enable us to audit TARP support for TALF most effectively, we would support legislation to provide GAO with audit authority over Federal Reserve actions taken with respect to TALF, together with appropriate access. While TALF has been implemented, HAMP and PPIP face ongoing implementation and operational challenges. For example, HAMP faces a significant challenge that centers on uncertainty over the number of homeowners it will ultimately help. Residential mortgage defaults and foreclosures are at historical highs, and Treasury officials and others have identified reducing the number of unnecessary foreclosures as critical to the current economic recovery. In our July 2009 report, we noted that Treasury's estimate that 3 to 4 million homeowners would likely be helped under the HAMP loan modification program may have been overstated. Further, concerns have been raised about the capacity and consistency of servicers participating in HAMP in offering loan modifications to qualified homeowners facing potential foreclosure. Treasury has taken some actions to encourage servicers to increase the number of modifications made, including sending a letter to participating HAMP servicers and meeting with them to discuss challenges to making modifications.
However, the ultimate result of Treasury’s actions to increase the number of HAMP loan modifications and the corresponding impact on stabilizing the housing market remains to be seen. Treasury faces other challenges in implementing HAMP, including ensuring that decisions to deny or approve a loan modification are transparent to borrowers and establishing an effective system of operational controls to oversee the compliance of participating servicers with HAMP guidelines. In July 2009, we made six recommendations to Treasury to help improve the transparency and accountability of HAMP, which included recommending actions to monitor particular program requirements, reevaluate and review certain program components and assumptions, and strengthen internal controls over HAMP. Treasury noted that it will take various actions in response to our recommendations, such as exploring options to monitor counseling requirements and working to refine its internal controls over HAMP. We plan to continue to monitor Treasury’s responses to our recommendations as part of our ongoing work on HAMP. Treasury announced PPIP in March 2009, but as of September 2009 many elements of the program remain unimplemented and some have questioned whether the program is actually needed. While Treasury continues to take steps to implement the legacy securities program, the legacy loans program has been on hold since early June. Some market participants and observers we spoke with questioned the necessity and timing of PPIP, noting that while the problem of toxic assets remains, the program is less important now than when the crisis first began, for several reasons. One main reason cited by these individuals and by Treasury and the FDIC is that rising investor confidence following the stress test results and successful capital-raising by financial institutions reduced the need for the legacy loans portion of PPIP. 
In addition, banks have increasing incentives to hold troubled assets in the hopes that such assets will perform better in the future, rather than taking losses now. Treasury has continued to make progress in establishing OFS's management infrastructure, overseeing contractors and financial agents, and developing a system of internal control for financial reporting. However, some challenges remain—for example, in staffing some key positions. In accordance with our prior recommendation that it expeditiously hire personnel for OFS, Treasury continued to use direct-hire and various other appointments to bring a number of staff on board quickly and had 197 staff as of September 15, 2009. However, it has yet to fill several key senior positions. For example, in our July 2009 report we recommended that Treasury give high priority to filling the Chief Homeownership Preservation Officer position. Treasury has also been seeking to fill the Chief Investment Officer position since June 2009. Neither position had been filled with permanent staff as of September 15, 2009. Treasury has strengthened management and oversight as reliance on contractors to support TARP has grown over the past year. Treasury is using contracts and financial agency agreements with several private sector firms to obtain a wide range of professional services and other support. In starting up TARP a year ago, OFS's management infrastructure lacked many of the necessary oversight procedures and internal controls for its growing number of contractors and financial agents, including a comprehensive and complete compliance system to monitor and appropriately address vendor-related conflicts of interest. However, Treasury has taken a number of steps toward overcoming a challenging contracting environment and has implemented or substantially implemented all of the contracting- and conflicts-of-interest recommendations we have made over the past year.
OFS has also made progress in developing a comprehensive system of internal control, as we recommended. As required by section 116(b) of the act, we are currently performing the audit of TARP's financial statements and the related internal controls. Our objectives are to render opinions on (1) the financial statements as of and for the period ending September 30, 2009, and (2) internal control over financial reporting and compliance with applicable laws and regulations as of September 30, 2009. We will also be reporting on the results of our tests of TARP's compliance with selected provisions of laws and regulations related to financial reporting. The results of our financial statement audit will be published in a separate report. We also made a series of recommendations aimed at improving the transparency of TARP, including recommending that Treasury establish more effective communication with Congress and the public and develop a clearly articulated strategy for the program, among other things. Consistent with our recommendations, OFS has taken steps over the last year to formalize its communication strategy and improve its communications with Congress and the public about TARP activities and the strategy for using TARP funds. Consistent and timely communication will continue to be an important focus for Treasury as it makes key decisions on the remaining use of TARP funds. While isolating and estimating the effect of TARP programs continues to present a number of challenges, indicators of the cost of credit and perceptions of risk in credit markets suggest broad improvement since the announcement of CPP in October 2008. As we have noted in prior reports, if TARP is having its intended effect, a number of developments might be observed in credit and other markets over time, such as reduced risk spreads, declining borrowing costs, and more lending activity than there would have been in its absence.
However, a slow recovery does not necessarily mean that TARP is failing, because it is not clear what would have happened without the programs. In particular, several market factors help to explain slow growth in lending, including weaknesses in securitization markets and in the balance sheets of financial intermediaries, a decline in the demand for credit, and reduced creditworthiness among borrowers. Nevertheless, as shown in table 2, credit market indicators we have been monitoring suggest there has been broad improvement in interbank, mortgage, and corporate debt markets in terms of the cost of credit and perceptions of risk (as measured by premiums over Treasury securities). In addition, empirical analysis of the interbank market, which showed signs of significant stress in 2008, suggests that CPP and other programs outside TARP that were announced in October 2008 have resulted in a statistically significant improvement in risk spreads even in the presence of other important factors. Although rising foreclosures continue to highlight the challenges facing the U.S. economy, total mortgage originations in the second quarter of 2009 more than doubled from the fourth quarter of 2008. [Table 2 reports the basis-point change since October 13, 2008, in indicators such as the 3-month London Interbank Offered Rate, an average of interest rates offered on dollar-denominated loans.] Though it is difficult to isolate the impact of TARP, economic and credit market indicators will provide important information as Treasury makes decisions about the future of the program. Treasury has recently released a report that begins to discuss the next phase of its stabilization and rehabilitation efforts and includes several indicators.
Treasury’s authority to purchase or insure additional troubled assets will expire on December 31, 2009, unless the Secretary submits a written certification to Congress describing “why the extension is necessary to assist American families and stabilize financial markets, as well as the expected cost to the taxpayers for such an extension.” In the next few months, Treasury will need to make decisions about providing new funding for TARP programs. A set of indicators could serve as part of an analytical basis for such a determination. Mr. Chairman, Ranking Member Shelby, and Members of the Committee, I appreciate the opportunity to discuss these critically important issues and would be happy to answer any questions that you may have. Thank you. For further information on this testimony, please contact Thomas J. McCool on (202) 512-2642 or [email protected]. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses our work on the Troubled Asset Relief Program (TARP), under which the Department of the Treasury (Treasury), through the Office of Financial Stability (OFS), has the authority to purchase or insure almost $700 billion in troubled assets held by financial institutions. It focuses on (1) the nature and purpose of activities that have been initiated under TARP over the past year and ongoing challenges; (2) Treasury's efforts to establish a management infrastructure for TARP; and (3) outcomes measured by indicators of TARP's performance. TARP is one of many programs and activities the federal government has put in place over the past year to respond to the financial crisis. It represents a significant government commitment to stabilizing the financial system. For example, as of September 11, 2009, it had disbursed $363 billion to participating institutions. At the same time, TARP's Capital Purchase Program (CPP) has shown evidence of some success in returning funds to the federal government. Treasury has received almost $7 billion in dividend payments, about $2.9 billion in warrant liquidations, and over $70 billion in repurchases from institutions participating in CPP, as of August 31, 2009. But TARP still faces a variety of challenges. For example, CPP, the largest of the TARP programs, has hundreds of participating institutions. Because of its size, this program requires ongoing strong oversight to ensure that participants comply with the program's requirements as we have recommended in prior reports. In addition, most of the other investment-based TARP programs that have provided assistance to a few large individual institutions present Treasury with the challenge of determining when assistance is no longer needed. 
Further, amid concerns about the strategic direction of the program and lack of transparency, the new administration has attempted to provide a more strategic plan for using the remaining funds and has created a number of programs aimed at stabilizing the securitization markets and preserving homeownership. While some programs, such as the Term Asset-backed Securities Loan Facility (TALF), are fully operational, others including the Home Affordable Modification Program (HAMP) and the Public-Private Investment Program (PPIP), are still new and face ongoing implementation and operational challenges. Finally, even though substantial investments have been made to avert the collapse of American International Group, Inc. (AIG), General Motors Corporation (GM), and Chrysler LLC (Chrysler), the ultimate outcomes of these investments are unclear and will be influenced by the long-term viability of these entities. Certain of these TARP investments were made with Treasury's expectation that the disbursements would be returned to the federal government. HAMP funds, however, are direct expenditures which are not expected to be repaid. But given the many challenges and uncertainties facing TARP programs, the total cost to the government of these programs remains unclear at this time.
CBP has increased personnel—by 17 percent over its 2004 levels—and resources for border security at the POEs and reported some success in interdicting illegal cross-border activity. At the POEs, for example, CBP reported that deployment of imaging technology had increased seizures of drugs and other contraband. Between the POEs, Border Patrol reported that increased staffing and resources have resulted in some success in reducing the volume of illegal migration and increasing drug seizures. However, weaknesses in POE traveler inspection procedures and infrastructure increased the potential that dangerous people and illegal goods could enter the country, and that currency and firearms could leave the country. Border Patrol continues to face challenges in efforts to address the increasing threat from cross-border drug smuggling activity, with many drug seizures and apprehensions occurring some distance from the border. CBP does not have externally reported performance measures that reflect the results of its overall enforcement efforts at the border. In fiscal year 2010, before it discontinued the public reporting of performance measures showing border security progress, Border Patrol reported few border miles where it had the capability to deter or apprehend illegal activity at the immediate border. DHS is developing a new methodology and performance measures for border security and plans to implement them in fiscal year 2012. CBP reported that $2.7 billion was appropriated in fiscal year 2010 for border security at POEs, with a workforce of 20,600 CBP officers and 2,300 agriculture specialists. These CBP officers inspected 352 million travelers and nearly 106 million cars, trucks, buses, trains, vessels, and aircraft at over 330 air, sea, and land POEs. 
To facilitate inspections, the Western Hemisphere Travel Initiative (WHTI) generally requires all citizens of the United States and citizens of Canada, Mexico, and Bermuda traveling to the United States as nonimmigrant visitors to have a passport or other accepted document that establishes the bearer's identity and nationality to enter the country from within the Western Hemisphere. CBP also deployed technology to assist officers in detecting illegal activity, providing 1,428 radiation portal monitors to screen for radiological or nuclear materials, as well as mobile surveillance units, thermal imaging systems, and large- and small-scale non-intrusive inspection imaging systems to detect stowaways and materials such as explosives, narcotics, and currency in passenger vehicles and cargo. CBP reported that these resources have resulted in greater enforcement at the border. For example, CBP reported that deployment of imaging technology at POEs to detect stowaways or materials in vehicles and cargo had resulted in over 1,300 seizures, which included 288,000 pounds of narcotics. In fiscal year 2010, CBP reported turning away over 227,000 aliens who attempted to enter the country illegally; apprehending more than 8,400 people wanted for a variety of charges, including serious crimes such as murder, rape, and child molestation; and seizing over 870,000 pounds of illegal drugs, $147 million in currency (inbound and outbound), more than 29,000 fraudulent documents, and more than 1.7 million prohibited plant materials, meat, and animal byproducts. Despite technology and other improvements in the traveler inspection program, our work has shown that vulnerabilities still exist. We reported in January 2008 that weaknesses remained in CBP's inbound traveler inspection program and related infrastructure, which increased the potential that dangerous people and illegal goods could enter the country.
For example, CBP analyses indicate that several thousand inadmissible aliens and other violators entered the United States in fiscal year 2006. The weaknesses included challenges in attaining budgeted staffing levels because of attrition and lack of officer compliance with screening procedures, such as those used to determine citizenship and admissibility of travelers entering the country as required by law and CBP policy. Contributing factors included lack of focus and complacency, lack of supervisory presence, and lack of training. In this regard, the extent of continued noncompliance is unknown, and CBP management faces challenges in ensuring its directives are carried out. Another challenge was that CBP headquarters did not require field managers to share the results of their periodic audits and assessments to ensure compliance with the inspection procedures, hindering the ability of CBP management to efficiently use the information to overcome weaknesses in traveler inspections. To mitigate infrastructure weaknesses, such as the lack of vehicle barriers, CBP estimated in 2007 that it would need about $4 billion to make capital improvements at all 163 of the nation’s land crossings. CBP was also challenged by the fact that some POEs are owned by other governmental or private entities, adding to the time and complexity in addressing infrastructure problems. DHS concurred with our recommendations that CBP enhance internal controls in the inspection process, establish measures for training provided to CBP officers and new officer proficiency, and implement performance measures for apprehending inadmissible aliens and other violators; and indicated that CBP was taking steps to address the recommendations. CBP’s public outreach campaign has led to a high rate of compliance with WHTI’s document requirements, averaging more than 95 percent nationally throughout fiscal year 2010. 
CBP conducts queries against law enforcement databases for more than 95 percent of the traveling public, up from 5 percent in 2005. We reported in June 2010, however, that CBP officers at POEs are unable to take full advantage of the security features in WHTI documents because of time constraints, limited use of technology in primary inspection, and the lack of sample documents for training. For example, while CBP had deployed technology tools for primary inspectors to use when inspecting documents, it could make better use of fingerprint data to mitigate the risk of imposter fraud with border crossing cards, the most common type of fraud. We are currently reviewing the training of CBP officers at POEs for the House Homeland Security Committee and the Senate Homeland Security and Governmental Affairs Committee and plan to report the results of this work later this year. In June 2009 and March 2011, we reported the results of our review of CBP's Outbound Enforcement Program, which is intended to stem the illegal cross-border smuggling of firearms and large volumes of cash used by Mexican drug-trafficking organizations, terrorist organizations, and other groups with malevolent intent. Under the program, CBP inspects travelers leaving the country at all 25 land ports of entry along the southwest border. On the northern border, inspections are conducted at the discretion of the Port Director. Available evidence indicated that many of the firearms fueling Mexican drug violence originated in the United States, including a number of increasingly lethal weapons, and that the U.S. government faced several challenges in combating illicit sales of firearms in the United States and stemming their flow to Mexico. DOJ's Bureau of Alcohol, Tobacco, Firearms and Explosives and DHS's ICE are the primary agencies implementing efforts to address this issue.
However, we reported in June 2009 that these agencies did not effectively coordinate their efforts, in part because they lacked clear roles and responsibilities and had been operating under an outdated interagency agreement. Additionally, these agencies generally had not systematically gathered, analyzed, and reported data that could be useful in planning and assessing the results of their efforts to address arms trafficking to Mexico. Further, until June 2009, when the administration included a chapter on combating illicit arms trafficking to Mexico in its National Southwest Border Counternarcotics Strategy, the various efforts undertaken by individual U.S. agencies were not part of a comprehensive U.S. governmentwide strategy for addressing the problem. DHS agreed with our recommendations that DHS and DOJ, among other agencies, improve interagency coordination, data gathering and analysis, and strategic planning, and described steps it was undertaking to implement them. DOJ did not comment on the report. We previously reported that stemming the flow of bulk cash has been a difficult and challenging task. From March 2009 through February 22, 2011, as part of the Outbound Enforcement Program, CBP officers seized about $67 million in illicit bulk cash leaving the country at land POEs, almost all of which was seized along the southwest border. However, the National Drug Intelligence Center estimates that criminals smuggle $18 billion to $39 billion a year across the southwest border, and that the flow of cash across the northern border with Canada is also significant. CBP challenges we reported included limited hours of operation, technology, infrastructure, and procedures to support outbound inspection operations. For example, as of March 2011, license plate readers were available at 48 of 118 outbound lanes on the southwest border but at none of the 179 outbound lanes on the northern border.
CBP is in the early phases of this program and has not yet taken some actions to gain a better understanding of how well the program is working, such as gathering data for measuring program costs and benefits. Our March 2011 testimony also included information about regulatory gaps related to the stored value industry, including exemptions from anti-money laundering requirements for certain types of financial institutions and the lack of cross-border reporting requirements with regard to the use of stored value, such as prepaid cards. For example, individuals must report transporting more than $10,000 in currency or monetary instruments when crossing the U.S. border, but the Department of the Treasury's Financial Crimes Enforcement Network (FinCEN) does not have a similar requirement in place for individuals transporting stored value across U.S. borders. The Credit Card Accountability Responsibility and Disclosure Act of 2009 (Credit CARD Act) required the Secretary of the Treasury, in consultation with the Secretary of Homeland Security, to issue regulations in final form implementing the Bank Secrecy Act regarding the sale, issuance, redemption, or international transport of stored value, including stored value cards. In doing so, the Credit CARD Act stated that Treasury may issue regulations regarding the international transport of stored value to include reporting requirements pursuant to the statute applicable to the transport of currency or monetary instruments. CBP and FinCEN concurred with our recommendations that they gather cost-benefit data and develop a plan to better manage rulemaking, respectively, and described actions they were taking to implement them.
CBP reported that $3.6 billion was appropriated in fiscal year 2010 for border security efforts between the POEs, and that the Border Patrol is better staffed now than at any time in its 86-year history, having doubled the number of agents from 10,000 in fiscal year 2004 to more than 20,500 in fiscal year 2010. CBP also constructed 649 miles of pedestrian and vehicle fencing on the southwest border, covering 33 percent of the border, and increased its investment in traffic checkpoints, the last layer of defense in Border Patrol's effort to apprehend illegal activity that has crossed the border undetected. Border Patrol reported that apprehensions had decreased nationwide by 36 percent from fiscal year 2008 (nearly 724,000) to fiscal year 2010 (approximately 463,000), indicating in its view that fewer people were attempting to illegally cross the border. However, during the same time that apprehensions decreased, marijuana seizures increased almost 50 percent, from over 1.6 million pounds in fiscal year 2008 to about 2.4 million pounds in fiscal year 2010, and CBP has been challenged to link its investments to changes in border control. We reported in May 2010 that CBP had not accounted for the impact of its investment in border fencing and infrastructure on border security. Border fencing was designed to prevent people on foot and vehicles from crossing the border and to enhance Border Patrol agents' ability to respond to areas of illegal entry. CBP estimated that the border fencing had a life cycle of 20 years and, over these years, a total estimated cost of about $6.5 billion to deploy, operate, and maintain the fencing and other infrastructure. According to CBP, during fiscal year 2010 there were 4,037 documented and repaired breaches of the fencing, and CBP spent at least $7.2 million to repair the breaches, or an average of about $1,800 per breach.
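The per-breach repair figure cited above is a simple average of the two numbers CBP reported. A minimal Python sketch (using only the fiscal year 2010 figures quoted in this testimony) shows the arithmetic:

```python
# Fiscal year 2010 fence-repair figures as reported by CBP in the testimony.
repair_spending = 7_200_000  # "at least $7.2 million" spent repairing breaches
breaches = 4_037             # documented and repaired breaches of the fencing

# Average repair cost per breach; the testimony rounds this to about $1,800.
average_cost = repair_spending / breaches
print(f"${average_cost:,.0f} per breach")
```

Because the $7.2 million is a floor ("at least"), the true average repair cost is likely somewhat above the roughly $1,800 figure CBP cited.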
CBP reported an increase in control of southwest border miles, but could not account separately for the impact of the border fencing and other infrastructure. In our May 4, 2010, testimony, we concluded that until CBP determines the contribution of border fencing and other infrastructure to border security, it is not positioned to address the impact of its investment; and reported that in response to a prior recommendation, CBP was in the process of conducting an analysis of the impact of tactical infrastructure on border security. Traffic checkpoints contributed to furthering the Border Patrol mission to protect the border. In 2008, they accounted for about 35 percent of Border Patrol drug seizures along the southwest border and 17,000 apprehensions of illegal aliens, including 3 individuals identified as persons linked to terrorism. However, we reported in August 2009 that Border Patrol did not have measures to determine if these checkpoints were operating effectively and efficiently, and weaknesses in checkpoint design and operation increased the risk that illegal activity may travel to the U.S. interior undetected. Border Patrol officials said that several factors impeded higher levels of performance, including insufficient staff, canine teams, and inspection technology. Other challenges included insufficient guidance to ensure that new checkpoints were appropriately sized, lack of management oversight and guidance to ensure consistent data collection practices, and a lack of performance measures to determine if checkpoints were operating efficiently and effectively with minimal adverse impact on local communities. CBP agreed with our recommendations to take several actions to strengthen checkpoint design and staffing, and improve the measurement and reporting of checkpoint effectiveness, including community impact and identified actions planned or underway to implement the recommendations. 
As of fiscal year 2011, CBP no longer has externally reported performance goals or measures that reflect its overall success in detecting illegal entries and contraband at and between the POEs, but the measures for fiscal year 2010 showed that few land border miles were at a level of control where deterrence or apprehension of illegal entries occurs at the immediate border. Border Patrol is, however, in the process of developing a new methodology and performance measures for assessing border security between the POEs. Further, OFO has multiple performance measures in place, but it does not have an external measure that captures the results of its overall enforcement efforts at POEs. In fiscal year 2009, however, OFO used a statistical model to report that over 99 percent of travelers in passenger vehicles passing through the southwest and northern land border POEs were compliant with U.S. laws, rules, and regulations. For the less than 1 percent of travelers who comprised the noncompliant population, OFO officials reported in the CBP Fiscal Year 2009 Performance and Accountability Report a goal of apprehending at least 28 percent of those engaged in serious criminal activity, such as transporting illegal drugs, guns, or other banned substances, in fiscal year 2009, the last year this information was publicly available. OFO officials said that they considered this an effective performance measure and that at the end of fiscal year 2009, the land border POEs had achieved that goal. As we reported in December 2010 and February 2011, and through selected updates, the Border Patrol is in the process of developing new performance measures for assessing border security between the POEs. However, up until fiscal year 2011, Border Patrol used a security performance measure of border miles under control to assess security between the POEs, which reflected its ability to deter or detect and apprehend illegal entries at the border or after they occur.
As we testified in February 2011 about our preliminary observations on this measure, Border Patrol indicated that in fiscal year 2010, 873 of the nearly 2,000 southwest border miles and 69 of the nearly 4,000 northern border miles between Washington and Maine were at an acceptable level of control. Within this border security classification, Border Patrol further distinguished between the ability to deter or detect and apprehend illegal entries at the immediate border versus after entry—at distances of up to 100 miles or more away from the immediate border—into the United States. Our preliminary analysis of these Border Patrol data showed that the agency reported a capability to deter or detect and apprehend illegal entries at the immediate border across 129 of the 873 southwest border miles and 2 of the 69 northern border miles. Our preliminary analysis also showed that Border Patrol reported the ability to deter or detect and apprehend illegal entries after they crossed the border for an additional 744 southwest border miles and 67 northern border miles. As we previously observed in December 2010 and February 2011, and through selected updates, Border Patrol determined in fiscal year 2010 that border security was not at an acceptable level of control for 1,120 southwest border miles and 3,918 northern border miles, and that on the northern border there was a significant or high degree of reliance on enforcement support from outside the border zones for detection and apprehension of cross-border illegal activity. For two-thirds of these southwest miles, Border Patrol reported that the probability of detecting illegal activity was high; however, the ability to respond was defined by accessibility to the area or availability of resources. One-fourth of these northern border miles were also reported at this level. 
The remaining southwest and northern border miles were reported at levels where lack of resources or infrastructure inhibited detection or interdiction of cross-border illegal activity. In our February 2011 testimony regarding our observations on Border Patrol security measures, and through selected updates, we noted that in fiscal year 2011 DHS discontinued the public reporting of performance measures showing border security progress while it develops and implements a new methodology and measures for border security. In the meantime, Border Patrol is reporting on the number of agents and joint operations on the southwest border and the number of apprehensions. CBP does not have an estimate of the time and effort needed to secure the southwest border; however, the agency expects new border security measures to be in place by fiscal year 2012, which will enable it to make such an estimate. DHS, CBP, and Border Patrol headquarters officials said that the new approach to border security between the POEs is expected to be more flexible and cost-effective, and Border Patrol officials expect that they will be requesting fewer resources to secure the border. Federal, state, local, tribal, and Canadian law enforcement partners reported improved DHS coordination to secure the border. For example, interagency forums were beneficial in establishing a common understanding of border security threats, while joint operations helped to achieve an integrated and effective law enforcement response. However, critical gaps remained in sharing information and resources useful for operations, such as daily patrols in vulnerable areas, including National Parks and Forests. Our past work has shown that additional actions to improve coordination could enhance border security efforts on the southwest and northern borders, including those to deter alien smuggling.
Illegal cross-border activity remains a significant threat to federal lands protected by DOI and USDA law enforcement personnel on the southwest and northern borders and can cause damage to natural, historic, and cultural resources, and put agency personnel and the visiting public at risk. We reported in November 2010 that information sharing and communication among DHS, DOI, and USDA law enforcement officials had increased in recent years. Interagency forums were used to exchange information about border issues and interagency liaisons facilitated exchange of operational statistics. However, critical gaps remained in implementing interagency agreements to ensure law enforcement officials had access to daily threat information and compatible secure radio communications needed to better ensure officer safety and an efficient law enforcement response to illegal activity. This was important in Border Patrol’s Tucson sector on the southwest border, where apprehensions on federal lands had not kept pace with the estimated number of illegal entries, indicating that threats caused by drug smugglers and illegal migration may be increasing. Federal land managers in the Tucson sector said they would like additional guidance to determine when illegal cross-border activity poses a sufficient public safety risk to restrict or close access to federal lands. In Border Patrol’s Spokane sector on the northern border, coordination of intelligence information was particularly important due to sparse law enforcement presence and technical challenges that precluded Border Patrol’s ability to fully assess cross-border threats, such as air smuggling of high-potency marijuana. The agencies agreed with our recommendations that DOI and USDA determine if more guidance is needed for federal land closures and that DHS, DOI, and USDA provide oversight and accountability as needed to further implement interagency agreements for coordinating information and integrating operations. 
In January 2011, CBP issued a memorandum to all Border Patrol division chiefs and chief patrol agents emphasizing the importance of USDA and DOI partnerships to address border security threats on federal lands. This action is a positive step toward implementing our recommendations, and we encourage DHS, DOI, and USDA to take the additional steps necessary to monitor and uphold implementation of the existing interagency agreements in order to enhance border security on federal lands. DHS has stated that partnerships with other federal, state, local, tribal, and Canadian law enforcement agencies are critical to the success of northern border security efforts. We reported in December 2010 that DHS efforts to coordinate with these partners through interagency forums and joint operations were considered successful, according to a majority of the partners we interviewed. In addition, DHS component officials reported that federal agency coordination to secure the northern border had improved. However, DHS did not provide oversight for the number and location of forums established by its components, and numerous federal, state, local, and Canadian partners cited challenges related to the inability to resource the increasing number of forums, raising concerns that some efforts may be overlapping. In addition, federal law enforcement partners in all four locations we visited as part of our work cited ongoing challenges between Border Patrol and ICE, Border Patrol and the Forest Service, and ICE and DOJ’s Drug Enforcement Administration in sharing information and resources, which compromised daily border security operations and investigations. DHS had established and updated interagency agreements to address ongoing coordination challenges; however, oversight by management at the component and local levels had not ensured consistent compliance with provisions of these agreements.
We also reported that while Border Patrol’s border security measures reflect a high reliance on law enforcement support from outside the border zones, the extent of partner law enforcement resources that could be leveraged to fill Border Patrol resource gaps, target coordination efforts, and make more efficient resource decisions is not reflected in Border Patrol’s processes for assessing border security and resource requirements. We previously reported in November 2008 that DHS was not fully responsive to a legislative reporting requirement to identify resources needed to secure the northern border. Specifically, the Implementing Recommendations of the 9/11 Commission Act of 2007 required the Secretary of Homeland Security to submit a report to Congress that addresses the vulnerabilities along the northern border and provides recommendations and required resources to address them. DHS agreed with our recommendations to provide guidance and oversight for interagency forums and for component compliance with interagency agreements, and to develop policy and guidance necessary to integrate partner resources in border security assessments and resource planning documents. DHS also reported that it was taking action to address these recommendations. Information is a crucial tool in securing the nation’s borders against crimes and potential terrorist threats. In many border communities, the individuals who are best positioned to observe and report suspicious activities that may be related to these threats are local and tribal law enforcement officers. We reported in December 2009 that 15 of 20 local and tribal law enforcement agencies in southwest or northern border communities we contacted during our work said they received information directly from Border Patrol, ICE, or DOJ’s Federal Bureau of Investigation that was useful for enhancing their situational awareness of crimes along the border and potential terrorist threats.
However, 5 of the 20 agencies reported that they did not receive information from the federal agencies, in part because information-sharing partnerships and related mechanisms to share information did not exist. In addition, officials from 13 of the 20 agencies in border communities said that they did not clearly know what suspicious activities federal agencies wanted them to report, how to report them, or to whom, because federal agencies had not provided necessary guidance. We recommended that DHS and DOJ more fully identify the information needs of and establish partnerships with local and tribal officials along the borders, identify promising practices in developing border intelligence products, and define the suspicious activities that local and tribal officials in border communities are to report and how to report them. DHS agreed with the recommendations and indicated that it was taking action to implement them. DOJ did not comment. Alien smuggling along the southwest border is a growing threat to the security of the United States and Mexico due, in part, to the expanding involvement of Mexican drug trafficking organizations and aliens who illegally enter the region from countries of special interest to the United States such as Afghanistan, Iran, Iraq, and Pakistan. Violence associated with alien smuggling has also increased in recent years, particularly in Arizona. In October 2007, the National Drug Intelligence Center reported that the success of expanding border security initiatives and additional Border Patrol resources are likely obstructing regularly used smuggling routes and fueling an increase in violence, particularly against law enforcement officers in Arizona. We reported in May 2010 and testified in July 2010 that ICE may be missing an opportunity to leverage techniques used by the Arizona Attorney General to disrupt alien smuggling operations.
Specifically, an Arizona Attorney General task force seized millions of dollars and disrupted alien smuggling operations by following cash transactions flowing through money transmitters, which serve as the primary method of payment to those individuals responsible for smuggling aliens. By analyzing money transmitter transaction data, task force investigators identified suspected alien smugglers and those money transmitter businesses that were complicit in laundering alien smuggling proceeds. An overall assessment of whether and how these techniques may be applied by ICE in the context of disrupting alien smuggling could help ensure that it is not missing opportunities to take additional actions and leverage resources to support the common goal of countering alien smuggling. We recommended that ICE assess the Arizona Attorney General’s financial investigations strategy to identify any promising investigative techniques for federal use. ICE concurred with our recommendation and outlined specific steps it was taking to implement it. In January 2011, the Secretary of Homeland Security announced a new direction in deploying technology to assist in securing the border, ending the SBInet program as originally conceived because it did not meet cost-effectiveness and viability standards. Since fiscal year 2006, DHS had allocated about $1.5 billion for SBInet, which was to provide a mix of sensors, radars, and cameras on fixed towers that could gather information along the border and transmit this information to terminals in command centers to provide agents with border situational awareness. Our previous reports on CBP’s SBI program have outlined program challenges and delays. Specifically, the initial segment of SBInet technology, Project 28, encountered performance shortfalls and delays: users were not involved in developing the requirements, contractor oversight was limited, and project scope and complexity were underestimated.
Program uncertainties, such as a lack of fully defined program expectations, continued to delay planned SBInet deployments following Project 28. In addition, the deployment of related infrastructure, such as towers and roads, experienced challenges, including increased costs, unknown life-cycle costs, and land acquisition issues. As part of her decision to end SBInet, the Secretary of Homeland Security directed CBP to proceed with a new plan to deploy a mix of technology to protect the border, called Alternative (Southwest) Border Technology. Under this plan, CBP is to focus on developing terrain- and population-based solutions utilizing existing, proven technology, such as camera-based surveillance systems, for each border region. Accordingly, the plan is to incorporate a mix of technology, including an Integrated Fixed Tower surveillance system similar to that used in the current SBInet system (i.e., a tower with cameras and radar that transmit images to a central location), beginning with high-risk areas in Arizona. According to this new plan, DHS is to deploy other technologies, including Remote Video Surveillance Systems (RVSS), Mobile Surveillance Systems (MSS), and hand-held equipment for use by Border Patrol agents. For fiscal year 2011, DHS plans to use about $159 million to begin buying RVSSs, MSSs, unattended ground sensors, and hand-held devices for Arizona. The President’s fiscal year 2012 budget request calls for $242 million to fund three of five planned deployments of the Integrated Fixed Tower systems in Arizona, although, depending on funding, the earliest DHS expects the deployments to begin is March 2013, with completion anticipated by 2015 or later. The estimated cost for the overall plan’s Arizona component, called the Arizona Technology Plan, is about $734 million, of which $575 million is for the Integrated Fixed Tower component. To arrive at an appropriate mix of technology in its plan, DHS performed an Analysis of Alternatives (AOA).
In March 2011, we provided preliminary observations regarding this analysis. Specifically, we noted that, on the basis of our ongoing review of available information to date, several areas raise questions about how the AOA results were used to inform Border Patrol judgments about moving forward with technology deployments, including the Integrated Fixed Tower system. For example, the AOA cited a range of uncertainties in costs related to the operational effectiveness of the four technology alternatives considered (mobile, fixed tower, agent equipment, and aerial alternatives) in each of the four geographic analysis areas, meaning there was no clear-cut cost-effective technology alternative for any of the analysis areas. Yet the AOA observed that a fixed tower alternative may represent the most effective choice only in certain circumstances. Further, we have questions about how the AOA analyses were factored into planning and budget decisions regarding the optimal mix of technology deployments in Arizona. Specifically, we have not yet examined the Border Patrol’s operational assessment to determine how the results of the AOA were considered in developing technology deployment planning in Arizona and, in turn, the fiscal year 2012 budget request. The cost and effectiveness uncertainties noted above raise questions about the decisions that informed the budget formulation process. We are continuing to assess this issue for the House Homeland Security Committee and will report the final results later this year. DHS took action to better monitor and control the entry and exit of foreign visitors to the United States by establishing the U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT) program, which tracks foreign visitors using biometric information (such as fingerprints) and biographic information.
DHS has incrementally delivered US-VISIT capabilities to track foreign entries, and a biometrically enabled entry capability has been fully operational at about 300 air, sea, and land POEs since December 2006. In November 2009, we reported that, according to DHS, US-VISIT entry operations have produced results. For example, as of June 2009, the program reported that it had recorded more than 150,000 biometric hits at entry, resulting in more than 8,000 people having adverse actions, such as denial of entry, taken against them. Since 2004, however, we have identified a range of DHS management challenges to fully deploying a biometric exit capability intended, in part, to track foreigners who had overstayed their visas and remained illegally in the United States. For example, in November 2009 we reported that DHS had not adopted an integrated approach to scheduling, executing, and tracking the work that needs to be accomplished to deliver a comprehensive exit solution. Most recently, in August 2010 we reported that the DHS pilot programs to track the exit of foreign visitors at air POEs had limitations that curtailed their ability to inform a decision on a long-term exit solution at these POEs. We made recommendations to ensure that US-VISIT exit was planned, designed, developed, and implemented in an effective and efficient manner. DHS generally agreed with our recommendations and outlined actions designed to implement them. Chairman Lieberman, Ranking Member Collins, and members of the committee, this concludes my prepared statement. I will be happy to answer any questions you may have. For further information regarding this testimony, please contact Richard M. Stana at (202) 512-8777 or [email protected]. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement.
Individuals who made key contributions to this testimony are Cindy Ayers, Seto Bagdoyan, and Mike Dino, Assistant Directors; as well as Joel Aldape, Frances Cook, Kevin Copping, Katherine Davis, Justin Dunleavy, Rick Eiserman, Michele Fejfar, Barbara Guffy, Nancy Kawahara, Brian Lipman, Dawn Locke, and Taylor Matheson. Border Security: Preliminary Observations on the Status of Key Southwest Border Technology Programs. GAO-11-448T. Washington, D.C.: March 15, 2011. Moving Illegal Proceeds: Opportunities Exist for Strengthening the Federal Government’s Efforts to Stem Cross-Border Currency Smuggling. GAO-11-407T. Washington, D.C.: March 9, 2011. Border Security: Preliminary Observations on Border Control Measures for the Southwest Border. GAO-11-374T. Washington, D.C.: February 15, 2011. Border Security: Enhanced DHS Oversight and Assessment of Interagency Coordination Is Needed for the Northern Border. GAO-11-97. Washington, D.C.: December 17, 2010. Border Security: Additional Actions Needed to Better Ensure a Coordinated Federal Response to Illegal Activity on Federal Lands. GAO-11-177. Washington, D.C.: November 18, 2010. Moving Illegal Proceeds: Challenges Exist in the Federal Government’s Effort to Stem Cross-Border Currency Smuggling. GAO-11-73. Washington, D.C.: October 25, 2010. Secure Border Initiative: DHS Needs to Strengthen Management and Oversight of Its Prime Contractor. GAO-11-6. Washington, D.C.: October 18, 2010. Homeland Security: US-VISIT Pilot Evaluations Offer Limited Understanding of Air Exit Options. GAO-10-860. Washington, D.C.: August 10, 2010. U.S. Customs and Border Protection: Border Security Fencing, Infrastructure and Technology Fiscal Year 2010 Expenditure Plan. GAO-10-877R. Washington, D.C.: July 30, 2010. Alien Smuggling: DHS Could Better Address Alien Smuggling along the Southwest Border by Leveraging Investigative Resources and Measuring Program Performance. GAO-10-919T. Washington, D.C.: July 22, 2010. 
Border Security: Improvements in the Department of State’s Development Process Could Increase the Security of Passport Cards and Border Crossing Cards. GAO-10-589. Washington, D.C.: June 1, 2010. Alien Smuggling: DHS Needs to Better Leverage Investigative Resources and Measure Program Performance along the Southwest Border. GAO-10-328. Washington, D.C.: May 24, 2010. Secure Border Initiative: DHS Needs to Reconsider Its Proposed Investment in Key Technology Program. GAO-10-340. Washington, D.C.: May 5, 2010. Secure Border Initiative: DHS Has Faced Challenges Deploying Technology and Fencing Along the Southwest Border. GAO-10-651T. Washington, D.C.: May 4, 2010. Information Sharing: Federal Agencies Are Sharing Border and Terrorism Information with Local and Tribal Law Enforcement, but Additional Efforts Are Needed. GAO-10-41. Washington, D.C.: December 18, 2009. Homeland Security: Key US-VISIT Components at Varying Stages of Completion, but Integrated and Reliable Schedule Needed. GAO-10-13. Washington, D.C.: November 19, 2009. Secure Border Initiative: Technology Deployment Delays Persist and the Impact of Border Fencing Has Not Been Assessed. GAO-09-896. Washington, D.C.: September 9, 2009. Border Patrol: Checkpoints Contribute to Border Patrol’s Mission, but More Consistent Data Collection and Performance Measurement Could Improve Effectiveness. GAO-09-824. Washington, D.C.: August 2009. Firearms Trafficking: U.S. Efforts to Combat Arms Trafficking to Mexico Face Planning and Coordination Challenges. GAO-09-709. Washington, D.C.: June 18, 2009. Northern Border Security: DHS’s Report Could Better Inform Congress by Identifying Actions, Resources, and Time Frames Needed to Address Vulnerabilities. GAO-09-93. Washington, D.C.: November 25, 2008. Secure Border Initiative: DHS Needs to Address Significant Risks in Delivering Key Technology Investments. GAO-08-1086. Washington, D.C.: September 22, 2008. Secure Border Initiative: Observations on Deployment Challenges.
GAO-08-1141T. Washington, D.C.: September 10, 2008. Secure Border Initiative: Observations on the Importance of Applying Lessons Learned to Future Projects. GAO-08-508T. Washington, D.C.: February 27, 2008. Border Security: Despite Progress, Weaknesses in Traveler Inspections Exist at Our Nation’s Ports of Entry. GAO-08-329T. Washington, D.C.: January 3, 2008. Border Security: Despite Progress, Weaknesses in Traveler Inspections Exist at Our Nation’s Ports of Entry. GAO-08-219. Washington, D.C.: November 5, 2007. Secure Border Initiative: Observations on Selected Aspects of SBInet Program Implementation. GAO-08-131T. Washington, D.C.: October 24, 2007. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
As part of its mission, the Department of Homeland Security (DHS), through its U.S. Customs and Border Protection (CBP) component, is to secure U.S. borders against threats of terrorism; the smuggling of drugs, humans, and other contraband; and illegal migration. At the end of fiscal year 2010, DHS investments in border security had grown to $11.9 billion and included more than 40,000 personnel. To secure the border, DHS coordinates with federal, state, local, tribal, and Canadian partners. This testimony addresses DHS (1) capabilities to enforce security at or near the border, (2) interagency coordination and oversight of information sharing and enforcement efforts, and (3) management of technology programs. This testimony is based on related GAO work from 2007 to the present and selected updates made in February and March 2011. For the updates, GAO obtained information on CBP performance measures and interviewed relevant officials. CBP significantly increased personnel and resources for border security at and between the ports of entry (POE) and reported some success in interdicting illegal cross-border activity; however, weaknesses remain. At the POEs, for example, CBP reported that deployment of imaging technology to detect stowaways or contraband in cargo had increased seizures of drugs and other contraband, and between the POEs, increased staffing, border fencing, and technology have resulted in some success in reducing the volume of illegal migration and increasing drug seizures. However, as GAO reported from 2007 through 2011, weaknesses in POE traveler inspection procedures and infrastructure increased the potential that dangerous people and illegal goods could enter the country, and that currency and firearms could leave the country and finance drug trafficking organizations and sponsors of terrorism.
CBP used a performance measure to reflect the results of its overall border enforcement efforts, which showed, for fiscal year 2010, few land border miles where it had the capability to deter or apprehend illegal activity at the immediate border. DHS is developing a new methodology and performance measures for border security and plans to implement them in fiscal year 2012. As GAO reported in 2010, federal, state, local, tribal, and Canadian law enforcement partners reported improved DHS coordination to secure the border, but critical gaps remained. For example, interagency forums helped establish a common understanding of border security threats, while joint operations helped to achieve an integrated and effective law enforcement response. However, significant gaps remained in sharing information and resources useful for operations, such as daily patrols in vulnerable areas, like National Parks and Forests. As GAO has reported, and as its related recommendations reflect, improved coordination provides an opportunity to enhance border security efforts on the southwest and northern borders, including those to deter alien smuggling. CBP's Border Patrol component is moving ahead with a new technology deployment plan to secure the border, but its cost, operational effectiveness, and suitability are not yet clear. In January 2011, the Secretary of Homeland Security announced a new direction in deploying technology to assist in securing the border. The decision ended the Secure Border Initiative Network technology program--one part of a multiyear, multibillion-dollar effort aimed at securing the border through technology, such as radar, sensors, and cameras, and infrastructure, such as fencing. Under a new plan, called Alternative (Southwest) Border Technology, Border Patrol is to develop terrain- and population-based solutions using existing, proven technology, such as camera-based surveillance systems.
However, the analysis DHS performed to arrive at an appropriate mix of technology in its new plan raises questions. For example, the analysis cited a range of uncertainties in costs and effectiveness, with no clear-cut cost-effective technology alternative among those considered, as GAO reported in preliminary observations in March 2011. GAO will continue to assess this issue and report its results later this year. GAO is not making any new recommendations in this testimony. However, GAO has previously made recommendations to DHS to strengthen border security, including enhancing measures to protect against the entry of terrorists, inadmissible aliens, and contraband; improving interagency coordination; and strengthening technology acquisition and deployment plans. DHS generally concurred with these recommendations and has actions underway or planned in response.
Beginning in 1993, mining fees have included an annual $100 mining maintenance fee on unpatented mining claims and sites and a $25 location fee on new claims and sites. The maintenance fees are collected in lieu of the annual $100 worth of labor or improvements required by the Mining Law of 1872. In addition, the Department of the Interior appropriations act for fiscal year 1993 and each fiscal year since has established an amount of BLM’s appropriation for MLR to be used for MLAP operations. However, the appropriations acts require that the mining fees that BLM collects be credited against the MLR appropriation used for MLAP operations until all MLR funds used for MLAP are “repaid.” To the extent that fees are insufficient to fully credit the MLR appropriation, the MLR appropriation absorbs the difference and therefore partially funds MLAP. At the end of the fiscal year, BLM issues a reverse warrant to the Treasury for the amount of fees collected. MLAP operations deal with locatable minerals, which include base and precious metals (also called “hardrock minerals”), on public lands. MLAP operations do not include work on nonlocatable or common variety minerals, such as sand or gravel, or oil and gas work. MLAP operations include reviewing and approving plans and notices of mining operations; conducting a comprehensive program of inspections and enforcement to ensure compliance with the terms of plans and notices of operation and related state and local regulations; identifying and eliminating cases of unauthorized occupancy of mining claims; conducting validity examinations of mining operations in order to eliminate cases of mineral trespass; completing mineral examinations and processing the “grandfathered” mineral patent applications; collecting and processing mining fees; and collecting and processing waivers of maintenance fees. We examined how labor was charged to MLAP by BLM during the first 10 months of fiscal year 2000.
We also examined BLM’s methodology for identifying contracts and services that were improperly charged to MLAP during fiscal years 1998 and 1999 and evaluated the processes and procedures developed to correct the improper charges. We also asked employees whether they were familiar with the source of MLAP funding. To accomplish our first objective, we obtained records for MLAP’s fiscal year 2000 collections and labor obligations from BLM’s accounting and payroll systems and conducted a statistically representative survey of 125 BLM employees who charged time to MLAP during the period of October 1, 1999, to July 15, 2000. Our review focused on nine of BLM’s administrative states and offices: Alaska, California, Colorado, Eastern States, Nevada, New Mexico, Oregon, Utah, and the Washington Office. To accomplish our second objective, we reviewed the documentation used in BLM’s review of the contracts and services over $1,500 charged to MLAP for fiscal years 1998 and 1999, a total of 491 contracts. These 491 contracts represented over $8.0 million, or 89.3 percent, of the contracts and services charged to MLAP during this time period. For our third objective, we included a question in our survey asking employees whether they were aware of the source of funding for MLAP. We did not independently verify the reliability of the accounting data provided, nor did we trace the data to the systems from which they came. We conducted our work from August 2000 through December 2000 in accordance with generally accepted government auditing standards. A detailed discussion of our objectives, scope, and methodology is contained in appendix I of this report. BLM provided written comments on a draft of our report. The comments have been incorporated as appropriate and are reprinted as appendix II. We considered but did not reprint the attachment referred to in BLM’s comment letter.
The results of our survey indicated that BLM employees’ hours charged to MLAP were not a reliable record of hours actually worked on that program. According to employees, hours were often charged to MLAP in excess of hours worked or for work unrelated to mining. In addition, individuals received bonuses or awards from MLAP funds although they charged no labor to the program. An accurate accounting of MLAP costs is crucial for proper program management and accountability and serves as a basis for estimating future costs when preparing and reviewing budgets. Proper tracking and recording of MLAP costs is especially important since this program is partially funded through mining fees that Congress has made available only for mining law administration program operations. As BLM’s Fund Coding Handbook states: “Charging work tasks, employee salaries, procurement or contract items, or equipment purchases to any subactivity other than the benefiting subactivity violates the terms of the Appropriations Act. Similarly, when procurements are charged to a given subactivity simply because “money is available there” but have no direct relationship to subactivity’s program accomplishment, is a violation of the integrity of managers’ financial management responsibility and both the specific policy decisions and the direction of proper authorities in setting those requirements. Future year program needs and requirements are based in part on the record of past years’ costs and accomplishments. Therefore, records of actual costs and accomplishments must be accurate as possible.” While approximately one-half of the BLM employees stated that they worked and charged the same amount of time to MLAP, we found that 38.9 percent charged more time to MLAP than they actually worked. Of this 38.9 percent, 17.6 percent of the employees stated that they did not work on MLAP at all during the 10 months of our study period, although a portion of their labor had been charged to the program. 
In contrast, approximately 11.4 percent of BLM employees reported charging less time to MLAP than they actually worked. These results are summarized in table 1. Our survey results showed that there were wide variations between the hours worked and the hours charged to the program among the employees who did not work and charge the same amount of time to MLAP. For example, we found three respondents who reported working 50, 70, and 75 percent of their time on MLAP during our study period, but charged 100 percent of their time to the program. One respondent reported doing no work on MLAP and charging 60 percent of the work hours to the program. In contrast, another respondent reported working 60 percent of the time on MLAP and charging only 11 percent to the program. From our survey results, we were able to compute the total number of hours worked and charged to the program and calculate an estimated net overcharge to the program of about 10.8 percent. Applying this percentage to the 10-month MLAP payroll of approximately $11.4 million for the nine administrative states and offices under review, we estimate the potential net dollar overcharge to MLAP to be about $1.2 million. The total MLAP payroll for all states and offices was approximately $24.0 million for fiscal year 2000. We asked the employees who stated that they charged more time to MLAP than they worked to explain why the additional time was charged to MLAP. 
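The dollar estimate above is straightforward arithmetic; a minimal sketch follows. The 10.8 percent net overcharge rate and the $11.4 million 10-month payroll are the report's figures; the variable names are illustrative only.

```python
# Arithmetic behind the roughly $1.2 million estimate.
net_overcharge_rate = 0.108      # estimated net overcharge rate from the survey
ten_month_payroll = 11_400_000   # 10-month MLAP payroll, nine states and offices

estimated_overcharge = net_overcharge_rate * ten_month_payroll
print(f"${estimated_overcharge:,.0f}")  # $1,231,200, about $1.2 million
```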
During survey pretesting, we identified four possible explanations: (1) time was charged based on the funding allocations (in other words, charges were made to subactivities from which funds were available for obligation rather than from the subactivity related to the task), (2) time was charged based on the directions of a supervisor, (3) time was charged based on the directions of a budget officer, and (4) no other codes were available to charge (for example, the task being done may not have been anticipated in the budget allocation and therefore the proper subactivity code was not available to the office). The results of their responses are presented in table 2. Employees could provide more than one explanation for the overcharging; therefore, the percentages in the table total more than 100 percent. In addition, we asked the employees who stated that they charged more time to MLAP than they worked to specify the tasks that they had charged to MLAP. They reported charging time for such non-MLAP related tasks as processing and approving applications to drill oil and gas wells, working on environmental remediation projects, doing recreation management, and performing vehicle maintenance. BLM officials stated that work involving these tasks should not have been charged to MLAP. Employees also stated that they had charged MLAP for labor involved in preparing mineral reports for land exchanges and conducting work on common variety minerals, such as sand and gravel. BLM officials stated that charging these tasks to MLAP would be improper except for specific and unique cases—for example, preparing a mineral report on a land exchange involving a mining claim or making a validity determination on a mining claim involving sand and gravel deposits. Our survey did not address whether the tasks charged to MLAP were for any of these specific and unique cases. Some BLM employees expressed uncertainty as to which tasks were appropriate to charge to MLAP. 
For example, some employees stated that work on mineral reports for land exchanges or abandoned mine lands should not be charged to MLAP, while other employees told us that they believed that any mineral-related tasks, including work on sand and gravel operations, could be properly charged to MLAP. All of the employees were asked whether they alone determined which subactivity, including MLAP, would be charged for their labor. In our study population, 68.5 percent of the employees responded that they had received directions as to which subactivity to charge. Of these 68.5 percent, approximately 66.3 percent stated that they received either written or verbal direction from their supervisor and 55.7 percent stated that they received either written or verbal direction from the budget officer/official. In total, 93.7 percent of these employees received written or verbal direction from either a supervisor or budget officer. Labor obligations represent a significant portion of total MLAP obligations—about 73.6 percent in fiscal year 2000—therefore, improperly charging labor to MLAP could result in the Congress and program managers using program cost information that is significantly misstated. From BLM’s accounting records we identified 27 individuals who received approximately $34,000 in bonuses or awards financed by MLAP funds, but who had not charged any hours to the program. Because there were no hours charged, these individuals were excluded from our survey; however, we did contact BLM officials to determine the reasons for nine of the awards and bonuses. As stated previously, BLM’s policy is that labor associated with any task should be charged to the subactivity benefiting from that labor. In addition, we interviewed BLM’s Director of Budget to determine BLM’s policy for charging bonuses and awards. He stated that any bonuses and awards received as a result of the labor performed should also be charged to the subactivity that benefited from that labor. 
We found that five awards had been given to employees working on a special project requiring the collection of mining claim documents for the Department of the Interior’s Office of the Solicitor. They received awards from MLAP funds, even though the hours and associated labor for the special project were not charged to MLAP. BLM officials stated that charging these awards to MLAP was appropriate and that the associated labor should also have been charged to the program. Not charging the associated labor costs to MLAP resulted in program costs being understated. The remaining four awards were given to individuals for (1) researching historical land data for a land withdrawal program, (2) assisting in the moving of a BLM office to a new facility, (3) selling a private residence as part of a lateral transfer and not using BLM’s relocation service, and (4) performance resulting in an end-of-year bonus to a Lands and Realty specialist. When asked why these four individual bonuses and awards had been charged to MLAP, BLM officials either could provide no explanation or stated that MLAP had been charged by mistake. These over- and under-charges to MLAP further distort the cost of the program and undermine the usefulness of MLAP operating data for decision-making or performance reporting purposes. As a result of findings discussed with BLM management and provided in an April 2000 congressional briefing, the Acting Director of BLM directed BLM’s administrative states and offices to review contracts and services costing over $1,500 that were charged to MLAP during fiscal years 1998 and 1999. The contracts reviewed represented over $8.0 million, or 89.3 percent, of the contracts and services obligated to MLAP during this time period. 
The methodology BLM used to identify contracts and services improperly charged to MLAP during fiscal years 1998 and 1999 was reasonable and resulted in BLM determining that about $716,000 in contracts and services should not have been charged to MLAP. These improper payments included:
- over $34,000 for janitorial services;
- $30,000 for the appraisal of federal coal leaseholds;
- $25,000 for an attorney in an Equal Employment Opportunity settlement for an employee who had not worked on MLAP tasks;
- $2,800 for a cultural survey of an area prior to an off-highway vehicle and motorcycle race; and
- $2,000 for a habitat survey of a threatened and endangered species of butterfly in an area with no active mining.
In addition, on the basis of our review, we questioned whether an additional $40,000 for two contracts and services was improperly charged to MLAP. These contracts and services were for a cooperative agreement for Geographic Information System (GIS) support and a biological survey. BLM officials concurred and stated that correcting adjustments would be made to the proper appropriation for the additional $40,000. BLM prepared Instruction Memorandum 2000-148 (IM-148) to provide guidance on correcting the contracts and services charges that were improperly charged to MLAP in fiscal years 1998 and 1999. IM-148 required all offices that had improperly charged contracts and services to MLAP to develop implementation plans to replace the funds and submit those plans to BLM’s Director of Budget. Fourteen administrative states and offices developed and submitted implementation plans to make about $716,000 in correcting adjustments. These correcting adjustments must be made to the appropriations that were properly available when the obligations were incurred and BLM’s records adjusted accordingly to charge that appropriation. 
BLM officials have told us that they are identifying the appropriations for fiscal years 1998 and 1999 that should have been charged for the costs of these contracts and services and that there are sufficient funds to make the correcting adjustments of about $716,000. According to BLM’s Reports on Budget Execution filed with the Office of Management and Budget for the fourth quarters of fiscal years 1998 and 1999, MLR unobligated balances were $54 million and $32 million, respectively. On the basis of our review, we determined that BLM offices have complied with all of the requirements in IM-148. While the memorandum stated that all funds, not just MLAP funds, should be expended appropriately and costs properly recorded in BLM’s financial management systems, it did not establish any additional procedures on how to implement this policy to prevent future improper charging of MLAP funds. Therefore, until additional procedures specifically for MLAP are established and implemented, BLM has little assurance that improper charging of MLAP funds will not recur in the future. Finally, as requested, in our survey we asked BLM employees whether they were aware of the source of funding for MLAP. Approximately 47.7 percent of BLM employees stated that they were not aware of the source of funding for MLAP programs. In addition, of the 52.3 percent of employees who stated that they were aware of the source for this funding, about 42.5 percent did not know that the funding was based in part on mining fees and designated for MLAP operations. In total, an estimated 69.9 percent of BLM employees were either not aware of the source of MLAP funding or did not know that the program is partially funded by fees collected from miners that are legally available only for MLAP operations. The Congress and program managers need accurate cost information in order to make informed program and budgeting decisions. 
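The combined 69.9 percent figure above follows directly from the two survey results; a quick check of the arithmetic (percentages are the report's figures, variable names are illustrative):

```python
# Reconstructing the combined estimate of employees who were either unaware
# of MLAP's funding source or did not know mining fees partially fund MLAP.
unaware_of_source = 0.477               # not aware of MLAP's funding source
aware_of_source = 1 - unaware_of_source # the remaining 52.3 percent
unaware_fees_among_aware = 0.425        # of those aware, share who did not
                                        # know fees partially fund MLAP

combined = unaware_of_source + aware_of_source * unaware_fees_among_aware
print(f"{combined:.1%}")  # 69.9%, matching the report's estimate
```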
BLM’s Fund Coding Handbook recognizes that accurate records of costs and accomplishments are critical for planning and decision-making. However, the results of our work at BLM show that BLM’s financial records have not accurately reflected the true costs of its programs because some labor costs and a number of contract and service costs were not charged to the appropriate program. Other subactivities have benefited from the charging of these improper costs to MLAP. Correspondingly, fewer funds have been available for actual MLAP operations. BLM has taken steps to make correcting adjustments for certain of these improper charges, including the development of an Instruction Memorandum. However, the memorandum dealt only with improper charges occurring in fiscal years 1998 and 1999 and did not establish specific guidance or procedures to prevent improper charging of MLAP funds from recurring in the future. Therefore, until additional procedures for MLAP are developed and implemented, the Congress and program managers can place only limited reliance on the accuracy of MLAP cost information. We recommend that the Director of the Bureau of Land Management take the following four actions:
- make correcting adjustments for improper charges to the proper appropriation accounts;
- remind employees that time charges and other obligations are to be made to the benefiting subactivity as stated in BLM’s Fund Coding Handbook and develop a mechanism to test compliance;
- provide detailed guidance clarifying which tasks are chargeable to MLAP operations, such as those listed in the background section of this report; and
- conduct training on this guidance for all employees authorized to charge MLAP.
In commenting on a draft of this report, BLM concurred with the findings and recommendations contained in our report. BLM’s response indicated a number of actions it planned to take to address the respective recommendations. 
Additionally, BLM indicated it would endeavor to improve monitoring guidance and training to ensure the accuracy of costs associated with MLAP. BLM’s comments have been incorporated as appropriate and are reprinted as appendix II. We considered but did not reprint the attachment referred to in BLM’s comment letter. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this report. At that time, we will send copies to the Ranking Members, Senate Committee on Energy and Natural Resources and its Subcommittee on Forests and Public Land Management; the Ranking Minority Members, House Committee on Resources and its Subcommittee on Energy and Mineral Resources; Honorable Gale Norton, the Secretary of the Interior; and Nina Hatfield, the Acting Director of the Bureau of Land Management. Copies will also be made available to others upon request. If you or your staff have any questions concerning this report, please contact me at (202) 512-9508 or Mark Connelly, Assistant Director, at (202) 512-8795. Key contributors to this assignment are listed in appendix III. Our review examined labor charged to the Mining Law Administration Program (MLAP) by the Department of the Interior’s Bureau of Land Management (BLM) during the first 10 months of fiscal year 2000. We also examined BLM’s methodology for identifying contracts and services that were improperly charged to MLAP during fiscal years 1998 and 1999 and evaluated the processes and procedures developed to correct those improper charges. Finally, we determined whether BLM employees were aware of the source of MLAP funding. To accomplish our first objective, we obtained the records for MLAP’s fiscal year 2000 collections and labor obligations from BLM’s accounting and payroll systems. 
During fiscal year 2000, BLM collected over $22.7 million in MLAP fees, with BLM’s Nevada State Office collecting approximately $11.1 million, or 48.9 percent, of the total collections. In the same year, MLAP reported obligations totaling approximately $32.6 million. As agreed, our review focused on BLM employees charging time to or receiving pay from the Mining Law Administration Program in Alaska, California, Colorado, Eastern States, Nevada, New Mexico, Oregon, Utah, and Washington Office for the period October 1, 1999, through July 15, 2000. The nine administrative states and offices reported MLAP obligations of over $23.4 million, representing approximately 72 percent of total MLAP obligations. BLM classified these obligations as either labor or operational in purpose. Labor obligations, including leave surcharge, for the nine administrative states and offices totaled over $17.4 million, or over 74 percent of the total obligations for the nine administrative states and offices in fiscal year 2000. In order to evaluate labor charges to MLAP by BLM during fiscal year 2000, we conducted a statistically representative survey of BLM employees who had charged MLAP during this 10-month period. The survey included questions regarding employees’ time keeping and reporting practices during the survey period, the tasks they worked on, and the subactivities charged for their work. We specifically asked employees in our survey whether they were aware of the source of MLAP funding. Estimates included in this report are representative of the study population for this 10-month period. The study population consisted of 744 BLM employees for whom work was charged to the MLAP appropriation account during the period of October 1, 1999, through July 15, 2000, in the nine administrative states and offices listed above. 
Excluded from the study population were those employees who did not charge work to MLAP, even if they had other charges to the funds appropriated for MLAP operations. The sample design for this study is a single-stage stratified sample of the employees in the study population. The nine administrative states and offices are the strata for our study population. The sample of 125 employees was selected from the strata in proportion to the study population in each stratum. We obtained 116 useable responses from this sample. The population, sample allocation, and sample disposition are summarized in the following table. The survey questionnaire was pretested twice and then distributed to survey participants in advance of the telephone interviews. Interviewers used a computer-assisted data entry program to conduct the telephone interviews and to input the sample data into a database. This was a telephone survey, with a hard copy of the questionnaire made available to the respondent a couple of days prior to the telephone interview. Data were collected between October 25 and November 20, 2000. Follow-up interviews were performed between November 21 and December 21, 2000. We received useable responses from 116 sampled employees for an overall response rate of 92.8 percent. The nonrespondents consisted of nine individuals who were no longer BLM employees and could not be located, had retired, or were on extended sick leave. Response rates by strata are summarized in table 3. After weighting survey responses to account for selection probabilities and nonresponses, estimates were produced for various characteristics of the study population. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. 
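The proportional stratified allocation just described can be sketched as follows. The study population of 744 employees and the sample size of 125 are the report's figures; the per-stratum population counts below are hypothetical, since the report presents the actual breakdown only in its tables.

```python
# Sketch of single-stage stratified sampling with proportional allocation.
# Per-stratum counts are hypothetical; only the totals match the report.
population = {
    "Alaska": 60, "California": 90, "Colorado": 74,
    "Eastern States": 40, "Nevada": 180, "New Mexico": 70,
    "Oregon": 90, "Utah": 80, "Washington Office": 60,
}  # hypothetical split summing to 744
total = sum(population.values())
sample_size = 125

# Allocate the sample to each stratum in proportion to its population share;
# because of rounding, the allocations may not sum exactly to 125.
allocation = {s: round(sample_size * n / total) for s, n in population.items()}
print(allocation)
```

In practice, survey weights for each respondent are the inverse of the selection probability within the stratum, which is how responses were later weighted to produce population estimates.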
Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95-percent confidence interval (for example, plus or minus 9 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95-percent confident that each of the confidence intervals in this report will include the true values in the study population. In this report, all percentage estimates from the MLAP survey have sampling errors of plus or minus 9 percentage points or less, unless otherwise noted. In addition to the reported sampling errors, the practical difficulties of conducting any survey may introduce other types of “nonsampling” errors. For example, differences in how a particular question is interpreted, the sources of information that are available to respondents, or the types of employees who do not respond can all introduce unwanted variability into the survey results. Although we did not verify respondents’ answers, we did include steps in the questionnaire design, data collection, data entry, and analysis processes to minimize nonsampling errors. Specifically, we modified our questions based on pretests to make them more understandable and easier to answer. We made repeated attempts to contact sample employees to encourage a high level of response that would reduce any potential nonresponse bias. As data were keyed, they were automatically checked for internal consistency and for out-of-range values. An additional review of survey estimates revealed internally inconsistent data between two questions for eight survey respondents. We made follow-up phone calls to these respondents and reconciled the noted inconsistencies. Another potential source of nonsampling error in this survey would be the reporting of information for the wrong time period. 
Estimates from this survey are only applicable to the time period from October 1, 1999, to July 15, 2000, and cannot accurately represent labor performed over the entire fiscal year. For example, if employees tended to charge more or less time than worked to certain programs during the last 2 months of the fiscal year, that labor would not be reflected in survey estimates. To reduce the possibility that respondents might report on labor performed after the end of the study period, language was included with many of the survey questions reminding respondents to report only on labor performed during the study period. Our study population was limited to those BLM employees who charged work to MLAP during the study period. Consequently, the estimates from the MLAP survey would not reflect any labor by employees who worked on MLAP during the study period but did not charge the program. In addition to our statistically representative telephone survey, we conducted telephone interviews with judgmentally selected BLM employees who had received awards or bonuses from MLAP, but had not charged any time to the program. To accomplish our second objective, we reviewed the documentation used in BLM’s review of the contracts and services over $1,500 charged to MLAP for fiscal years 1998 and 1999, a total of 491 contracts. These 491 contracts represented over $8.0 million, or 89.3 percent, of the contracts and services obligated to MLAP during this time period. We conducted walkthroughs of the procedures BLM performed in that review and evaluated BLM’s criteria for determining appropriate use of MLAP funds. We also interviewed field office personnel regarding internal controls associated with the requisition and approval of contracts and services. 
In order to review BLM’s procedures for making correcting adjustments for improper charges to MLAP, we obtained and reviewed BLM’s Instruction Memorandum No. 2000-148 (IM-148), which required all offices that had improperly charged contracts and services to MLAP to develop implementation plans for the replacement of the miscoded MLAP funds and submit those plans to BLM’s Director of Budget. We then interviewed BLM officials responsible for coordinating the implementation of IM-148, reviewed the procedures taken to comply with IM-148’s requirements, and verified that the procedures had been followed by reviewing the pertinent documentation, including implementation plans, reports of budgetary transactions, and budget control printouts. We compared the contracts and services listed in the transactions reports prepared by BLM’s Management Information System with the lists of contracts and services submitted by BLM’s administrative states and offices to test the completeness of BLM’s sample. Finally, we reviewed the 491 contract amounts using the documentation provided to BLM. We examined the amounts cited, the services provided, and the justifications given in order to verify that the charges to MLAP were appropriate. We did not independently verify the reliability of the accounting data provided, nor did we trace the data to the systems from which they came. We conducted our work from August 2000 through December 2000 in accordance with generally accepted government auditing standards. Mark P. Connelly, Edda Emmanuelli-Perez, Lisa M. Knight, W. Stephen Lowrey, Miguel A. Lujan, Mark F. Ramage, Shannah B. Wallace, and McCoy Williams made key contributions to this report. 
The Bureau of Land Management's (BLM) Mining Law Administration Program (MLAP) is responsible for managing the environmentally responsible exploration and development of locatable minerals on public lands. The program is funded through mining fees collected from the holders of unpatented mining claims and sites and by appropriations to the extent that fees are inadequate to fund the program. Congress and program managers need accurate cost information in order to make informed program and budgeting decisions. However, GAO found that BLM's financial records did not accurately reflect the true costs of its programs because labor costs and a number of contract and service costs were charged to MLAP rather than to the appropriate programs. As a result, other subactivities benefited from the charging of these improper costs and fewer funds have been available for actual MLAP operations. BLM has taken steps to make correcting adjustments for improper charges to MLAP contracts and services; however, additional adjustments are needed to correct for labor costs that were improperly charged to MLAP. Until these adjustments for improperly charged labor are made, Congress and program managers can place only limited reliance on the accuracy of MLAP cost information.
In my role as lead partner on the audit of the U.S. government’s consolidated financial statements and the de facto Chief Accountability Officer of the United States government, I have become increasingly concerned about the state of our nation’s finances. In speeches and presentations over the past several years, I have called attention to our large and growing long-term fiscal challenge and the risks it poses to our nation’s future. Simply put, our nation’s fiscal policy is on an unsustainable course, and our long-term fiscal imbalance worsened significantly in 2004. GAO’s simulations—as well as those of the Congressional Budget Office (CBO) and others—show that over the long term we face a large and growing structural deficit due primarily to known demographic trends and rising health care costs. Continuing on this unsustainable fiscal path will gradually erode, if not suddenly damage, our economy, our standard of living, and ultimately our national security. Our current path also will increasingly constrain our ability to address emerging and unexpected budgetary needs. Regardless of the assumptions used, all simulations indicate that the problem is too big to be solved by economic growth alone or by making modest changes to existing spending and tax policies. Nothing less than a fundamental reexamination of all major spending and tax policies and priorities is needed. This reexamination should also involve a national discussion about what Americans want from their government and how much they are willing to pay for those things. This discussion will not be easy, but it must take place. In fiscal year 2004 alone, the nation’s fiscal imbalance grew dramatically, primarily due to enactment of the new Medicare prescription drug benefit, which added $8.1 trillion to the outstanding commitments and obligations of the U.S. government. 
The near-term deficits also reflected higher defense, homeland security, and overall discretionary spending that exceeded growth in the economy, as well as revenues that have fallen below historical averages due to policy decisions and other economic and technical factors. While the nation’s long-term fiscal imbalance grew significantly, the retirement of the baby boom generation has come closer to becoming a reality. In fact, the cost implications of the baby boom generation’s retirement have already become a factor in CBO’s baseline projections and will only intensify as the boomers age. According to CBO, total federal spending for Social Security, Medicare, and Medicaid is projected to grow by about 25 percent over the next 10 years—from 8.4 percent of gross domestic product (GDP) in 2004 to 10.4 percent in 2015. Given these and other factors, it is clear that the nation’s current fiscal path is unsustainable and that tough choices will be necessary in order to address the growing imbalance. There are different ways to describe the magnitude of Social Security’s long-term financing challenge, but they all show a need for program reform sooner rather than later. A case can be made for a range of different measures, as well as different time horizons. For instance, the shortfall can be measured in present value, as a percentage of GDP, or as a percentage of taxable payroll. The Social Security Administration (SSA) has made projections of the Social Security shortfall using different time horizons. (See table 1.) While estimates vary due to different horizons, both identify the same long-term challenge: The Social Security system is unsustainable in the long run. Taking action soon on Social Security would not only make the necessary action less dramatic than if we wait but would also promote increased budgetary flexibility in the future and stronger economic growth. 
Although the Trustees’ 2004 intermediate estimates project that the combined Social Security Trust Funds will be solvent until 2042, within the next few years, Social Security spending will begin to put pressure on the rest of the federal budget. (See table 2.) Under the Trustees’ 2004 intermediate estimates, Social Security’s cash surplus—the difference between program tax income and the costs of paying scheduled benefits—will begin a permanent decline in 2008. (See fig. 1.) To finance the same level of federal spending as in the previous year, additional revenues, increased borrowing, or both will be needed in each subsequent year. By 2018, Social Security’s cash income (tax revenue) is projected to fall below program expenses. At that time, Social Security will join Medicare’s Hospital Insurance Trust Fund, whose outlays exceeded cash revenues in 2004, as a net claimant on the rest of the federal budget. The combined OASDI Trust Funds will begin drawing on the Treasury to cover the cash shortfall. At this point, Treasury will need to obtain cash for those redeemed securities through increased taxes, spending cuts, or more borrowing from the public than would have been the case had Social Security’s cash flow remained positive. Today Social Security spending exceeds federal spending for Medicare and Medicaid, but that will change. While Social Security is expected to grow about 5.6 percent per year on average over the next 10 years, Medicare and Medicaid combined are expected to grow at 8.5 percent per year. As a result, CBO’s baseline projects Medicare and Medicaid spending will be about 30 percent higher than Social Security in 2015. According to the Social Security and Medicare trustees, Social Security will grow from 4.3 percent of GDP today to 6.6 percent in 2075, and Medicare’s burden on the economy will quintuple—from 2.7 percent to 13.3 percent of the economy. 
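The growth-rate comparison above can be checked with simple compound-growth arithmetic. The 5.6 and 8.5 percent average annual growth rates are the figures cited in the text; this sketch compares cumulative growth only, setting aside differences in starting spending levels, which CBO's 30 percent figure also reflects.

```python
# Rough check: how much faster do Medicare and Medicaid grow, cumulatively,
# than Social Security over a decade at the cited average annual rates?
ss_rate, mm_rate = 0.056, 0.085   # average annual growth rates from the text
years = 10

relative_gain = ((1 + mm_rate) / (1 + ss_rate)) ** years - 1
print(f"{relative_gain:.0%}")  # about 31 percent faster cumulative growth
```

With starting levels near parity, a roughly 31 percent growth differential is consistent with CBO's projection that Medicare and Medicaid spending will be about 30 percent higher than Social Security spending in 2015.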
GAO’s long-term simulations illustrate the magnitude of the fiscal challenges associated with an aging society and the significance of the related challenges the government will be called upon to address. Figures 2 and 3 present these simulations under two different sets of assumptions. In figure 2, we begin with CBO’s January baseline, constructed according to the statutory requirements for that baseline. Consistent with these requirements, discretionary spending is assumed to grow with inflation for the first 10 years and tax cuts scheduled to expire are assumed to expire. After 2015, discretionary spending is assumed to grow with the economy, and revenue is held constant as a share of GDP at the 2015 level. In figure 3 two assumptions are changed: discretionary spending is assumed to grow with the economy after 2005 rather than merely with inflation, and the tax cuts are extended. For both simulations Social Security and Medicare spending is based on the 2004 Trustees’ intermediate projections, and we assume that benefits continue to be paid in full after the trust funds are exhausted. Medicaid spending is based on CBO’s December 2003 long-term projections under mid-range assumptions. Both these simulations illustrate that, absent policy changes, the growth in spending on federal retirement and health entitlements will encumber an escalating share of the government’s resources. Indeed, when we assume that recent tax reductions are made permanent and discretionary spending keeps pace with the economy, our long-term simulations suggest that by 2040 federal revenues may be adequate to pay little more than interest on the federal debt. Neither slowing the growth in discretionary spending nor allowing the tax provisions to expire—nor both together—would eliminate the imbalance. 
Although revenues will be part of the debate about our fiscal future, the failure to reform Social Security, Medicare, Medicaid, and other drivers of the long-term fiscal gap would require at least a doubling of taxes—and that seems implausible. Accordingly, substantive reform of Social Security and our major health programs remains critical to recapturing our future fiscal flexibility. Although considerable uncertainty surrounds long-term budget projections, we know two things for certain: the population is aging and the baby boom generation is approaching retirement age. The aging population and rising health care spending will have significant implications not only for the budget but also for the economy as a whole. Figure 4 shows the total future draw on the economy represented by Social Security, Medicare, and Medicaid. Under the 2004 Trustees’ intermediate estimates and CBO’s long-term Medicaid estimates, spending for these entitlement programs combined will grow to 15.6 percent of GDP in 2030 from today’s 8.5 percent. It is clear that, taken together, Social Security, Medicare, and Medicaid represent an unsustainable burden on future generations. The government can help ease future fiscal burdens through spending reductions or revenue actions that reduce debt held by the public, thereby saving for the future and enhancing the pool of economic resources available for private investment and long-term growth. Economic growth can help, but given the size of our projected fiscal gap we will not be able to simply grow our way out of the problem. Closing the current long-term fiscal gap would require sustained economic growth far beyond that experienced in U.S. economic history since World War II. Tough choices are inevitable, and the sooner we act the better. Some of the benefits of early action—and the costs of delay—can be illustrated using the 2004 Social Security Trustees’ intermediate projections. 
Figure 5 compares what it would take to keep Social Security solvent through 2078 by either raising payroll taxes or reducing benefits. If we did nothing until 2042—the year SSA estimates the Trust Funds will be exhausted—achieving actuarial balance would require changes in benefits of 30 percent or changes in taxes of 43 percent. As figure 5 shows, earlier action shrinks the size of the necessary adjustment. Both sustainability concerns and solvency considerations drive us to act sooner rather than later. Trust Fund exhaustion may be nearly 40 years away, but the squeeze on the federal budget will begin as the baby boom generation begins to retire. Actions taken today can ease both these pressures and the pain of future actions. Acting sooner rather than later also provides a more reasonable planning horizon for future retirees. The Social Security program’s situation is but one symptom of larger demographic trends that will have broad and profound effects on our nation’s future in other ways as well. As you are aware, Social Security has always been a largely pay-as-you-go system. This means that the system’s financial condition is directly affected by the relative size of the populations of covered workers and beneficiaries. Historically, this relationship has been favorable to the system’s financial condition. Now, however, people are living longer and spending more time in retirement. As shown in figure 6, the U.S. elderly dependency ratio is expected to continue to increase. The proportion of the elderly population relative to the working-age population in the U.S. rose from 13 percent in 1950 to 19 percent in 2000. By 2050, there is projected to be almost 1 elderly dependent for every 3 people of working age—a ratio of 32 percent. Additionally, the average life expectancy of males at birth has increased from 66.6 in 1960 to 74.3 in 2000, with females at birth experiencing a rise from 73.1 to 79.7 over the same period. 
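The dependency-ratio arithmetic above is simple enough to show directly. The following is a minimal sketch in Python; the function name and the population counts are illustrative assumptions, not figures from SSA:

```python
def elderly_dependency_ratio(elderly_pop: float, working_age_pop: float) -> float:
    """Elderly dependency ratio: the elderly population expressed as a
    percentage of the working-age population."""
    return 100.0 * elderly_pop / working_age_pop

# Illustrative counts in millions; any two populations in roughly a
# 1-to-3 proportion reproduce the projected 2050 ratio of about 32 percent.
print(round(elderly_dependency_ratio(80.0, 250.0)))   # 32
print(round(elderly_dependency_ratio(19.0, 100.0)))   # 19, matching the 2000 ratio
```

Note that the ratio is measured against the working-age population, not the total population, which is why "almost 1 dependent for every 3 workers" corresponds to roughly 32 percent rather than 25 percent.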
As general life expectancy has increased in the United States, there has also been an increase in the number of years spent in retirement. Improvements in life expectancy have extended the average amount of time spent by workers in retirement from 11.5 years in 1950 to 18 years for the average male worker as of 2003. A falling fertility rate is the other principal factor underlying the growth in the elderly’s share of the population. In the 1960s, the fertility rate, which is the average number of children that would be born to women during their childbearing years, was an average of 3 children per woman. Today it is a little over 2, and by 2030 it is expected to fall to 1.95—a rate that is below what it takes to maintain a stable population. Taken together, these trends threaten the financial solvency and sustainability of Social Security. The combination of these factors means that annual labor force growth will begin to slow after 2010 and by 2025 is expected to be less than a fifth of what it is today. (See fig. 7.) Relatively fewer workers will be available to produce the goods and services that all will consume. Without a major increase in productivity or increases in immigration, low labor force growth will lead to slower growth in the economy and to slower growth of federal revenues. This in turn will only accentuate the overall pressure on the federal budget. The aging of the labor force and the reduced growth in the number of workers will have important implications for the size and composition of the labor force, as well as the characteristics of many jobs, throughout the 21st century. The U.S. workforce of the 21st century will be facing a very different set of opportunities and challenges than that of previous generations. Increased investment could increase the productivity of workers and spur economic growth. However, increasing investment depends on national saving, which remains at historically low levels. 
Historically, the most direct way for the federal government to increase saving has been to reduce the deficit (or run a surplus). Although the government may try to increase personal saving, results of these efforts have been mixed. For example, even with the preferential tax treatment granted since the 1970s to encourage retirement saving, the personal saving rate has steadily declined. Even if economic growth increases, the structure of retirement programs and historical experience in health care cost growth suggest that higher economic growth results in a generally commensurate growth in spending for these programs in the long term. (Initial Social Security benefits are indexed to nominal wage growth, so faster wage growth results in higher benefits over time.) In recent years, personal saving by households has reached record lows while at the same time the federal budget deficit has climbed. (See fig. 8.) Accordingly, national saving has diminished, but the economy has continued to grow in part because more and better investments were made. That is, each dollar saved bought more investment goods, and a greater share of saving was invested in highly productive information technology. The economy has also continued to grow because the United States was able to invest more than it saved by borrowing abroad, that is, by running a current account deficit. However, a portion of the income generated by foreign-owned assets in the United States must be paid to foreign lenders. National saving is the only way a country can have its capital and own it too. Low national saving today has consequences for the living standards of future generations. The financial burdens facing the smaller cohort of future workers in an aging society would most certainly be lessened if the economic pie were enlarged. This is no easy challenge, but in a very real sense, our fiscal decisions affect the longer-term economy through their effects on national saving. The persistent U.S. 
current account deficits of recent years have translated into a rising level of indebtedness to other countries. However, many other nations currently financing investment in the United States also will face aging populations and declining national saving, so relying on foreign savings to finance a large share of U.S. domestic investment or federal borrowing is not a viable strategy in the long run. As figure 4 showed, over the long term Medicare and Medicaid will dominate the federal government’s future fiscal outlook. Medicare growth rates reflect not only a burgeoning beneficiary population but also the escalation of health care costs at rates well exceeding general rates of inflation. Health care generally presents not only a much greater but a more complex challenge than Social Security. The structural changes needed to address health care cost growth will take time to develop, and the process of reforming health care is likely to be an incremental one. While the long-term fiscal challenge cannot be successfully addressed without addressing Medicare and Medicaid, federal health spending trends should not be viewed in isolation from the health care system as a whole. For example, Medicare and Medicaid cannot grow over the long term at a slower rate than costs in the rest of the health care system without resulting in a two-tier health care system. Such an approach could squeeze providers, who in turn might seek to recoup costs from other payers elsewhere in the health care system. Rather, in order to address the long-term fiscal challenge, it will be necessary to find approaches that deal with health care cost growth in the overall health care system. Although health care spending is the largest driver of the long-term fiscal outlook, this does not mean that Social Security reform should be postponed until after health care is addressed. On the contrary, it argues for moving ahead on Social Security now. 
The outlines of Social Security reform have already been articulated in many Social Security reform proposals. These approaches and the specific elements of reform are well known and have been the subject of many analyses, including GAO reports and testimonies. Reform approaches already put forward can serve as a starting point for deliberations. As important as financial stability may be for Social Security, it cannot be the only consideration. As a former public trustee of Social Security and Medicare, I am well aware of the central role these programs play in the lives of millions of Americans. Social Security remains the foundation of the nation’s retirement system. It is also much more than just a retirement program; it pays benefits to disabled workers and their dependents, spouses and children of retired workers, and survivors of deceased workers. In 2004, Social Security paid almost $493 billion in benefits to more than 47 million people. Since its inception, the program has successfully reduced poverty among the elderly. In 1959, 35 percent of the elderly were poor. In 2000, about 8 percent of beneficiaries aged 65 or older were poor, and 48 percent would have been poor without Social Security. It is precisely because the program is so deeply woven into the fabric of our nation that any proposed reform must consider the program in its entirety, rather than one aspect alone. To assist policymakers, GAO has developed a broad framework for evaluating reform proposals that considers not only solvency but other aspects of the program as well. Our criteria aim to balance financial and economic considerations with benefit adequacy and equity issues and the administrative challenges associated with various proposals. The analytic framework GAO has developed to assess proposals comprises three basic criteria: Financing Sustainable Solvency—the extent to which a proposal achieves sustainable solvency and how it would affect the economy and the federal budget. 
Our sustainable solvency standard encompasses several different ways of looking at the Social Security program’s financing needs. While a 75-year actuarial balance has generally been used in evaluating the long-term financial outlook of the Social Security program and reform proposals, it is not sufficient in gauging the program’s solvency after the 75th year. For example, under the trustees’ intermediate assumptions, each year the 75-year actuarial period changes, and a year with a surplus is replaced by a new 75th year that has a significant deficit. As a result, changes made to restore trust fund solvency only for the 75-year period can result in future actuarial imbalances almost immediately. Reform plans that lead to sustainable solvency would be those that consider the broader issues of fiscal sustainability and affordability over the long term. Specifically, a standard of sustainable solvency also involves looking at (1) the balance between program income and costs beyond the 75th year and (2) the share of the budget and economy consumed by Social Security spending. Balancing Adequacy and Equity—the relative balance struck between the goals of individual equity and income adequacy. The current Social Security system’s benefit structure attempts to strike a balance between these two goals. From the beginning, Social Security benefits were set in a way that focused especially on replacing some portion of workers’ preretirement earnings. Over time other changes were made that were intended to enhance the program’s role in helping ensure adequate incomes. Retirement income adequacy, therefore, is addressed in part through the program’s progressive benefit structure, providing proportionately larger benefits to lower earners and certain household types, such as those with dependents. Individual equity refers to the relationship between contributions made and benefits received. This can be thought of as the rate of return on individual contributions. 
Balancing these seemingly conflicting objectives through the political process has resulted in the design of the current Social Security program and should still be taken into account in any proposed reforms. Implementing and Administering Proposed Reforms—how readily a proposal could be implemented, administered, and explained to the public. Program complexity makes implementation and administration both more difficult and harder to explain. Some degree of implementation and administrative complexity arises in virtually all proposed changes to Social Security, even those that make incremental changes in the already existing structure. Although these issues may appear technical or routine on the surface, they are important issues because they have the potential to delay—if not derail—reform if they are not considered early enough for planning purposes. Moreover, issues such as feasibility and cost can, and should, influence policy choices. Continued public acceptance of and confidence in the Social Security program require that any reforms and their implications for benefits be well understood. This means that the American people must understand why change is necessary, what the reforms are, why they are needed, how they are to be implemented and administered, and how they will affect their own retirement income. All reform proposals will require some additional outreach to the public so that future beneficiaries can adjust their retirement planning accordingly. The more transparent the implementation and administration of reform, and the more carefully such reform is phased in, the more likely it will be understood and accepted by the American people. The weight that different policymakers place on different criteria will vary, depending on how they value different attributes. 
For example, if offering individual choice and control is less important than maintaining replacement rates for low-income workers, then a reform proposal emphasizing adequacy considerations might be preferred. As they fashion a comprehensive proposal, however, policymakers will ultimately have to balance the relative importance they place on each of these criteria. As we have noted in the past before this committee and elsewhere, a comprehensive evaluation is needed that considers a range of effects together. Focusing on comprehensive packages of reforms will enable us to foster credibility and acceptance. This will help us avoid getting mired in the details and losing sight of important interactive effects. It will help build the bridges necessary to achieve consensus. A variety of proposals have been offered to address Social Security’s financial problems. Many proposals contain reforms that would alter benefits or revenues within the structure of the current defined benefits system. Some would reduce benefits by modifying the benefit formula (such as increasing the number of years used to calculate benefits or using price indexing instead of wage indexing), reduce cost-of-living adjustments (COLA), raise the normal and/or early retirement ages, or revise dependent benefits. Some of the proposals also include measures or benefit changes that seek to strengthen progressivity (e.g., replacement rates) in an effort to mitigate the effect on low-income workers. Others have proposed revenue increases, including raising the payroll tax or expanding the Social Security taxable wage base that finances the system; increasing the taxation of benefits; or covering those few remaining workers not currently required to participate in Social Security, such as older state and local government employees. A number of proposals also seek to restructure the program through the creation of individual accounts. 
Under a system of individual accounts, workers would manage a portion of their own Social Security contributions to varying degrees. This would expose workers to a greater degree of risk in return for both greater individual choice in retirement investments and the possibility of a higher rate of return on contributions than available under current law. There are many different ways that an individual account system could be set up. For example, contributions to individual accounts could be mandatory or they could be voluntary. Proposals also differ in the manner in which accounts would be financed, the extent of choice and flexibility concerning investment options, the way in which benefits are paid out, and the way the accounts would interact with the existing Social Security program—individual accounts could serve either as an addition to or as a replacement for part of the current benefit structure. In addition, the timing and impact of individual accounts on the solvency, sustainability, adequacy, equity, net savings, and rate of return associated with the Social Security system vary depending on the structure of the total reform package. Individual accounts by themselves will not lead the system to sustainable solvency. Achieving sustainable solvency requires more revenue, lower benefits, or both. Furthermore, incorporating a system of individual accounts may involve significant transition costs. These costs come about because the Social Security system would have to continue paying out benefits to current and near-term retirees concurrently with establishing new individual accounts. Individual accounts can contribute to sustainability because they could provide a mechanism to prefund retirement benefits that would be immune to demographic booms and busts. However, if such accounts are funded through borrowing, no such prefunding is achieved. 
An additional important consideration in adopting a reform package that contains individual accounts would be the level of benefit adequacy achieved by the reform. To the extent that benefits are not adequate, the government may eventually have to provide additional revenues to make up the difference. Also, some degree of implementation and administrative complexity arises in virtually all proposed changes to Social Security. The greatest potential implementation and administrative challenges are associated with proposals that would create individual accounts. These include, for example, issues concerning the management of the information and money flows needed to maintain such a system, the degree of choice and flexibility individuals would have over investment options and access to their accounts, investment education and transitional efforts, and the mechanisms that would be used to pay out benefits upon retirement. The federal Thrift Savings Plan (TSP) could serve as a model for providing a limited set of options that reduce risk and administrative costs while still providing some degree of choice. However, a system of accounts that spans the entire national workforce and millions of employers would be significantly larger and more complex than TSP or any other system we have in place today. Another important consideration for Social Security reform is assessing a proposal’s effect on national saving. Individual account proposals that fund accounts through redirection of payroll taxes or general revenue do not increase national saving on a first order basis. The redirection of payroll taxes or general revenue reduces government saving by the same amount that the individual accounts increase private saving. 
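The first order effect described above is an accounting identity, and a small sketch makes it concrete. All dollar amounts below are hypothetical, chosen only to illustrate the offset:

```python
def national_saving(government_saving: float, private_saving: float) -> float:
    """National saving is the sum of government saving and private saving."""
    return government_saving + private_saving

# Hypothetical amounts, in billions of dollars.
govt, private = -400.0, 150.0     # a government deficit plus some private saving
redirected = 100.0                # payroll taxes redirected into individual accounts

before = national_saving(govt, private)
# The redirection lowers government saving and raises private saving
# by exactly the same amount, so the sum is unchanged.
after = national_saving(govt - redirected, private + redirected)
print(before, after)  # -250.0 -250.0
```

The sketch captures only the first order effect; as the following discussion notes, second order behavioral responses by the government and households could push national saving in either direction.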
Beyond these first order effects, the actual net effect of a proposal on national saving is difficult to estimate due to uncertainties in predicting changes in future spending and revenue policies of the government as well as changes in the saving behavior of private households and individuals. For example, the lower surpluses and higher deficits that result from redirecting payroll taxes to individual accounts could lead to changes in federal fiscal policy that would increase national saving. On the other hand, households may respond by reducing their other saving in response to the creation of individual accounts. No expert consensus exists on how Social Security reform proposals would affect the saving behavior of private households and businesses. Finally, the effort to reform Social Security is occurring as our nation’s private pension system is also facing serious challenges. Only about half of the private sector workforce is covered by a pension plan. A number of large underfunded traditional defined benefit plans—plans where the employer bears the risk of investment—have been terminated by bankrupt firms, including household names like Bethlehem Steel, US Airways, and Polaroid. These terminations have resulted in thousands of workers losing promised benefits and have saddled the Pension Benefit Guaranty Corporation, the government corporation that partially insures certain defined benefit pension benefits, with billions of dollars in liabilities that threaten its long-term solvency. Meanwhile, the number of traditional defined benefit pension plans continues to decline as employers increasingly offer workers defined contribution plans like 401(k) plans where, like individual accounts, workers face the potential of both greater return and greater risk. These challenges serve to reinforce the imperative to place Social Security on a sound financial footing which provides a foundation of certain and secure retirement income. 
Regardless of what type of Social Security reform package is adopted, continued confidence in the Social Security program is essential. This means that the American people must understand why change is necessary, what the reforms are, why they are needed, how they are to be implemented and administered, and how they will affect their own retirement income. All reform proposals will require some additional outreach to the public so that future beneficiaries can adjust their retirement planning accordingly. The more transparent the implementation and administration of reform, and the more carefully such reform is phased in, the more likely it will be understood and accepted by the American people. Social Security does not face an immediate crisis but it does face a large and growing financial problem. In addition, our Social Security challenge is only part of a much broader fiscal challenge that includes, among other things, the need to reform Medicare, Medicaid, and our overall health care system. Today we have an opportunity to address Social Security as a first step toward improving the nation’s long-term fiscal outlook. Steps to reform our federal health care system are likely to be much more difficult. They are also likely to require a series of incremental actions over an extended period of time. As I have said before, the future sustainability of programs is the key issue policy makers should address—i.e., the capacity of the economy and budget to afford the commitment over time. Absent substantive reform, these important federal programs will not be sustainable. Furthermore, absent reform, younger workers will face dramatic benefit reductions or tax increases that will grow over time. Many retirees and near retirees fear cuts that would affect them in the immediate future while young people believe they will get little or no Social Security benefits in the longer term. 
I believe that it is possible to reform Social Security in a way that will ensure the program’s solvency, sustainability, and security while exceeding the expectations of all generations of Americans.
Social Security is the foundation of the nation's retirement income system, helping to protect the vast majority of American workers and their families from poverty in old age. However, it is much more than a retirement program, providing millions of Americans with disability insurance and survivors' benefits. As the baby boom generation retires and given longer life spans and lower birth rates, Social Security's financing shortfall will grow. The current gap between promised and funded benefits is $3.7 trillion and is growing daily. The Chairman of the House Budget Committee asked GAO to discuss the need for Social Security reform. This testimony addresses the nature of Social Security's long-term financing problem and why it is preferable for Congress to take action sooner rather than later. This testimony also notes the broader context in which reform proposals should be considered and the criteria that GAO has recommended as a basis for analyzing any Social Security reform proposals. Although the Social Security system is not in crisis today, it faces a serious solvency and sustainability challenge that grows as time passes. If we did nothing until 2042, achieving actuarial balance would require a 30-percent reduction in benefits or a 43-percent increase in payroll taxes. Furthermore, Social Security's problems are a subset of our nation's overall fiscal challenge. Absent reform, the nation will ultimately have to choose among escalating federal deficits and debt, huge tax increases, and/or dramatic budget cuts. As GAO's long-term budget simulations show, substantive reform of Social Security and our major federal health programs (e.g., Medicare and Medicaid) is critical to saving our fiscal future. Taking action soon would also serve to reduce the amount of change needed to ensure that Social Security is solvent, sustainable, and secure for current and future generations. 
Acting sooner would also serve to improve the federal government's credibility with the markets and the confidence of the American people in the government's ability to address long-range challenges before they reach crisis proportions. However, financial stability should not be the only consideration when evaluating reform proposals. Other important objectives, such as balancing the adequacy and equity of the benefits structure and various administrative and operational issues need to be considered. Furthermore, any changes to Social Security should be considered in the context of the broader challenges facing our nation, such as the changing nature of the private pension system, escalating health care costs, and the need to reform Medicare and Medicaid.
According to the Navy’s fiscal year 2003 budget, the Navy Working Capital Fund will earn about $20.8 billion in revenue during fiscal year 2003. The Navy Working Capital Fund consists of the following six major activity groups: depot maintenance, transportation, base support, information services, supply management, and research and development. The Navy estimates that the research and development activity group will earn about $7.7 billion during fiscal year 2003, making it the largest activity group in terms of the dollar amount of revenue earned. This activity group includes the following subactivity groups: (1) the Naval Surface Warfare Center, (2) the Naval Air Warfare Center, (3) the Naval Undersea Warfare Center, (4) the Naval Research Laboratory, and (5) the Space and Naval Warfare Systems (SPAWAR) Centers. The SPAWAR systems centers are the Navy’s full-spectrum research, development, test and evaluation, engineering, and fleet support centers for command, control, and communication systems; ocean surveillance; and the integration of those systems. The systems centers (1) support the fleet’s missions and capabilities by providing capable and ready command and control systems for the Navy and (2) provide the innovative scientific and technical expertise and facilities necessary to ensure that the Navy can develop, acquire, and maintain the warfare systems needed to meet requirements. The SPAWAR systems centers’ primary locations are in San Diego, California, and Charleston, South Carolina. As part of the Navy Working Capital Fund, the SPAWAR systems centers rely on sales revenue rather than direct congressional appropriations to finance their operations. DOD policy requires working capital fund activity groups to (1) establish prices that allow them to recover their expected costs from their customers and (2) operate on a break-even basis over time—that is, neither make a profit nor incur a loss. 
DOD policy also requires the activity groups to establish their sales prices prior to the start of each fiscal year and to apply these predetermined or “stabilized” prices to most orders received from customers during the year—regardless of when the work is actually accomplished or what costs are actually incurred. Customers use appropriated funds to finance the orders placed with the SPAWAR systems centers. When a systems center accepts the customer order, its own obligational authority is increased and the customer’s appropriation is obligated by the amount of the order. The working capital fund activity incurs obligations for costs, such as material and labor, to perform the work. In addition to receiving orders from customers to do work as part of the working capital fund, SPAWAR systems centers also award hundreds of millions of dollars in contracts with the private sector for work to be performed for the centers’ customers. These contracts and related work are not included in the working capital fund from a financial standpoint because the contractors directly bill the customers for work performed and the customers directly pay the contractors. DOD and the Navy refer to this process of awarding contracts for customers as direct cite orders, since the SPAWAR systems centers cite the customers’ appropriation(s) on the contracts. The customers’ funds are obligated when the systems centers award the contracts with contractors. Carryover is the dollar value of work that has been ordered and funded (obligated) by customers but not yet completed by working capital fund activities at the end of the fiscal year. Carryover consists of both the unfinished portion of work started but not yet completed, as well as requested work that has not yet commenced. To manage carryover, DOD converted the dollar amount of carryover to equivalent months of work. This was done to put the magnitude of the carryover in proper perspective. 
For example, if an activity group performs $100 million of work in a year and has $100 million in carryover at year-end, it would have 12 months of carryover. However, if another activity group performs $400 million of work in a year and has $100 million in carryover at year-end, this group would have 3 months of carryover. The congressional defense committees and DOD have acknowledged that some carryover is necessary at fiscal year-end if working capital funds are to operate efficiently and effectively. In 1996, DOD established a 3-month carryover standard for all the working capital fund activities except for the contract portion of the Air Force depot maintenance activity group. In May 2001, we reported that DOD did not have a basis for its carryover standard and recommended that DOD determine the appropriate carryover standard for the depot maintenance, ordnance, and research and development activity groups. Based on our recommendation, in December 2002, DOD revised its carryover policy for working capital fund activities. Under the revised method, DOD eliminated the 3-month standard, and the allowable amount of carryover is to be based on the overall disbursement rate of the customers' appropriations financing the work. Too little carryover could result in some activity groups not having work to perform at the beginning of the fiscal year, resulting in the inefficient use of personnel. On the other hand, too much carryover could result in an activity group receiving funds from customers in one fiscal year but not performing the work until well into the next fiscal year or subsequent years. By minimizing the amount of carryover, DOD can use its resources most effectively and minimize the "banking" of funds for work and programs to be performed in subsequent years.
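The months-of-carryover conversion described above is simple arithmetic. The following Python sketch is purely illustrative (the function and variable names are ours, not DOD's) and reproduces the two examples from the text:

```python
def months_of_carryover(annual_workload, yearend_carryover):
    """Convert a year-end dollar carryover balance into equivalent
    months of work, given the dollar value of work performed per year."""
    return 12 * yearend_carryover / annual_workload

# The two examples from the text:
print(months_of_carryover(100_000_000, 100_000_000))  # 12.0 months
print(months_of_carryover(400_000_000, 100_000_000))  # 3.0 months
```

The same dollar balance thus translates into very different months of carryover depending on how much work the activity performs in a year.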
For fiscal years 1998 through 2002, SPAWAR systems centers' budgeted gross carryover was significantly less than reported actual gross carryover, thereby providing decision makers, including the Office of the Under Secretary of Defense (Comptroller) and the congressional defense committees, with misleading carryover information. These decision makers use carryover information to determine whether the SPAWAR systems centers have too much carryover. If the systems centers have too much carryover, the decision makers may reduce the customers' budgets and use these resources for other purposes. For example, during its review of the fiscal year 2003 budget, the Office of the Under Secretary of Defense (Comptroller) noted that the Navy research and development activities' carryover had been steadily increasing, from about $2.2 billion in fiscal year 1997 to about $3.4 billion in fiscal year 2003. Since a significant portion of the carryover was related to work that was to be contracted out, the Office of the Under Secretary of Defense (Comptroller) reduced the customer funding by $161.1 million because these efforts could be funded in fiscal year 2004 with no impact on performance. SPAWAR systems centers' reported actual year-end gross carryover was substantially greater than their budgeted gross carryover. Table 1 shows that from fiscal year 1998 through fiscal year 2002 reported actual gross carryover exceeded budgeted gross carryover, and that the difference increased from about $153 million to about $286 million. The Navy's budget requests consistently underestimated SPAWAR systems centers' gross carryover, in part, because the Navy consistently underestimated the amount of orders to be received from customers by hundreds of millions of dollars. Table 2 shows that the difference between budgeted and reported actual orders increased from about $352 million (39 percent) in fiscal year 1998 to about $1.1 billion (88 percent) in fiscal year 2002.
Since orders received from customers are the major source of funds for SPAWAR and one of the key factors in determining the amount of carryover at fiscal year-end, it is critical that the Navy have accurate budget estimates of the amount of orders to be received from customers. However, for fiscal years 2000, 2001, and 2002, actual orders exceeded budgeted orders by at least 68 percent each year. The data in table 2 indicate that the SPAWAR systems centers' customers have not accurately estimated the amount of orders they will place with the systems centers. Customers determine and justify their anticipated requirements for goods and services and the levels of performance they require from the systems centers to fulfill mission objectives. Our analysis of budget and accounting reports that provide information on customer orders shows that orders financed with three appropriations made up a large part of the differences in fiscal years 2000, 2001, and 2002. The appropriations used by customers to finance 49 percent to 67 percent of the differences for these 3 fiscal years were the Other Procurement, Navy appropriation; the Research, Development, Test, and Evaluation, Defense appropriation; and the Research, Development, Test, and Evaluation, Navy appropriation. Officials from the Charleston and San Diego Systems Centers and SPAWAR headquarters stated, and our work confirmed, that customers have historically understated their budget estimates of orders to be received by the SPAWAR working capital fund. They stated that the systems centers' budgets for orders are based on what the customers tell them their requirements will be for a particular fiscal year. However, they also told us that customers are hesitant to make a full commitment to the estimated amount of work that will need to be performed.
SPAWAR and Navy headquarters budget officials acknowledged that the SPAWAR systems centers' budgets have consistently understated gross carryover and orders received from customers (claimants). They also stated that the dollar amount of orders that the systems centers receive from customers must match the dollar amount of orders that customers submit in their appropriated fund budgets. Customers only record in their budgets those orders that they will be sending directly to the systems centers. If a customer initially allocates budgeted funds to an activity not related to the working capital fund—a third party—and the third party then places the order with a SPAWAR systems center, the customer's budget reflects only that these funds went to a third party. This results in the amount of budgeted orders that the systems centers receive from customers being understated. Navy headquarters officials stated that this is not an easy problem to resolve because there are many customers, no one person or office is responsible for fixing the problem, and it is hard to pinpoint which customers are not budgeting correctly. Navy headquarters budgeting officials also stated that the fiscal year 2001 and 2002 budgets further understated gross carryover and orders for the following three reasons. First, the Naval Computers and Telecommunications Command merged with SPAWAR, which resulted in about $125 million more in orders being received in fiscal year 2001 than was reflected in the SPAWAR systems centers' budget. Second, the Navy changed its policy on work performed on certain types of work orders placed with the San Diego Systems Center. As a result, customers placed more orders for work that was contracted out by the working capital fund than was originally budgeted for in fiscal years 2001 and 2002.
Third, the SPAWAR systems centers received $166.7 million in orders financed by the Defense Emergency Response Fund in fiscal year 2002 that was not reflected in the SPAWAR systems centers’ budget. These funds were provided via a supplemental appropriation. Navy headquarters officials were aware of this budgeting problem and issued guidance in March 2002 on preparing the fiscal 2004/2005 budget estimates that stressed the importance of customers accurately preparing budget estimates for orders placed with the Navy Working Capital Fund, including the SPAWAR systems centers. The guidance also stated that (1) it was imperative that all funds to be sent to the Navy Working Capital Fund be accurately reflected in the budget and (2) customers have historically underreported the funds to be placed with the Navy Working Capital Fund (particularly with the research and development business area that includes the SPAWAR systems centers) and overreported the use of these funds in other areas. In addition to understating budgeted gross carryover, SPAWAR systems centers also consistently understated their reported actual carryover. Inaccurate carryover information results in the Congress and DOD officials not having the information they need to perform their oversight responsibilities, including reviewing DOD’s budget. Navy reports show that the systems centers’ fiscal year-end carryover balances for fiscal years 1998 through 2002 did not exceed DOD’s 3-month carryover standard. However, we found that the systems centers’ reported carryover balances were understated because (1) DOD’s guidance for calculating the number of months of carryover allowed this to happen and (2) the systems centers used accounting entries to manipulate customer work orders at year-end to help reduce reported carryover below the 3-month standard. 
Prior to 1996, if working capital fund activity groups' budgets projected more than a 3-month level of carryover, their customers' budgets could be, and sometimes were, reduced by the Office of the Under Secretary of Defense (Comptroller) and/or congressional committees. Because of the military services' concerns about (1) the methodology used to compute the months of carryover and (2) the reductions that were being made to customer budgets because of excess carryover, DOD performed a joint review of carryover in 1996 to determine if the 3-month standard should be revised. Based on the joint review, DOD decided to retain the 3-month carryover standard for all working capital fund activity groups except Air Force contract depot maintenance. Furthermore, as a result of the review and concerns expressed by the Navy, DOD also approved several policy changes that had the effect of increasing the carryover standard for all working capital fund activities. Specifically, under the policy implemented after the 1996 review, certain categories of orders, such as those from non-DOD customers, and contractual obligations, such as SPAWAR systems centers' contracts with private sector firms for research and development work, can be excluded from the carryover balance that is used to determine whether the carryover standard has been exceeded. These policy changes were documented in an August 2, 1996, DOD decision paper that provided the following formula for calculating the number of months of carryover. (See fig. 1.) DOD's 1996 decision to allow certain categories of orders to be excluded (adjustments) from reported gross carryover has had a significant impact on SPAWAR systems centers' reported carryover, particularly the adjustment for contractual obligations.
As table 3 shows, these adjustments have allowed the systems centers to significantly reduce actual reported gross carryover by hundreds of millions of dollars, resulting in reported carryover below the 3-month standard. As discussed below, we do not agree with how the Navy interpreted DOD’s guidance for using contractual obligations and related revenue in calculating carryover. Our analysis of the systems centers’ adjustments to their carryover amounts shown in table 3 found that contractual obligations accounted for 75 percent to 89 percent of the dollar adjustments made. In May 2001, we reported that the months of carryover reported by Navy activity groups, which include the SPAWAR systems centers, would more accurately reflect the actual backlog of in-house work if adjustments for contract obligations affected both contract carryover and contract revenue. As shown in figure 1, DOD’s formula for calculating months of carryover is based on the ratio of adjusted orders carried over to revenue. The formula specifies that gross carryover should be reduced by the amount of contract obligations. However, DOD did not provide clear guidance on whether downward adjustments for the revenue associated with contract services should also be made. Unless this is done, the number of months of reported carryover will be understated. In our May 2001 report we recommended, among other things, that the revenue used in calculating months of carryover be adjusted (reduced) for revenue earned for work performed by contractors. However, as discussed below, until recently DOD had not changed its policy for calculating carryover. As a result, the Navy did not adjust the revenue amount used in the denominator of the calculation and, therefore, continued to understate its reported carryover in its budget submissions to the Congress through fiscal year 2003. 
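To see why adjusting only the numerator matters, consider a purely hypothetical calculation (the dollar figures below are invented for illustration and are not SPAWAR's actual balances). Subtracting contractual obligations from carryover while dividing by total revenue yields far fewer reported months than a calculation that also removes contract-related revenue from the denominator:

```python
def reported_months(gross_carryover, contract_obligations, total_revenue):
    """DOD's pre-2003 method: subtract contractual obligations from
    the carryover numerator but divide by total revenue."""
    return 12 * (gross_carryover - contract_obligations) / total_revenue

def consistent_months(gross_carryover, contract_obligations,
                      total_revenue, contract_revenue):
    """The adjustment GAO recommended: also remove contract-related
    revenue from the denominator."""
    return 12 * (gross_carryover - contract_obligations) / (total_revenue - contract_revenue)

# Hypothetical activity: $900M gross carryover, $600M of it already on
# contract, $1.2B total revenue of which 63 percent is contract-related.
print(round(reported_months(900e6, 600e6, 1200e6), 1))                    # 3.0 months
print(round(consistent_months(900e6, 600e6, 1200e6, 0.63 * 1200e6), 1))  # 8.1 months
```

Under these assumed figures, the activity would appear to meet the 3-month standard even though its in-house backlog, measured consistently, is well above it.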
Navy officials informed us that they used total revenue in their calculation because total revenue represents the full operating capability of a given activity group to accomplish a full year's level of workload. Further, even though Navy officials acknowledged that the revenue amount used in the calculation includes revenue earned from contracts, they stated that the reason for not removing contract-related revenue from the denominator of the calculation was that the numerator of the calculation includes carryover (funds) related to work for which contracts would eventually be awarded but had not yet been awarded at fiscal year-end. In addition, Navy officials told us that the accounting systems cannot readily break out what portion of the total revenue amount is contract-related. They further told us that the revenue information can be extracted from the system, but doing so involves a great deal of work to develop the program(s) necessary to obtain the information. When the Navy reduces the dollar amount of carryover (the numerator) by the amount of contractual obligations but does not reduce the revenue amount (the denominator) for revenue associated with contracts, its use of adjustments in the carryover formula is inconsistent. Because the Navy cannot readily determine the amount of contract-related revenue, we asked SPAWAR headquarters to estimate what the amount would be for the systems centers based on the same criteria they use to determine the dollar amount of contractual obligations to be deducted in the carryover calculation. SPAWAR's estimate shows that 63 percent of the total revenue amount used in calculating the SPAWAR systems centers' number of months of actual carryover reported for fiscal year 2002 is related to revenue associated with contractual services.
By not reducing total revenue used in the calculation for revenue related to work performed by contractors, the systems centers’ reported months of carryover for that fiscal year were understated. In response to our May 2001 report, the Under Secretary of Defense (Comptroller) agreed that the methodology for calculating carryover needed to be revised. In December 2002, the Under Secretary of Defense (Comptroller) issued new guidance on carryover for working capital fund activities. Under the revised methodology, the formula shown in figure 1 has been eliminated and, therefore, working capital fund activities can no longer reduce reported carryover by the amount of their contractual obligations. DOD adopted the revised methodology for the Defense Working Capital Fund fiscal year 2004 budget estimates, but DOD has not yet issued written procedures to ensure that the services consistently implement the new policy. DOD officials informed us that they are developing the procedures and will update the appropriate regulations in 2004. We did not evaluate DOD’s revised carryover policy. We also found that the systems centers reduced reported carryover by simply making accounting entries that took work to be performed by the working capital fund and turned it into work to be performed outside the working capital fund. Customer work that is performed by the working capital fund is referred to as reimbursable work. Customer work that is not performed by the working capital fund is referred to as direct cite work. Under the direct cite method of performing work, the working capital fund acts as an agent to get the work done through a private sector contractor. Customer funds that finance work done on a direct cite basis are not included in the working capital fund. Instead, the customer uses the direct cite funds to directly pay private sector contractors for the work performed rather than reimbursing or paying the working capital fund. 
Because the funds for direct cite work are not part of the working capital fund, there is no carryover associated with this work. Therefore, the work is not subject to DOD's 3-month carryover standard. The two SPAWAR systems centers made some accounting entries at fiscal year-end that moved customer orders out of the working capital fund for the sole purpose of reducing reported carryover below the 3-month standard, which understated the amount of carryover that SPAWAR reported to the Navy and DOD. They then reversed these accounting entries at the beginning of the next fiscal year. Specifically, the systems centers did this at fiscal year-end 2000 for customer orders totaling at least $38 million and at fiscal year-end 2001 for orders totaling at least $50 million. SPAWAR systems centers' officials acknowledged that these accounting adjustments were made at fiscal year-end to reduce reported carryover. The officials told us that this has been a long-standing practice and was used as a "tool" to manage reported carryover. For example, comptroller officials at one systems center told us that as the fiscal year-end grew near, they had a good idea of how much they needed to move from reimbursable to direct cite in order to get below the 3-month carryover standard. At year-end, if it was determined that they had moved more funds than needed to get below the standard, they would move the excess back to reimbursable before the accounting period was officially closed. We do not view these actions as a tool for managing workload as reflected by the reported carryover but as a misrepresentation of actual carryover balances intended to mislead decision makers, including DOD budget officials and the Congress. After we discussed this practice with SPAWAR headquarters officials, they issued guidance in September 2002 prohibiting the use of reimbursable/direct cite accounting adjustments to mask year-end carryover balances.
When we discussed this practice with Navy headquarters and DOD officials, they told us that they were not aware that the systems centers were doing this and that they did not agree with the practice. In addition to understating budgeted and reported actual carryover information, the two SPAWAR systems centers' actual carryover data that were reported to the Congress as part of the President's budget were based on some unreliable underlying financial data. Although many factors could have contributed to this data problem, a primary cause was that the two centers had not fully complied with DOD guidance that required them and all other DOD fund holders to conduct tri-annual reviews of their financial data (outstanding commitments, obligations, and accrued expenditures). In fact, although DOD established its tri-annual review requirement in 1996 in order to improve the timeliness and accuracy of its financial data, the Charleston and San Diego Systems Centers did not conduct their first reviews until September 2001 and September 2002, respectively. Further, as of September 2002, the systems centers were fully complying with only a few of the 16 specific tasks that they were required to accomplish during their reviews. As discussed below, three carryover-related problems with the two systems centers' tri-annual reviews are that the centers (1) excluded about 46 percent of their reported actual carryover from their September 2002 tri-annual reviews, (2) were not effectively reviewing dormant obligations and, therefore, were sometimes returning unneeded funds to customers after the funds had expired, and (3) were not effectively reviewing accrued expenditure data (accrued expenditures reduce carryover).
A fourth problem was that neither SPAWAR headquarters nor the systems centers' commanders had developed effective policies and procedures for ensuring that (1) tri-annual reviews are conducted in accordance with DOD guidance and (2) timely and appropriate corrective action is taken on problems that are identified during the reviews. The May 1996 memorandum from the Under Secretary of Defense (Comptroller) that established DOD's tri-annual review requirement noted that the timely review of commitments and obligations to ensure the accuracy and timeliness of financial transactions is a vital phase of financial management. To illustrate this point, the Under Secretary stated that the accurate recording of commitments and obligations (1) forms the basis for formal financial reports issued by the department and (2) provides information for management to make informed decisions regarding resource allocation. Carryover-related budget decisions are examples of resource allocation decisions that require reliable obligation data. This is because there is a direct link between (1) the carryover data that working capital fund activities report to the Congress and DOD decision makers and (2) the obligation data contained in the accounting records of working capital fund activities and their customers. Specifically, when working capital fund activities, such as the SPAWAR systems centers, accept customer orders, obligations are created in the customers' accounting records, and the systems centers become the "fund holders." As work is performed and customers are billed, both the unliquidated obligation balances in the customers' accounting records and the working capital fund activities' reported carryover balances are reduced. DOD's implementing guidance for the tri-annual reviews requires fund holders, such as the two SPAWAR systems centers, to certify that they completed 16 specific tasks during their reviews.
For example, the guidance requires fund holders to confirm, among other things, that they have (1) traced the obligations and commitments that are recorded in their accounting systems back to source documents and (2) conducted adequate follow-up on all dormant obligations and commitments to determine if they are still valid. Additionally, the guidance requires fund holders to (1) identify the problems that were noted during their reviews, (2) advise their higher headquarters—SPAWAR headquarters for the two systems centers—whether, and to what extent, adjustments or corrections to remedy noted problems have been taken, (3) summarize, by type, the actions or corrections remaining to be taken, (4) indicate when such actions/corrections are expected to be completed, and (5) identify the actions that have been taken to preclude identified problems from recurring in the future. Thus, if properly implemented, tri-annual reviews can provide a systematic process that helps fund holders not only improve the reliability of their financial data but also identify and correct the underlying causes of their data problems. As noted previously, DOD established the tri-annual review requirement in May 1996, but the Charleston and San Diego Systems Centers did not conduct their first reviews until September 2001 and September 2002, respectively. Discussions with SPAWAR officials and the centers' financial managers indicated that a lack of management emphasis was the primary reason for this delayed implementation. For example, SPAWAR headquarters officials pointed out that the Navy's implementing guidance was not issued until July 2001 (more than 5 years after DOD established the requirement), and San Diego Systems Center financial managers stated that they were not aware of the tri-annual review requirement until fiscal year 2001.
Further, when Charleston and San Diego financial managers were asked why their centers did not conduct their first tri-annual reviews until the end of fiscal year 2001 and 2002, respectively, they stated that their personnel were busy reconciling data problems that were caused by multiple organizational consolidations and accounting system conversions, and indicated that their personnel did not have time to conduct tri-annual reviews. The SPAWAR systems centers’ reported actual carryover falls into two major categories—obligated carryover and unobligated carryover. Obligated carryover refers to the portion of customer orders for which the systems centers have obligated their own funds. For example, if a customer submits a $1,000 order for engineering services, and a contractor will accomplish 10 percent of the work, then the systems center will award a contract for $100—which will obligate the center’s funds—and the $100 will, therefore, be referred to as obligated carryover. A customer order’s unobligated carryover balance is calculated by subtracting obligated carryover from the total amount remaining on the order—or $900 for this example. As of September 30, 2002, the two SPAWAR systems centers had about $896.1 million of reported actual carryover—$379.5 million of obligated carryover and $516.6 million of unobligated carryover. The distinction between obligated carryover and unobligated carryover is important because (1) neither DOD nor Navy guidance explicitly requires the systems centers to review unobligated carryover during their tri-annual reviews (unless the work is recorded as a commitment in their accounting records) and (2) about $414 million of the systems centers’ September 30, 2002, unobligated carryover was not recorded as a commitment in the centers’ accounting records. 
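Using the figures above, the obligated/unobligated split and the share of carryover outside the reviews' scope reduce to a few lines of arithmetic. The Python sketch below is illustrative only (the function and variable names are ours):

```python
def split_carryover(order_remaining, contracts_awarded):
    """Split the unfilled portion of a customer order into obligated
    carryover (funds the center has put on contract) and unobligated
    carryover (the remainder of the order)."""
    return contracts_awarded, order_remaining - contracts_awarded

# The $1,000 engineering-services order from the text, 10 percent contracted out:
print(split_carryover(1000, 100))  # (100, 900)

# Center-wide totals as of September 30, 2002, in millions of dollars:
obligated, unobligated = 379.5, 516.6
not_committed = 414.0  # unobligated carryover with no recorded commitment
total = obligated + unobligated
print(round(total, 1))                     # 896.1 total reported carryover
print(round(100 * not_committed / total))  # 46 percent outside review scope
```

The last ratio is the source of the "about 46 percent" figure discussed next.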
In other words, even if the tri-annual reviews were performed effectively and in a timely manner, they would not cover about 46 percent of the systems centers' reported actual carryover. DOD guidance does require customers, as part of their tri-annual reviews, to validate the orders they have placed with working capital fund activities because these orders are recorded as obligations in their accounting records, regardless of whether they are obligated or unobligated carryover in the working capital fund activities' records. However, customers have limited visibility over whether the unobligated portion of their funded orders is needed to finance future work, and, therefore, the working capital fund activities are in a better position than the customers to make this determination. If the systems centers were required to review unobligated carryover balances when performing their tri-annual reviews, they could (1) reduce the amount of carryover on their records and (2) better identify unneeded funds and be in a better position to return them to customers before the funds expired so the customers could use them for new obligations. For example, we reviewed 34 customer orders that (1) had $7 million of unobligated carryover balances as of September 30, 2001, and (2) were financed with funds that had already expired as of that date; most of the orders contained unneeded funds that were eventually returned to customers. Our analysis showed that (1) 27 of the orders (about 79 percent) had unneeded funds and (2) $2.9 million, or about 41 percent, of the $7 million in unobligated balances we reviewed represented unneeded funds. Although most of the unneeded funds we identified were eventually returned to customers, in some instances the funds were not returned until long after the funds had expired.
For example, $469,916 of unneeded funds on two Charleston Systems Center orders expired in September 2001 but were not returned to the customer until September 2002—almost 1 year after the funds had expired. Similarly, $71,718 of unneeded funds on a San Diego order expired in September 1998 but were not returned to the customer until December 2002—more than 4 years after the funds had expired. We believe, and a senior DOD accounting official agreed, that the systems centers and other working capital fund activities should be required to validate their unobligated carryover during tri-annual reviews because, as noted previously, they have better visibility over whether unobligated funds will be needed in the future. However, neither center requires its managers to review unobligated carryover during the tri-annual reviews because, as financial managers at one center pointed out, they are concentrating on the requirements explicitly identified in the DOD guidance, and they will add other tasks, such as reviews of unobligated carryover, if and when (1) the guidance is changed or (2) they have the time and resources to do so. A key element of the tri-annual reviews is the requirement to follow up on all obligations that have been dormant for more than 120 days to determine if unused funds are still needed. This task is one of the 16 tri-annual review requirements and is important from the systems centers' perspective because the identification and return of unneeded funds to customers will reduce the centers' reported carryover—thereby reducing the likelihood of customer budget cuts. Additionally, the task is important from the customers' perspective because the funds can be reused for other purposes if they are returned before they expire. However, our analysis of the two centers' financial data and review of individual customer orders showed that neither center was effectively identifying unneeded funds and returning them to customers in a timely manner.
For example, our analysis of the two systems centers' financial data showed that, as of September 30, 2002, the two centers had thousands of obligated carryover balances, valued at more than $7 million, that had not changed for more than a year. Further, some of these dormant balances were financed with customer funds that had long since expired. For example, 165 of the dormant carryover balances were financed with fiscal year 1996 or earlier appropriations. According to a systems center official, the monumental financial workload involved with the acquisition of additional activities and the transition to a consolidated financial accounting system over the past several years greatly hindered the center's efforts to close all expired funding documents and return the unused funds to customers in a timely manner. For example, the official pointed out that the center had almost 13,000 old funding documents needing to be reconciled and closed at the start of fiscal year 2000 because of these problems and that the center was still working on them. At the conclusion of their tri-annual reviews, fund holders are required to certify that they have conducted adequate research on all accrued expenditures that are more than 120 days old to determine if they are valid. This task is important for two reasons. First, large accrued expenditure balances in general, and large dormant accrued expenditure balances in particular, can indicate either serious accounting problems or ineffective procedures for developing accrued expenditure schedules. Second, accrued expenditures reduce reported carryover balances, and overly optimistic accrued expenditure schedules can, therefore, cause reported carryover to understate actual carryover. The task of validating accrued expenditures is especially important for the two SPAWAR systems centers because they had about $673 million of accrued expenditures as of September 30, 2002.
However, the San Diego Systems Center, which had the larger accrued expenditure balance—about $423 million as of September 2002—is currently developing a methodology for validating its accrued expenditures. Further, although the Charleston Systems Center had developed a methodology to review its accrued expenditures, the Charleston Comptroller was concerned about the timeliness and adequacy of these reviews and, therefore, was unwilling to certify that the center adequately reviewed its dormant accrued expenditures. Although the tri-annual review’s tasks related to accrued expenditures focus primarily on accounting problems, reviews of dormant accrued expenditures are also important from a carryover perspective. Overly optimistic accrued expenditure schedules—which are the basis for determining when accrued expenditures will be recorded in the accounting system—can cause reported carryover to understate actual carryover. For example, if a contractor is to perform $600 of work, and an accrued expenditure schedule is based on the assumptions that the work will begin immediately and will be performed at a uniform rate over a 6-month period, then (1) $100 of expenditures will be accrued each month and (2) each accrued expenditure will trigger a $100 customer payment and, in turn, a $100 reduction in the reported carryover. Thus, after 4 months, the reported carryover will be $200, regardless of how much work has actually been accomplished. If the work begins later than expected or if it takes longer than expected to complete, and accrued expenditures are not adjusted accordingly, reported carryover would be understated. Two ways to put the magnitude of the systems centers’ accrued expenditure balances in perspective are to (1) compare the balances with other financial indicators and (2) show their impact on reported carryover. 
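The straight-line accrual schedule in the example above can be checked with a short Python sketch (illustrative only; the function name is ours):

```python
def carryover_under_schedule(order_amount, schedule_months, months_elapsed):
    """Reported carryover implied by a straight-line accrual schedule,
    independent of how much work has actually been accomplished."""
    monthly_accrual = order_amount / schedule_months
    accrued = monthly_accrual * min(months_elapsed, schedule_months)
    return order_amount - accrued

# The $600 contract from the text, accrued uniformly over 6 months:
print(carryover_under_schedule(600, 6, 4))  # 200.0 after 4 months
```

Because the result depends only on the schedule, a contract that slips without a corresponding adjustment to its accruals will show less carryover than actually remains.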
For example, the San Diego Systems Center’s September 2002 accrued expenditure balance of $423 million is the equivalent of about 32 percent of the orders the center received during fiscal year 2002 ($1.315 billion) and about 31 percent of the revenue it received during the year ($1.372 billion). The accrued expenditures allowed the center to reduce its reported carryover at the end of fiscal year 2002 by about 3.7 months. A San Diego Systems Center accounting official acknowledged that the center’s large accrued expenditure balance is a major area of concern. Specifically, this official indicated that the center’s large accrued expenditure balance is caused partly by delays in contractor and interfund billings, but acknowledged that there are other apparent problems that warrant attention. For example, the official said that the $405 million variance between the center’s September 30, 2002, accrued expenditure and accounts payable balances is an apparent problem that should be reviewed. However, the accounting official also pointed out that currently the center cannot analyze its accrued expenditures because its new accounting system, which has been tailored to meet its specific needs and is unique within DOD, cannot provide the data in a format that will allow it to do so. When asked what the San Diego Systems Center is doing to develop the data needed to effectively analyze its accrued expenditure data, the accounting official indicated that the center is developing a “data warehouse.” However, the official acknowledged that (1) they have just begun identifying the specific requirements for the data warehouse, (2) there will be many competing requirements, (3) due to resource constraints, the data warehouse will not be able to satisfy all of the center’s data analysis needs, and (4) they, therefore, do not know when or, for that matter, if they will ever have the data they need to effectively analyze their accrued expenditures. 
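The comparisons above are straightforward ratios. A quick check of the reported figures (dollar amounts in millions, taken from the paragraph above):

```python
# San Diego Systems Center figures from the paragraph above ($ millions).
accrued_expenditures = 423
orders_received_fy2002 = 1315
revenue_fy2002 = 1372

# Accrued expenditures as a share of annual orders and of annual revenue:
print(round(accrued_expenditures / orders_received_fy2002 * 100))  # 32 percent
print(round(accrued_expenditures / revenue_fy2002 * 100))          # 31 percent

# Impact on reported carryover, expressed in months of average revenue:
monthly_revenue = revenue_fy2002 / 12
print(round(accrued_expenditures / monthly_revenue, 1))            # 3.7 months
```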
In addition to the major problems identified above, our review of the procedures that SPAWAR headquarters and its two systems centers use to conduct their tri-annual reviews identified several areas that need improvement. For example, SPAWAR headquarters has not evaluated the systems centers’ reviews and, as a result, the command (1) does not have a sound basis for assessing the adequacy of the reviews that the centers have conducted on individual obligation, commitment, and accrued expenditure balances and (2) was not aware of the process-related problems discussed below. The San Diego Systems Center accomplishes its tri-annual reviews on a decentralized basis. During the first step of the process, the Office of the Comptroller, which has overall responsibility for the reviews, develops computer lists that contain information on all of the center’s outstanding obligations and commitments. The Comptroller’s Office then provides these lists to the center’s technical departments, which are required to conduct the actual reviews. When the technical departments finish their reviews, their department heads certify that the reviews have been completed and then forward this certification to the San Diego Systems Center’s Comptroller. On the basis of the technical departments’ certifications, the Comptroller then certifies that the center has completed its review. Although this approach seems reasonable on the surface, we found numerous problems with the process. For example, because the systems center’s draft tri-annual review guidance does not specifically require the technical departments to accomplish many important tasks, the effectiveness and usefulness of the reviews varied significantly from one department to another. 
For example, two of the center’s technical departments did not (1) summarize or analyze the results of their reviews, (2) establish internal controls to ensure that timely and appropriate corrective action was taken on problems that were identified during the reviews, or (3) maintain adequate documentation to show who conducted the reviews, what problems were identified, and/or what additional actions were required. Conversely, although it was not required to do so, another department (1) summarized the results of its reviews in a single Excel spreadsheet to facilitate analysis of the review results, (2) analyzed the data to determine if there were any indications of systemic or compliance problems (e.g., inadequate reviews by one or more of the department’s divisions or problems with accrual schedules), and (3) developed internal control procedures to ensure that timely and appropriate action was taken on identified problems and/or unresolved research requirements. Additionally, this department requires its managers to maintain documentation that (1) shows who conducted the actual reviews (so these individuals can be held accountable for the adequacy of the reviews), (2) identifies the additional research or corrective action that is required as a result of the reviews, and (3) indicates who is responsible for taking the action. Managers from this department said that they were initially skeptical about the benefits of the tri-annual reviews, but indicated that they are now strong supporters because the reviews have provided a structured way to address their data problems and have already resulted in significant improvements in the quality of their data. Additionally, they acknowledged that documenting what corrective action is required and who is responsible for taking it requires additional time, at least in the short term. 
However, they believe this documentation is essential for (1) holding people accountable and (2) having effective internal controls to ensure that timely and appropriate corrective action is taken on the problems that are identified. Further, they believe that the documentation may save time in the long term because it will serve as a “memory jogger” for subsequent reviews. Additional process-related problems we identified during our assessment of the San Diego Systems Center’s tri-annual review process include the following. As noted previously, although the center had about $423 million of accrued expenditures as of September 2002, it had not yet developed a methodology for identifying and reviewing its accrued expenditures. Fund holders are required to conduct sufficient follow-up on dormant obligations and commitments to determine if they are still valid. However, the computer lists that the San Diego Comptroller provides to the center’s technical departments do not distinguish between the obligations and commitments that have been dormant and those that have not. As a result, the technical departments have no way to focus their attention on the obligations and commitments that require follow-up action. The certifications that the department heads sign are much more general than the one that the Comptroller must sign on behalf of the systems center and they, therefore, do not provide an adequate basis for the Comptroller’s certification. For example, the Comptroller is required, among other things, to (1) advise SPAWAR headquarters whether, and to what extent, adjustments or corrections to remedy noted problems have been taken, (2) summarize, by type, the actions or corrections remaining to be taken, (3) indicate when such actions/corrections are expected to be completed, and (4) identify the actions that have been taken to preclude identified problems from recurring in the future. 
However, the Comptroller does not require the departments to report this information to him and, therefore, cannot report this information to SPAWAR headquarters. Although, as noted previously, the Comptroller has overall responsibility for the center’s tri-annual reviews, his office has not assessed the adequacy of the reviews that are being conducted by the technical departments. As a result, the Comptroller does not have a sound basis for his certification. The Charleston Systems Center has developed a basic approach for its tri-annual reviews that appears sound. Charleston’s approach addressed several of the concerns we noted with the San Diego Systems Center’s approach. First, rather than assigning all review requirements to the technical departments, Charleston divides the responsibilities between the Comptroller’s Office and the technical departments. This approach allows the Comptroller’s Office to concentrate on the tasks it is best qualified to perform, such as tracing obligations back to source documents, and lets the technical departments concentrate on those tasks that they are best qualified to perform, such as verifying that dormant obligations are still valid. Second, the Charleston Comptroller provides the technical departments with a list of all dormant commitments, obligations, and accrued expenditures so they can easily focus on those that they must follow up on. Finally, Charleston’s tri-annual review guidance requires those who conduct the reviews to document the actions taken during the reviews; this documentation is to (1) include corrective actions remaining to be taken and when such actions will be completed and (2) identify actions that have been taken to preclude identified problems from recurring in the future. However, we did identify several problems with Charleston’s overall approach. 
More specifically, we found the following: Although the Comptroller must sign a certification statement attesting to the results of the center’s tri-annual review, the systems center has not conducted all of the required reviews, and the Comptroller has not developed internal control procedures to ensure that the reviews that were conducted were performed properly and completely. Charleston’s technical department heads are responsible for ensuring that reviews are properly conducted and documented, but they are not required to certify that this has been done. Consequently, the Comptroller does not have a sound basis for certifying that the tri-annual review tasks the center is required to accomplish have been completed. In fact, Charleston’s Comptroller acknowledged that our work shows that the technical departments’ reviews are not adequate, and he indicated that his concern about the timeliness and adequacy of the technical departments’ reviews is the reason why he has limited his tri-annual review certification to the 4 tasks that are under his control and why he has been unwilling to certify the remaining 12 tasks. The Comptroller stated, and we agree, that department heads should be held accountable for their respective departments’ portion of the tri-annual review process. Specifically, he believes they should be required to complete and sign certification statements similar to the one that he must complete and sign on behalf of the systems center, and accordingly, has developed a proposed certification statement for the department heads to sign. We also found that DOD’s tri-annual review guidance regarding the dollar thresholds for reviewing outstanding commitments and obligations was unclear. 
The guidance states that during the January and May reviews, commitments and obligations of (1) $200,000 or more for investment appropriations (e.g., procurement funds and the capital budget of the working capital funds) should be reviewed and (2) $50,000 or more for operating appropriations (e.g., operation and maintenance funds and the operating portion of the working capital funds) should be reviewed. Charleston interpreted the guidance to mean that customer orders—which are the operating portion of the working capital fund—financed with investment funds fell into the $200,000 threshold category for review purposes, rather than the $50,000 category, and conducted its tri-annual reviews accordingly. In discussing this issue with the Office of the Under Secretary of Defense (Comptroller) and Navy headquarters officials, the officials acknowledged that the guidance was unclear and, thus, open to interpretation. They stated that the guidance needed to be examined and clarified. SPAWAR has consistently understated and provided misleading carryover information to the Congress. Reliable carryover information is essential for the Congress and DOD to perform their oversight responsibilities, including reviewing DOD’s budget. To provide assurance that SPAWAR systems centers report reliable carryover information, managers at SPAWAR headquarters and the systems centers must be held accountable for the accuracy of reported carryover and ensure the timely identification of unneeded customer funds. This includes increased management attention that would provide more assurance that the systems centers are effectively reviewing funded orders as part of their tri-annual review process. Until these problems are resolved, the Congress and DOD decision makers will be forced to make key budget decisions, such as whether or not to enhance or reduce customer budgets, based on unreliable information. 
We recommend that the Secretary of Defense direct the Secretary of the Navy to issue guidance to all Navy working capital fund activities, including SPAWAR, that prohibits them from deobligating reimbursable customer orders at fiscal year-end and reobligating them in the next fiscal year for the sole purpose of reducing carryover balances that are ultimately reported to the Congress; direct the Under Secretary of Defense (Comptroller) to determine the extent to which working capital fund activities throughout DOD may be similarly manipulating customer order data at fiscal year-end to reduce reported carryover and, if necessary, issue DOD-wide guidance prohibiting this practice; and direct the Under Secretary of Defense (Comptroller) to develop and issue written procedures to implement the December 2002 carryover policy. To provide reasonable assurance that the dollar amounts of orders to be received from customers used in developing annual budgets are based on more realistic estimates, we recommend that the Secretary of the Navy direct the Commander of the Space and Naval Warfare Systems Command to compare budgeted to actual orders received from customers and consider these trends in developing the following year’s budget estimates on orders to be received from customers. We recommend that the Under Secretary of Defense (Comptroller) revise the tri-annual review guidance in the DOD Financial Management Regulation so that working capital fund activities are required to expand the scope of their tri-annual reviews to include unobligated balances on customer orders and review and clarify the tri-annual review guidance for the January and May reviews in the DOD Financial Management Regulation as it pertains to the dollar threshold for reviewing outstanding commitments and obligations for the capital budget and operating portion of the working capital fund. 
We recommend that the Commander of the Space and Naval Warfare Systems Command establish internal control procedures and accountability mechanisms that provide assurance that the systems centers are complying with DOD’s tri-annual review guidance. We also recommend that the Commander of the Space and Naval Warfare Systems Command direct the Commanders of the Charleston and San Diego SPAWAR Systems Centers to maintain documentation that shows who conducted the tri-annual reviews so that these individuals can be held accountable for the reviews; maintain documentation that identifies (1) any additional research or corrective action that is required as a result of the tri-annual reviews and (2) who is responsible for taking the action; require cognizant managers, such as department heads, to confirm in writing that they have (1) performed the required tri-annual reviews and (2) completed the related follow-up actions by signing a statement, such as the draft certification statement developed by the Charleston Systems Center Comptroller, that describes the specific tasks that were accomplished and provide this statement to the systems centers’ comptrollers; develop and implement internal control procedures to provide assurance that tri-annual reviews of individual commitment, obligation, and accrued expenditure balances are adequate; and develop policies and procedures to capture the information on tri-annual review results, such as the amount of obligations reviewed, confirmed, and revised, that they are required to report to SPAWAR headquarters and that SPAWAR headquarters, in turn, is required to report to Navy headquarters. We recommend that the Commander, San Diego SPAWAR Systems Center direct the Center Comptroller to develop and implement a methodology for identifying and analyzing accrued expenditures and to identify dormant commitments, obligations, and accrued expenditures in the tri-annual review computer lists that are provided to the technical departments. 
DOD provided written comments on a draft of this report. In its comments, DOD concurred with 12 of our 14 recommendations and partially concurred with the remaining 2 recommendations. For these 2 recommendations, DOD agreed with our intent to ensure that obligations, unobligated balances, and commitments are reviewed regularly to ensure effective use of funds. Our evaluation of DOD’s comments is presented below. DOD’s comments are reprinted in appendix II. For the 12 recommendations with which DOD concurred, it stated that 7 of them were completed based on the issuance of SPAWAR Instruction 7301.1A on Tri-Annual Reviews of Commitments and Obligations, dated October 9, 2002. We believe that the guidance provided in the instruction is an important step. SPAWAR and the systems centers now need to develop and issue implementing procedures because, in most cases, the guidance provided in the instruction that is related to these 7 recommendations is too general to fully address our recommendations. For example, although the instruction requires those responsible for conducting the review to report the results to the systems center’s comptroller, the instruction does not require, as we recommended, that cognizant managers, such as department heads, sign a written statement to be provided to the comptroller to confirm that they have performed the required reviews and certify the results of those reviews. Further, in concurring with our recommendation that SPAWAR compare budgeted to actual orders received from customers and consider these trends in developing budget estimates on orders to be received from customers, DOD did not state how the Navy would ensure that SPAWAR’s budget estimates would accurately reflect orders to be received from customers. In its comments, DOD stated that the Navy will continue to refine its budget estimates for customer orders. We believe that the Navy must take additional actions to develop more reliable budget estimates. 
As noted in our report, reported actual customer orders received exceeded budget estimates from 36 percent to 88 percent during fiscal years 1998 through 2002. For example, for fiscal year 2002, in formulating its budget request, the Navy expected the SPAWAR systems centers to receive about $1.3 billion in customer orders, but the Navy reported that the centers actually received about $2.4 billion in customer orders—a difference of $1.1 billion, or about 88 percent. Having reliable budget estimates on customer orders to be received is critical since this information is used in calculating carryover using DOD’s new carryover policy. DOD partially concurred with our recommendation that it revise its tri- annual review guidance in the DOD Financial Management Regulation to require working capital fund activities to expand their tri-annual reviews to include unobligated balances on customer orders. In its comments, DOD stated that reviewing such balances during the tri-annual reviews was the responsibility of the customer who placed the order with the working capital fund and that the working capital fund activity should work in cooperation with the customer to ensure that unobligated balances are reviewed. We agree that the working capital fund activity should work in conjunction with customers to review unobligated balances. However, as stated in our report, working capital fund activities are in the best position to determine whether unobligated balances are still needed to finance future work. To ensure that unobligated balances are properly reviewed during the tri-annual review process, we continue to recommend that the DOD Financial Management Regulation be revised to specify the working capital fund activities’ role in reviewing unobligated balances on customer orders. 
DOD also partially concurred with our recommendation for the SPAWAR systems centers to review all balances related to dormant customer orders in excess of $50,000 during the January and May tri-annual reviews. In its comments, DOD indicated that the current guidance is not clear with regard to whether all such dormant balances over $50,000 are to be reviewed during the specified months. DOD stated that it will review the guidance, as it pertains to working capital fund activities, and make adjustments if appropriate. We agree that DOD’s tri-annual review guidance regarding the dollar thresholds for reviewing outstanding commitments and obligations was unclear. We have revised our report accordingly, including the related recommendation, to reflect that DOD’s tri-annual review guidance was unclear. In addition, in the cover letter transmitting its comments on our draft report, DOD took exception to our discussion in the draft report regarding the methodology used by Navy to determine the levels of carryover— reducing the numerator in the carryover formula by the amount of contractual obligations, but not reducing the formula’s denominator by the amount of revenue earned from contractual services. Because DOD revised its methodology for calculating carryover in December 2002, DOD commented that such a discussion in the report was irrelevant and confusing to the reader and recommended that it be deleted. We disagree with DOD’s comment. Although DOD revised its methodology for calculating carryover, it was not incorporated into Navy’s budget submissions until fiscal year 2004. When we undertook this review in July 2002, one of our objectives was to determine if reported carryover accurately reflected the amount of work remaining to be accomplished. As such, this issue was and still is relevant. 
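The asymmetry at issue can be illustrated with hypothetical numbers. The formula shape (carryover divided by average monthly revenue) follows the report's discussion of months of carryover; the dollar figures below are ours, chosen only to show the effect:

```python
def months_of_carryover(carryover, annual_revenue):
    """Months of carryover: carryover divided by average monthly revenue."""
    return carryover / (annual_revenue / 12)

# Hypothetical figures ($ millions): $600 of unfilled customer orders and
# $1,200 of annual revenue, of which $600 was earned for work performed by
# contractors; $300 of the carryover is obligated on contracts.
unfilled_orders = 600
annual_revenue = 1200
contract_obligations = 300
contract_revenue = 600

# Navy's former method: reduce the numerator by contractual obligations
# but leave contract revenue in the denominator -- reported months shrink.
print(months_of_carryover(unfilled_orders - contract_obligations,
                          annual_revenue))                     # 3.0 months

# Consistent treatment: remove the contract workload from both sides, as
# GAO's May 2001 recommendation called for -- carryover is not understated.
print(months_of_carryover(unfilled_orders - contract_obligations,
                          annual_revenue - contract_revenue))  # 6.0 months
```

With these illustrative figures, adjusting only the numerator cuts reported carryover in half relative to the consistent calculation, which is the kind of understatement the report describes.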
As stated in this report, our May 2001 report recommended that the revenue used in calculating carryover be adjusted (reduced) for revenue earned for work performed by contractors. Unless this is done, reported carryover will be understated. The Navy did not adjust the revenue amount and, therefore, continued to understate its reported carryover in its budget submissions to the Congress. We continue to believe that this is a reportable issue and have made a related recommendation for DOD to develop and issue written procedures to implement the December 2002 carryover policy. Further, we believe this issue remains of interest to the Congress since the Navy has understated SPAWAR’s reported carryover from fiscal year 1998 through fiscal year 2002. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Armed Services; the Subcommittee on Readiness and Management Support, Senate Committee on Armed Services; the Subcommittee on Defense, Senate Committee on Appropriations; the House Committee on Armed Services; the Subcommittee on Readiness, House Committee on Armed Services; and the Ranking Minority Member, Subcommittee on Defense, House Committee on Appropriations. We are also sending copies to the Secretary of Defense, the Secretary of the Navy, and other interested parties. Copies will be made available to others upon request. Should you or your staff have any questions concerning this report, please contact Gregory D. Kutz, Director, at (202) 512-9505. He can also be reached by E-mail at [email protected]. An additional contact and key contributors to this report are listed in appendix III. 
To determine if differences existed between the budgeted and reported actual gross carryover and, if so, the reasons for the variances, we obtained and analyzed budget and accounting documents that provided information on budgeted and reported actual gross carryover and orders received from customers from fiscal year 1998 through fiscal year 2002. When variances occurred between the budgeted and reported actual information, we met with accounting and budgeting SPAWAR and Navy headquarters officials to ascertain why there were differences. We also discussed with officials what actions they were taking to develop more reliable budget information on carryover and orders received from customers. To determine if the reported actual carryover balances reflected the amount of work that remained to be accomplished, we obtained and analyzed the Department of Defense’s (DOD) regulations and guidance on carryover. We also obtained and analyzed the SPAWAR systems centers’ calculations for the fiscal year 1998 through fiscal year 2002 actual reported year-end carryover balances. We met with officials from SPAWAR and Navy headquarters to discuss the methodology they used to calculate carryover. We (1) obtained explanations about why the Navy made adjustments in calculating the dollar amount of carryover balances as well as the number of months of carryover and (2) determined the impact of those adjustments on the carryover figure. We also reviewed year-end transactions that affected the dollar amount and number of months of carryover. For these year-end transactions, we met with officials from SPAWAR and the two systems centers to determine why these transactions occurred at year-end. 
To determine if the Charleston and San Diego SPAWAR Systems Centers have the financial data they need in order to provide reliable data on actual carryover levels to DOD and congressional decision makers, we reviewed the policies and procedures SPAWAR headquarters and the two systems centers have used to implement DOD’s tri-annual review guidance. Specifically, we (1) reviewed the DOD, Navy, SPAWAR headquarters, and the two SPAWAR systems centers’ tri-annual review guidance and discussed it with cognizant individuals, (2) reviewed the tri-annual review certifications that the two systems centers have submitted since DOD issued its tri-annual review guidance in 1996, and discussed these certifications with cognizant individuals, (3) discussed the systems centers’ tri-annual review procedures with cognizant individuals, including those who actually accomplished the reviews, and (4) reviewed documentation on the results of the reviews. We also obtained data on the status of unfilled orders and carryover at the end of fiscal year 2001. Additionally, from these data, we selected and analyzed 34 orders that had outstanding carryover balances at the end of fiscal year 2001 to determine if the carryover balances accurately reflected the amount of work that remained to be performed. We selected orders that (1) were financed with expired appropriations and (2) were unobligated carryover at year-end since these orders were more likely to have unneeded funds and because a review of these orders was, therefore, more likely to identify problems with the systems centers’ review procedures. 
We performed our work at the headquarters offices of the Under Secretary of Defense (Comptroller) and the Assistant Secretary of the Navy (Financial Management and Comptroller), Washington, D.C.; Space and Naval Warfare Systems Command, San Diego, California; the Charleston Space and Naval Warfare Systems Center, Charleston, South Carolina; and the San Diego Space and Naval Warfare Systems Center, San Diego, California. The reported actual year-end carryover information used in this report was produced from DOD’s systems, which have long been reported to generate unreliable data. We did not independently verify this information. The DOD Inspector General has cited deficiencies and internal control weaknesses as major obstacles to the presentation of financial statements that would fairly present the Defense Working Capital Fund’s financial position for fiscal years 1993 through 2002. Our review was performed from July 2002 through June 2003 in accordance with U.S. generally accepted government auditing standards. The Navy provided the budgeting and accounting information referred to in this report. We requested comments on a draft of this report from the Secretary of Defense or his designee. DOD provided written comments, and these comments are presented in the Agency Comments and Our Evaluation section of this report and are reprinted in appendix II. The following are GAO’s comments on the Department of Defense’s (DOD) letter dated June 10, 2003. 1. See the Agency Comments and Our Evaluation section of this report. 2. As discussed in the Agency Comments and Our Evaluation section of this report, we have modified this recommendation and the related section of the report in response to DOD’s comment. Staff who made key contributions to this report were Francine DelVecchio, Karl Gustafson, William Hill, Christopher Rice, Ron Tobias, and Eddie Uyekawa. 
The Space and Naval Warfare Systems Command (SPAWAR) has hundreds of millions of dollars of funded work that its working capital fund activities did not complete before the end of the fiscal year. Reducing the amount of workload carryover at fiscal year-end is a key factor in the effective management of Department of Defense (DOD) resources and in minimizing the "banking" of funds for work to be performed in subsequent years. GAO was asked to analyze SPAWAR's carryover balances. GAO assessed the accuracy of the budgeted amounts, the accuracy of the reported actual carryover balance, and the reliability of underlying financial data on which reported actual carryover is based. The budgeted and reported actual amounts of SPAWAR gross carryover were consistently understated, resulting in the Congress and DOD decision makers not having reliable information to decide on funding levels for working capital fund customers. First, GAO found that SPAWAR centers' budgeted gross carryover for fiscal years 1998 through 2002 was significantly less than the reported actual year-end gross carryover. Budgeted gross carryover was understated primarily due to problems with estimating the underlying customer order data. For example, for fiscal year 2002, SPAWAR's budgeted amount for customer orders was 88 percent less than the reported actual orders received. Second, SPAWAR's reported actual carryover balances were also unreliable and adjusted downward by hundreds of millions of dollars. These adjustments understated carryover and resulted in Navy reports to the Congress showing that SPAWAR carryover balances for fiscal years 1998 through 2002 did not exceed DOD's 3-month carryover standard. SPAWAR was able to report reduced carryover balances for the following reasons. As GAO previously reported, the DOD guidance for calculating the number of months of carryover allowed carryover to be adjusted and understated. 
DOD agreed with GAO's previous recommendation and in December 2002 changed its carryover guidance. SPAWAR centers used accounting entries to manipulate the amount of customer orders for the sole purpose of reducing reported carryover below the 3-month standard. For example, the centers did this for at least $50 million at the end of fiscal year 2001. SPAWAR officials issued guidance in September 2002 discontinuing this practice. Finally, SPAWAR had not taken key steps to verify the underlying financial data on which reported actual carryover is based. The SPAWAR centers had only recently begun conducting the required tri-annual reviews of such data, which DOD has required since 1996. However, the reviews were ineffective; for example, the centers excluded slightly less than half of their reported actual carryover from the review process.
A user fee is a charge assessed to beneficiaries for goods or services provided by the federal government. In general, a user fee is related to some voluntary transaction or request for government goods or services—such as a passport or admission to a national park—above and beyond what is normally provided to the public. Taxes, on the other hand, arise from the government’s sovereign power to raise revenue and need not be related to any specific benefit. Unlike user fees, taxes are generally imposed to raise revenue to fund benefits for the general public, such as national defense. GAO has published a guide on the principles of effective user fee design (GAO, Federal User Fees: A Design Guide, GAO-08-386SP (Washington, D.C.: May 29, 2008)). In this guide, we focused on criteria that have often been used to assess user fees and other government collections. Efficiency is whether the fee ensures that the government is providing the amount of the service that is economically desirable. Efficient fees increase awareness of the costs of government services, creating incentives to reduce costs where possible. In addition, efficient fees act as a measure of the service’s worth while meeting demand for the service. Equity is the extent to which everyone is considered to be paying a fair share (whether fairness is based on program use or the user’s ability to pay). Revenue adequacy is the extent to which the fee covers the intended share of service costs, and whether the fee revenue is stable enough to withstand short-term fluctuations in economic activity. Administrative burden is the cost of collecting and enforcing the fee, as well as any costs users incur to pay the fee. These criteria interact and are often in conflict with each other; as such, trade-offs exist when weighing the criteria in designing a fee. The weight that different policymakers place on different criteria will vary, depending on how they value different attributes.
To that end, understanding the trade-offs associated with different aspects of a fee’s design can provide decision makers with better information and support more robust deliberations about user fee financing. IRS charges user fees for various services that assist taxpayers in complying with their tax liabilities, clarify the application of the tax code to particular circumstances, and ensure the quality of paid preparers of tax returns, among others. IRS is organized into four operating divisions based on types of taxpayers: (1) Wage and Investment (W&I); (2) Large Business and International (LB&I); (3) Small Business and Self-Employed (SB/SE); and (4) Tax Exempt and Government Entities (TE/GE). Each division provides at least one service to taxpayers for which it charges a fee. In addition to these primary operating divisions, IRS has a number of offices, including Chief Counsel, Appeals, Professional Responsibility, and Return Preparer, that also provide services to taxpayers for a fee. Table 1 shows user fee services organized by business division or program office responsible for managing the user fee programs. Descriptions of the services and user fees, including fee rates, are included in appendix II. The CFO has oversight responsibilities for the initial assessment, updates, and collection of user fees. While the CFO does not provide the services for which user fees are charged, it does have the responsibility to ensure that user fees are appropriately collected, deposited, and reported. IRS derives its authority to charge user fees from either the Independent Offices Appropriation Act (IOAA) or various other specific statutes. Although OMB Circular A-25 states that agencies should recover full costs to the government to the extent permitted by law for providing the service or good, the authority for each fee determines how much of the costs IRS is required to recover. 
For example, IRS is directed by the circular to recover full costs for fees authorized by IOAA, unless an exception applies. Agencies, however, may request waivers from OMB to charge less than full cost if the fee would be unduly costly to collect or otherwise justifies an exception. IRS has waivers pending with OMB to charge less than full cost for installment agreements and offers in compromise. According to IRS officials, increasing these fees to full cost could discourage taxpayers from using these services. In contrast, when IRS is authorized to charge a user fee under a specific statute, the provisions of that statute govern. For example, IRS can charge a reasonable fee for income verification express service and U.S. residency certifications, while for certain determination letters, minimum rates are specified in the statute. Table 2 shows IRS user fees by authorizing legislation and illustrates that the authority to charge a user fee can be separate from the authority to provide the good or service. Unlike IOAA-derived user fees that require full cost recovery, IRS is not required to recover full cost for user fees authorized by specific statutes, which permit IRS to charge what it considers to be reasonable or fair. (Table 2 notes: Income verification express service provides 2-business-day processing and electronic delivery of tax return transcripts for users, such as mortgage lenders and other financial market entities, to confirm the income of a borrower (taxpayer) during the processing of a loan application. One fee category includes advance art determination letters, pre-filing agreements, foreign insurance excise tax exemptions, competent authority limitation on benefits determination letters, and Office of the Chief Counsel and TE/GE division letter rulings and determination letters.) In fiscal year 2010, IRS collected and retained $290 million in user fees and transferred an additional $62 million of user fee collections to the U.S. Treasury General Fund (General Fund).
Upon collection, user fees are recorded in various IRS financial systems and are deposited into IRS’s Miscellaneous Retained Fees (MRF) Fund for use by the agency, into the General Fund, or into both. The allocation of fees between funds is determined by a formula specified in the Treasury, Postal Service and General Government Appropriations Act of 1995 (1995 Treasury Appropriations Act), which was based on the fees and fee amounts in effect at the time. For example, collections from user fees that existed prior to September 30, 1994, such as enrolled actuary, are divided between IRS’s MRF Fund and the General Fund. User fees implemented after September 30, 1994, such as installment agreements, are fully retained by IRS. Table 3 provides IRS’s total retained user fee collections by category for the past 6 fiscal years. Installment agreements have generated the majority of IRS’s user fee collections. In fiscal year 2010, they accounted for approximately 68 percent ($198 million of $290 million) of IRS’s total retained user fee collections. A rate increase implemented in January 2007 resulted in a 55 percent increase (from approximately $79 million to $122 million) in collections from fiscal year 2006 to 2007. According to IRS officials and as shown in table 3, despite the rate increase, taxpayers continued to initiate, restructure, or reinstate installment agreements at similar or higher levels. The second largest contributor to fee collections was income verification express service, which accounted for approximately $40 million, or 14 percent, of fee collections in fiscal year 2010. Fee collections from income verification express service increased significantly from fiscal years 2007 to 2009, which IRS officials attributed to the turmoil in the financial and housing markets. Changes in collection amounts can occur because of fee rate changes or because of other factors, such as programmatic changes or economic conditions.
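The retention rule described above can be sketched in a few lines. This is an illustrative model only: the actual allocation formula comes from the 1995 Treasury Appropriations Act and is not reproduced in the text, so the split fraction for pre-1994 fees below is a hypothetical placeholder, and the function name is ours.

```python
from datetime import date

# Fees first in effect on or before Sept. 30, 1994 are split between IRS's
# Miscellaneous Retained Fees (MRF) Fund and the General Fund; fees
# implemented after that date are fully retained by IRS. The 50/50 split
# for older fees is a placeholder, not the statutory formula.
CUTOFF = date(1994, 9, 30)

def allocate(collections, fee_start_date, mrf_share_pre_1994=0.5):
    """Return (amount to MRF Fund, amount to General Fund) in dollars."""
    if fee_start_date > CUTOFF:              # e.g., installment agreements
        return collections, 0.0
    to_mrf = collections * mrf_share_pre_1994  # e.g., enrolled actuary
    return to_mrf, collections - to_mrf

# Installment agreement collections (fee implemented after the cutoff)
# are fully retained, matching the fiscal year 2010 figure in the text.
mrf, gen = allocate(198_000_000, date(2003, 1, 1))
assert (mrf, gen) == (198_000_000, 0.0)
```

Whatever the statutory split, the two amounts must always sum back to total collections, which makes the rule easy to sanity-check.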
By volume of user fee services, income verification express service topped the list of services provided in fiscal year 2010 with approximately 16 million requests, followed by installment agreements with about 3 million requests, as shown in table 4. IRS decreased the income verification express service fee rate from $4.50 to $2.25 in fiscal year 2010 because of the increased volume, which lowered the unit cost per income verification express service request. Installment agreement volume has also increased in recent years, despite the fiscal year 2007 fee rate increases. IRS has permanent, indefinite authority to obligate and spend user fee collections. This authority allows the agency independence and flexibility in the use of these funds. IRS can obligate user fee collections for any activity for which it has an appropriation with OMB approval, and it can carry over any unexpended fee collections for use in subsequent years. We have suggested that carryovers are one way agencies can establish reserves to sustain operations in the event of a sharp downturn in user fee collections or other events. The carryover gives IRS a reserve of unspent fee collections that can be used for critical needs. The carryover is what is left over after IRS transfers fee collections to supplement its appropriations in order to meet agencywide needs. Table 5 provides information on IRS’s MRF carryover balances, which have grown in recent years as fee activity has increased. At the end of fiscal year 2010, $288 million in fee collections was carried over for use in future fiscal years. As directed by the CFO Act of 1990 and OMB Circular A-25, IRS conducts a general review of its user fees on a biennial basis. To initiate the biennial review, IRS’s CFO issues a memorandum to senior management in relevant business divisions and program offices requesting that they validate the cost of providing services for which they charge a user fee and provide suggestions for new user fees.
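The income verification express service rate cut described above reflects simple unit-cost arithmetic: when a fee is set to recover full cost, the per-request rate is roughly total program cost divided by request volume, so rising volume pushes the rate down. A minimal sketch, with illustrative cost and volume figures that are not from the report:

```python
# Full-cost fee rate: total program cost spread over the number of
# requests. The $36 million program cost below is illustrative only.
def full_cost_rate(total_program_cost, volume):
    return total_program_cost / volume

# The same program cost spread over twice the volume halves the
# per-request rate, as with the $4.50 -> $2.25 IVES rate change.
assert full_cost_rate(36_000_000, 8_000_000) == 4.50
assert full_cost_rate(36_000_000, 16_000_000) == 2.25
```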
Based on the cost estimates developed and other considerations, senior management will propose whether to increase, decrease, or keep the current fee rates. These proposals are sent to the CFO and are then compiled and forwarded in a report to the Commissioner for final review and approval. IRS has made efforts to improve its cost estimation process in support of its user fees, with particular attention given to its largest fee by amount collected. For example, the Treasury Inspector General for Tax Administration’s 2006 audit concluded that installment agreement user fee cost estimates were based on incorrect assumptions and contained calculation errors and unsupported costs. To address these issues, IRS hired a contractor to assess the cost estimate for the fiscal year 2007 biennial review and re-hired the contractor to prepare the cost estimate for the fiscal year 2009 biennial review. For the fiscal year 2011 biennial review, the contractor was hired to simplify the fiscal year 2009 cost estimate. According to officials, IRS does not plan to use the contractor for future biennial reviews. According to officials, IRS has also taken steps recently to improve the cost estimate for the offer in compromise user fee. For the biennial review in 2009, officials conducted interviews and developed a complex series of spreadsheets to estimate costs of the offer in compromise program. This process, which IRS described as “less precise and more reliant on estimates and averages,” estimated the full cost of the average offer in compromise agreement to be $2,132. Because the 2009 process was very labor-intensive, IRS assessed other options for gathering the information needed to prepare the cost estimate. As a result of this assessment, IRS simplified its cost estimation process while making it easier to collect information about program costs. 
The revised fiscal year 2011 review process resulted in an estimated full-cost user fee of $2,718, an approximate 27 percent increase over that calculated in the 2009 biennial review. According to IRS, the 2009 cost calculations were likely significantly understated, and its 2011 process for capturing offer in compromise costs will yield consistently reliable data going forward. Based on the biennial review, IRS has also revised fee rates to reflect the updated cost of providing various services, such as income verification express service and Chief Counsel letter rulings. IRS has omitted three fees—advance art determination letters, reproduction of tax returns, and special statistical studies and compilations—from its biennial review, contrary to the requirements of the CFO Act. According to an IRS official, the cost estimate used to support the advance art determination letter user fee has not been reviewed or updated since fiscal year 1996 because of very low demand (fewer than 20 requests per year from fiscal year 2005 to fiscal year 2010). IRS also omitted from the biennial review the user fees for reproduction of tax returns and special statistical studies and compilations because it interpreted these fees as not being covered by the biennial review requirement. However, the CFO Act requires IRS to review, on a biennial basis, the fees, royalties, rents, and other charges imposed by IRS for services and things of value it provides (31 U.S.C. § 902(a)(8)). OMB Circular A-25 also provides guidance on conducting a biennial review of user fees, regardless of the authorizing statute, to the extent permitted by law. Since there is no alternative statutory review process for these fees, they fall under the review requirements of the CFO Act and the guidelines of OMB Circular A-25. Furthermore, as articulated in the accounting standards for the federal government, Congress and federal executives need cost information to make decisions about allocating federal resources, modifying programs, and evaluating program performance. While a full review may not always be cost-effective given the amounts collected from these fees, it may be possible to perform a simplified update during some review cycles. According to IRS officials, the CFO’s office works closely with business divisions and program offices to develop cost estimates, provides guidance on data to include in the cost estimates, and develops Internal Revenue Manual guidelines to document IRS’s process for setting and reviewing user fees. In a recent report, we found that IRS set the user fee for the preparer tax identification number in accordance with established guidelines for cost estimation. However, for IRS’s smaller user fee programs, we found that data or assumptions used in cost estimates were not always well documented or understood by officials. For example, for the LB&I fees, the fiscal year and data source for some of the fiscal year 2009 cost estimate components were not documented. In another case, we noted some confusion early in the fiscal year 2011 review process from managers in the W&I division about what assumption should be used to estimate the pay increase in the fiscal year 2011 cost estimate. They assumed that they should include a 5 percent pay increase in the fiscal year 2011 cost estimate for the U.S. residency certification fee when the CFO had determined that such an increase should not be included. We also found a calculation error—double counting more than $6.8 million in direct labor costs—in the cost estimate for the offer in compromise user fee, which the CFO’s office did not detect in its review of cost estimates. According to GAO’s cost estimation guide, assumptions used to estimate costs should be documented to include the rationale behind the assumptions and any historical data that support the assumption, and federal accounting standards require documentation of all managerial cost accounting activities, processes, and procedures used to associate costs with products, services, or activities.
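The double-counting error described above is the kind of defect a mechanical check in the CFO review could flag before totals are reported. A sketch, assuming cost estimates can be represented as itemized (component, amount) pairs; the component names and dollar figures are illustrative:

```python
from collections import Counter

def total_with_duplicate_check(items):
    """items: list of (component_name, amount) pairs.
    Returns (grand total, list of component names appearing more than once)."""
    names = Counter(name for name, _ in items)
    duplicates = sorted(n for n, c in names.items() if c > 1)
    return sum(amt for _, amt in items), duplicates

# Illustrative estimate with direct labor erroneously entered twice,
# echoing the $6.8 million double count found in the OIC estimate.
estimate = [
    ("direct labor", 6_800_000),
    ("overhead", 1_200_000),
    ("direct labor", 6_800_000),   # erroneous second entry
]
total, dupes = total_with_duplicate_check(estimate)
assert dupes == ["direct labor"]         # flagged for reviewer follow-up
assert total - 6_800_000 == 8_000_000    # total once the duplicate is removed
```

A real review would also need to distinguish legitimate repeated line items from errors, so a flag here is a prompt to investigate, not an automatic correction.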
IRS’s Internal Revenue Manual sections on user fees and managerial cost accounting contain guidelines for developing cost estimates. However, IRS has not provided a documented set of assumptions and decision rules for program managers to follow in estimating costs of user fee program activities for each biennial review. Although it is the responsibility of IRS’s business units to develop the cost estimates, supplemental guidance or tools from the CFO’s office that provide a crosswalk between the cost estimation guidelines and specific cost estimate assumptions could help staff avoid mistakes that would otherwise need to be caught later in the review process. Finally, because these tools could provide more specific instructions on the assumptions to be used and on documenting data sources, they could help IRS better ensure consistency of its cost estimates across user fee programs. IRS officials said they consider taxpayer burden, administrative costs, and potential effects on taxpayer compliance, among other factors, when determining whether to set user fee rates to recover full cost. For example, IRS set the U.S. residency certification fee rate at $35 rather than $110, the estimated fiscal year 2009 cost of providing the service. Based on anecdotal information, IRS decided that raising the fee rate could discourage taxpayers from applying for the certification, which could result in higher withholding of foreign income taxes and potentially lower income tax revenue to the U.S. Treasury. Similarly, IRS has kept the offer in compromise fee rate at $150 since the fee’s inception in 2003, even though IRS recently estimated that it costs more than $2,700 to implement the program per user. According to IRS officials, the fee rate has been kept at $150 to encourage taxpayers who owe taxes to work with IRS to resolve their tax bills.
GAO’s user fee design guide outlines important factors to consider when setting user fee rates, including evaluating users’ ability to pay and the extent to which the general public and certain individuals benefit from the fee-based program. As described above, IRS officials consider factors other than cost recovery in setting fee rates. However, we found that IRS has not thoroughly documented these factors, corroborated anecdotal support with data analysis, or studied the effect of user fees on taxpayer behavior. A 2007 report by the National Taxpayer Advocate also found that IRS had not studied the effect of user fees on taxpayer behavior or demand for IRS services. Further, although officials said that changing the offer in compromise fee rate would be costly, IRS has not documented the extent to which changing this fee rate and others would impose additional costs or administrative burden on the agency. After reviewing fee rates, IRS publishes some of the revised fees in revenue procedures and other documentation. However, IRS’s fiscal year 2009 biennial report did not document whether the proposed fee rates discussed in the report had been approved by the IRS Commissioner. For example, in the report, IRS officials proposed a $5 fee increase from $35 to $40 for providing U.S. residency certification. When asked about the disposition of the proposed fee rates presented in the biennial report, officials initially stated that all the proposed fees in the biennial report had been approved and implemented, as appropriate. However, we found documentation that the user fee for U.S. residency certification remained at $35, which officials later confirmed. IRS officials acknowledged that they used an informal process to decide whether to go forward with the proposed fee increase, which led to miscommunication in determining which fee rate had been implemented. GAO’s standards for internal control in the federal government establish the need for clear documentation. 
According to these standards, events should be promptly recorded to maintain their relevance and value to management in controlling operations and making decisions. Moreover, thoroughly documenting decisions and the rationale behind them becomes increasingly important as the federal workforce ages and the retirement of senior managers and staff risks the loss of valuable institutional knowledge. GAO, Standards for Internal Control in the Federal Government, GAO/AIMD-00-21.3.1 (Washington, D.C.: Nov. 1, 1999), 15. IRS has implemented several new user fees in recent years. As part of IRS’s fiscal year 2005 biennial review, the former IRS Commissioner tasked all business divisions and program offices with identifying potential new user fees to offset processing costs. Through this review, IRS identified several new user fees, including income verification express service and U.S. residency certification, which were implemented in fiscal year 2007. As a result of IRS establishing these new fees, Congress removed the limit on user fee collections that IRS could retain and reduced IRS’s annual appropriation by the estimated amount of user fees that would be collected. Since then, IRS has also identified three new user fees related to return preparers and implemented the first fee, preparer tax identification number, in September 2010. IRS plans to implement the remaining two user fees, for competency testing and fingerprinting requirements, in fiscal year 2012. According to IRS officials, the return preparer user fee program will be funded exclusively through collections from these three fees. IRS faces unique challenges in establishing user fees. For example, the National Taxpayer Advocate has noted that user fees may prevent people who cannot afford them from accessing IRS’s services or may discourage people from seeking services that help IRS fulfill its core mission.
User fees may also be costly for IRS to administer and divert resources from activities that bring in more tax revenue. On the other hand, user fees can promote an efficient and fair allocation of government resources and ensure that those who receive special benefits pay for them. GAO-11-336 discussed IRS’s plans for implementing user fees for preparer tax identification number registration, competency testing, and continuing education. Since then, IRS has decided not to establish a fee for continuing education and has instead decided to establish a user fee for fingerprinting preparers as part of a background check. IRS has a process in place to assist staff in identifying potential new user fees. However, IRS may not be taking full advantage of this process. For example, officials in some divisions or offices said that they had no formal process for soliciting suggestions from employees on new user fee proposals. Although the CFO’s biennial review memo refers managers to the Internal Revenue Manual when reviewing costs of existing user fees, it does not specifically indicate that these same guidelines or other established design guidelines should be considered when examining new user fee opportunities. We were unable to determine the extent to which IRS staff considered these guidelines, and in our survey of division commissioners, we identified variation in the criteria considered across the operating divisions and offices. If managers are unclear about the process for soliciting user fee proposals or do not receive clear direction on what guidelines to consider when evaluating their programs for new user fees, IRS may be less able to fully engage its staff in evaluating new user fees. As a result, IRS could be missing potential fee opportunities or misallocating resources toward developing fee proposals that are not efficient, equitable, cost-effective, or feasible.
The cost and effort involved in clarifying its process for soliciting new user fee proposals and referring staff to established guidelines for identifying and evaluating potential fee opportunities would likely be modest for IRS, since, as table 7 shows, several sources exist for such guidelines. Because of the volume of IRS’s user fee transactions with the public, ensuring that user fee rates are properly set, collected, and reviewed is important for taxpayers. As a result of its biennial reviews of user fees, IRS has focused on improving cost estimates related to its most significant fees in recent years. However, we found that (1) some fees were omitted from the biennial review, (2) assumptions used in cost estimates were not always well documented, and (3) factors considered in setting fees and decisions made were not always well documented. It is also unclear to what extent user fee design guidelines were used to assess new fee opportunities. Given that most of these issues pertained to IRS’s smaller fee programs, steps taken to strengthen the review process should not only improve the efficiency of the review but also be cost-effective. Accordingly, we have identified some additional steps, such as clarifying the review guidelines up front, which could potentially decrease the amount of rework needed downstream and be implemented at low cost. We recommend that the Commissioner of Internal Revenue take the following five actions: (1) include in the biennial review the fees that were previously omitted; (2) develop supplemental guidance or tools that provide more detailed information on the data and assumptions to include in cost estimates; (3) better document the factors considered in setting and reviewing existing user fees; (4) better document the decisions made in setting and reviewing existing user fees; and (5) provide clear, specific, and direct guidelines for IRS employees and managers to follow in identifying potential fee opportunities.
We provided a draft of this report to the Commissioner of Internal Revenue for his review and comment. In written comments, reprinted in appendix III, the Deputy Commissioner for Operations Support agreed with our recommendations and indicated that IRS plans to implement them during its fiscal year 2013 biennial review of user fees. We are sending copies of this report to the Chairmen and Ranking Members of other Senate and House committees and subcommittees that have appropriation, authorization, and oversight responsibilities for IRS. We are also sending copies to the Commissioner of Internal Revenue, the Secretary of the Treasury, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions or wish to discuss the material in this report further, please contact me at (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Individuals making key contributions to this report are listed in appendix IV. The objectives of this report were to (1) describe the types and amounts of user fees collected and how the Internal Revenue Service (IRS) collects and uses fees, (2) assess how IRS sets and reviews existing user fees, and (3) assess how IRS identifies additional areas where new user fees could be justified. To meet these objectives, we reviewed user fee legislation and guidance, agency and budget documents, literature on user fee design and implementation characteristics, and cost estimates for user fees that were included in IRS’s biennial review process. We primarily focused on user fees that were included in IRS’s fiscal year 2009 biennial review. We also obtained information about IRS’s internal controls for the revenue and volume data we used and determined that the data were sufficiently reliable for the purposes of this report. 
Specifically, we relied on our audit of the aggregated year-end user fee balances and transaction testing for the installment agreement user fee revenue amount, which accounts for the majority of user fee collections and found that the revenue data were sufficiently reliable for our descriptive purposes. Further, we performed a separate analysis to assess the reliability of the user fee volume data by (1) conducting a correlation analysis between the revenue and volume data and (2) using the revenue amount and fee rate to calculate the volume. Based on our analyses, we determined that the volume data were sufficiently reliable for our descriptive purposes. In addition, we interviewed officials from IRS’s Office of the Chief Financial Officer, Office of the Chief Counsel, Large Business and International Division, Office of Professional Responsibility, Return Preparer Office, Small Business and Self-Employed Division, Tax Exempt and Government Entities Division, and Wage and Investment Division, who were responsible for overseeing the user fee programs, and the Taxpayer Advocate Service. We also obtained written responses from the deputy commissioner of each of the user fee programs we selected. Finally, we reviewed prior GAO work on user fees and cost estimation and other relevant literature. We conducted this performance audit from February 2011 through November 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
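The two volume-reliability checks described above, correlating reported revenue with reported volume and comparing reported volume with the volume implied by revenue divided by the fee rate, can be sketched in a few lines. The yearly figures below are illustrative, not report data:

```python
from statistics import mean

# Pearson correlation between two equal-length series.
def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return cov / var

revenue = [10.0, 20.0, 30.0, 40.0]   # $ millions per year (illustrative)
volume  = [ 5.0, 10.0, 15.0, 20.0]   # million requests per year

# Check 1: revenue and volume should move together for a flat-rate fee.
assert abs(pearson(revenue, volume) - 1.0) < 1e-9

# Check 2: reported volume should match revenue / fee rate.
fee_rate = 2.00
implied_volume = [r / fee_rate for r in revenue]
assert implied_volume == volume
```

In practice a correlation well below 1, or implied volumes that diverge from reported volumes, would prompt follow-up on the underlying data rather than a conclusion on its own.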
Chief Counsel letter rulings and determination letters: IRS’s Chief Counsel issues letter rulings and determination letters for a variety of requests, including but not limited to (1) advance pricing agreements, (2) private letter rulings, (3) change in accounting method, and (4) change in accounting period. User fees range from $150 to $14,000. IRS publishes updated guidance on obtaining Chief Counsel letter rulings and determination letters and fee amounts in the first Revenue Procedure each year. Competent authority limitation on benefits determination letter: IRS charges a user fee of $15,000 for taxpayers who request a discretionary determination under a limitation on benefits provision. Revenue Procedure 2006-54 provides guidance on requesting a competent authority limitation on benefits determination letter. Enrolled actuary: An enrolled actuary is an individual who has met certain standards and qualifications set forth by the Joint Board for the Enrollment of Actuaries and has also been approved to perform actuarial services required under the Employee Retirement Income Security Act of 1974. IRS charges a user fee of $250 for the initial enrollment and for the renewal, which occurs on a triennial basis. Implementing regulations for the enrolled actuary fee are at 26 C.F.R. §§ 300.7 and 300.8 and 20 C.F.R. § 901.11. Enrolled agent/enrolled retirement plan agent: In order to become enrolled to practice before IRS, an individual must submit an application and meet certain standards, pay a user fee of $30 for the initial enrollment and for renewal every 3 years, and pass the special enrollment examination that is administered by IRS. Similar to attorneys and certified public accountants, enrolled agents and enrolled retirement plan agents are unrestricted as to which taxpayers they can represent and the types of tax matters they can handle. Implementing regulations for the enrolled agent and enrolled retirement plan agent fee are at 26 C.F.R.
§§ 300.5 and 300.6. Foreign insurance excise tax (FET) exemption: FET is imposed on premiums paid to foreign insurers. Applicants (which are foreign insurance companies) may apply for a FET closing agreement with IRS under Revenue Procedure 2003-78 certifying that they meet the residency and limitation of benefits articles of the applicable income tax treaty. Through this agreement, the foreign insurer (or applicant) assumes the responsibility of paying the tax, if any, that otherwise would be paid by the domestic insured party. Applicants requesting a FET exemption must pay a $4,000 fee. Income verification express service (IVES): This service provides 2-business-day processing and electronic delivery of tax return transcripts for users, such as mortgage lenders and other financial market entities, to confirm the income of a borrower (or taxpayer) during the processing of a loan application. IRS charges a user fee of $2.00 for each IVES transcript request. Installment agreement (IA): An IA is a payment option or payment plan that allows a taxpayer to pay his or her full tax liability with smaller monthly payments over a period of up to 60 months. Interest and any applicable penalties continue to accrue until the balance is paid in full. To secure an IA, applicants must submit a completed application and pay the applicable user fee. There are three levels of fees applicable to new IAs: (1) $105 for most taxpayers, (2) $52 for taxpayers entering into direct debit agreements, and (3) $43 for taxpayers who qualify for the low-income rate (i.e., taxpayers whose income falls below 250 percent of the poverty level). Taxpayers who request reinstatement of an IA must pay a fee of $45. Implementing regulations for the IA fee are at 26 C.F.R. §§ 300.1 and 300.2. Offer in compromise (OIC): An OIC is an agreement between IRS and a taxpayer to settle a tax liability for payment of less than the full amount owed.
When submitting an offer for consideration, taxpayers must make a partial payment unless a waiver applies. Taxpayers may pay the offer amount in a lump sum, in which case 20 percent of the offer amount is due when submitting the offer, or in installment payments, in which case the first installment payment is due with the offer (unless the taxpayers qualify for the low-income waiver, in which case the 20 percent or installment payment is not applicable). If IRS accepts the offer, the user fee is applied toward the remaining balance due. Implementing regulations for the OIC fee are at 26 C.F.R. § 300.3. Pre-filing agreement (PFA): Taxpayers are subject to the $50,000 user fee only if they are selected to participate in the PFA program. If taxpayers are selected, IRS will examine specific issues relating to their tax returns before they are filed. The objective of the PFA program is to resolve issues that are likely to be disputed in post-filing audits. Revenue Procedure 2009-14 provides guidance on the PFA program. Preparer tax identification number (PTIN): All paid tax return preparers, including attorneys, certified public accountants, and enrolled agents, are required to apply for a PTIN before preparing any federal tax returns in 2011 and thereafter. Individuals who apply for a PTIN must pay an annual fee of $64.25. Of that amount, IRS collects $50 for associated PTIN activities, and the remaining $14.25 is transferred to a third-party vendor responsible for maintaining the registration system. Implementing regulations for the PTIN fee are at 26 C.F.R. § 300.9. Special enrollment examination (SEE): Prior to enrolling as an enrolled agent or enrolled retirement plan agent, individuals must either pass a three-part exam or possess the years of past IRS service and technical experience specified in Circular 230 to practice before IRS. IRS charges $11 per part to oversee the examination. Implementing regulations for the SEE fee are at 26 C.F.R. § 300.4.
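The three-tier IA fee schedule described earlier reduces to a simple selection rule. The sketch below encodes it in Python; the function and parameter names are illustrative rather than IRS terminology, and the order in which the tiers are checked is an assumption for illustration:

```python
def ia_user_fee(direct_debit: bool = False, low_income: bool = False,
                reinstatement: bool = False) -> int:
    """Return the applicable installment agreement user fee, in dollars.

    Rates are those stated in the text; the checking order
    (reinstatement, then low income, then direct debit) is an
    illustrative assumption.
    """
    if reinstatement:
        return 45   # reinstating an existing installment agreement
    if low_income:
        return 43   # income below 250 percent of the poverty level
    if direct_debit:
        return 52   # payments drawn directly from a bank account
    return 105      # standard rate for most taxpayers
```

For example, a taxpayer entering a direct debit agreement would pay `ia_user_fee(direct_debit=True)`, or $52.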
Tax Exempt and Government Entities letter rulings and determination letters: The Tax Exempt and Government Entities Division issues letter rulings and determination letters for a variety of activities, including but not limited to employee plans and exempt organizations. Depending on the request, fees can range from $200 to $25,000. U.S. residency certification: Income tax treaties between the United States and foreign countries can reduce withholding rates for certain types of income paid to U.S. residents. In such cases, U.S. treaty partners may require taxpayers to provide a letter of U.S. residency certification for purposes of claiming benefits under an income tax treaty or exemption from a value-added tax imposed by the foreign country. Applicants requesting such certifications must submit to IRS a $35 fee for the first 20 certifications and $5 for each additional request of 20 certifications thereafter.

In addition to the contact named above, Jay McTigue (Assistant Director), Elizabeth Fan, and Susan Sato made key contributions to this report. Doreen Eng, Edward Nannenhorn, Jacqueline Nowicki, Chelsa Gurkin, Melanie Papasian, Neil Pinney, and Andrew J. Stephens also provided valuable assistance.
The President's fiscal year 2012 budget proposal requests $13.6 billion to fund the Internal Revenue Service (IRS), including $204 million in spending funded through user fee collections. Well-designed and well-implemented user fees can reduce taxpayer burden by funding portions of IRS services that provide special benefits to users beyond what is normally provided to the public. As such, GAO was asked to (1) describe the types and amounts of IRS user fees and how IRS collects and uses them, (2) assess how IRS sets and reviews existing user fees, and (3) assess how IRS identifies additional areas where new fees could be justified. GAO reviewed relevant laws, guidance, and literature on user fee design and implementation. GAO reviewed IRS documents and cost estimates and interviewed IRS officials in the Chief Financial Officer's (CFO) office and program divisions. Although user fee collections fund less than 2 percent of IRS's budget, fee collections are expected to reach $309 million in fiscal year 2012 and recently involved nearly 20 million transactions with taxpayers. IRS charges user fees for various activities that include assisting taxpayers in complying with their tax liabilities, clarifying the application of the tax code to particular circumstances, and ensuring the quality of paid preparers of tax returns, among others. In fiscal year 2010, two fees accounted for more than 80 percent of total retained user fee collections: (1) Installment agreements (IA): Provide taxpayers who cannot pay their full tax liability the option to pay it with smaller monthly payments over a period of up to 60 months. This service is offered to most taxpayers for $105, with lower rates for low-income taxpayers and those who opt for a direct debit agreement. This service generated $198 million, or 68 percent of IRS's total retained user fee collections.
(2) Income verification express services (IVES): Provides 2-business-day processing and electronic delivery of tax return transcripts for users, such as mortgage lenders and other financial market entities, to confirm the income of a borrower (taxpayer) during the processing of a loan application. IRS charged a fee of $2.25 for each IVES transcript request. This service generated $40 million, or 14 percent of total retained fee collections. IRS conducts a review of its user fees biennially (or every 2 years) and has taken steps to improve its estimates of the cost of providing its user fee services, with particular attention to its largest fee. For example, IRS hired a contractor to evaluate, update, and simplify the IA user fee cost estimate. However, for a few of IRS's smaller fee programs, GAO found that IRS omitted fees from its biennial review, did not clearly document assumptions to be used in some cost estimates, and lacked documentation of factors considered in setting some fees. For example, one user fee has not been reviewed or updated since fiscal year 1996, while program managers for another user fee were uncertain about salary assumptions. GAO also found that while officials stated that they consider factors other than cost (such as potential effects on taxpayer compliance and administrative burden) in setting fee rates, they did not thoroughly document these factors or corroborate anecdotal support with analysis. Finally, GAO found that IRS did not fully document final decisions made on fee rates as a result of its biennial review. IRS has implemented several new user fees in recent years, but it may not be taking full advantage of its process for identifying new user fees. As directed by Office of Management and Budget Circular A-25, IRS's CFO requests that each division review its programs and provide proposals for new user fees on a biennial basis. 
However, officials in some divisions or offices said that they had no formal solicitation process for employees to suggest new user fee proposals. Further, IRS does not clearly refer staff to consider established guidelines identified in the Internal Revenue Manual or other resources when identifying potential new user fees during its biennial review. GAO was unable to determine the extent to which IRS staff considered these guidelines. GAO recommends that the Commissioner of Internal Revenue take steps to include certain additional fees in IRS's biennial review, improve guidance on estimating costs of user fees, and improve the documentation of assumptions used, factors considered, and decisions made during the setting and reviewing of existing fees. In addition, steps should be taken to provide clear, specific, and direct guidelines for IRS employees and managers to follow in identifying potential new fee opportunities. In written comments, IRS agreed with our recommendations.
The U.S. Census Bureau performs large surveys and censuses that provide statistics about the American people and the U.S. economy. The business activities of the bureau can be divided into four categories: decennial and other periodic census programs, demographic programs, economic programs, and reimbursable work programs that are conducted mainly for other federal agencies. During fiscal year 2000, the bureau conducted the actual decennial count of U.S. population and housing as of April 1, 2000, which is its largest and most complex activity. The results of the 2000 decennial census are used to apportion seats in the U.S. House of Representatives, draw congressional and state legislative districts, and form the basis for the distribution of an estimated $200 billion annually of federal program funds over the next decade to state and local governments. The bureau receives two appropriations from the Congress: (1) salaries and expenses and (2) periodic censuses and programs. The salaries and expenses appropriation provides 1-year funding for a broad range of economic, demographic, and social statistics. The periodic censuses and programs appropriation provides no-year funding to plan, conduct, and analyze the decennial censuses every decade and other authorized periodic activities. The 2000 decennial census covers a 13-year period of effort from fiscal years 1991 through 2003 at an estimated cost of $6.5 billion. The bureau prepared its fiscal year 2000 budget request for the 2000 decennial census in eight broad “frameworks” of effort that were submitted to the Office of Management and Budget (OMB) and the Congress. As part of the Department of Commerce and Related Agencies Appropriation Act, 2000, the appropriation for Periodic Censuses and Programs earmarked $4.476 billion by framework for the bureau and the Census Monitoring Board to remain available until expended. 
While these amounts were earmarked for the frameworks, section 205 of the act authorized the Department of Commerce to transfer amounts not to exceed 5 percent of any appropriation to another appropriation, but no appropriation could be increased by more than 10 percent. Section 205 also required the Department of Commerce to comply with the procedures set forth for reprogramming under section 605 of the act. For management, program, financial, staffing, and performance purposes, the bureau further divided the eight frameworks into 23 activities and, within these activities, into 119 projects. Bureau financial management reports provided appropriated amounts, expended and obligated amounts, and variances to a project level. The President’s fiscal year 2000 budget, which was submitted by OMB to the Congress on February 1, 1999, included nearly $2.8 billion for the bureau to perform the 2000 decennial census. This original budget request reflected the bureau’s plan to gather information based in part on statistical estimation for nonresponding households and to adjust for undercounting and other coverage errors. However, only a few days earlier, on January 25, 1999, the Supreme Court, in Department of Commerce v. U.S. House of Representatives (525 U.S. 316), held that the Census Act prohibited the bureau from using statistical sampling for purposes of congressional apportionment. Because the original budget request was submitted with the plans for statistical sampling, the bureau had to amend its plans in accordance with the Supreme Court’s decision. According to the bureau, the effect of this decision was that an estimated 12 million additional nonresponding households would require visits by enumerators. In addition, various programs designed to address undercounting and other coverage errors would have to be expanded.
The bureau also planned to visit about 4 million additional addresses for which the Postal Service had returned the questionnaires because it believed the housing units to be vacant or nonexistent. In light of the need for additional enumeration and other programs to ensure accuracy, OMB requested and the Congress approved an additional $1.7 billion above the bureau’s original budget request of $2.8 billion. The bureau’s total fiscal year 2000 appropriation of about $4.5 billion for the 2000 decennial census was within the periodic censuses and programs account. The preparation of annual financial statements by federal agencies and their subsequent audit is intended to provide complete, reliable, timely, and consistent financial information for use by agency management and the Congress in the financing, management, and evaluation of federal programs. The Department of Commerce is required to prepare annual consolidated financial statements and have them audited under the mandate of the Chief Financial Officers Act of 1990, as expanded by the Government Management Reform Act of 1994. The Commerce IG is responsible for the financial audit but may use its contract authority to hire an independent accounting firm to perform the audit. Commerce consists of 13 bureaus, including the U.S. Census Bureau. The Commerce IG contracted with several certified public accounting firms to conduct financial audits of the various bureaus for fiscal year 2000. OMB provides implementing guidance for agencies, including guidance on preparing and auditing financial statements. This guidance requires that financial statements be prepared in accordance with U.S. generally accepted accounting principles (GAAP) and that the audit of such financial statements be conducted in accordance with generally accepted government auditing standards (GAGAS).
GAGAS require the auditor to obtain an understanding of internal controls to plan the audit and determine the nature, timing, and extent of tests to be performed and to report deficiencies considered to be reportable conditions as defined in the auditing standards. Some reportable conditions are material weaknesses, and all reportable conditions are presented in a report on internal controls. Lesser internal control or operational matters are usually reported in separate management letters. Criteria for internal controls are contained in GAO’s Standards for Internal Control in the Federal Government. The Federal Financial Management Improvement Act of 1996 (FFMIA) requires that agencies implement and maintain financial management systems that substantially comply with federal financial management systems requirements, applicable GAAP, and the U.S. Standard General Ledger (SGL) at the transaction level. As part of the annual financial statement audit, FFMIA requires the auditors to report any instances in which they noted that the agency’s financial management systems did not substantially comply with FFMIA requirements. Of the $4.5 billion appropriated to the U.S. Census Bureau for fiscal year 2000, lower expenditures and obligations than planned resulted in available balances of at least $415 million. These no-year funds remain available until expended, and the Department of Commerce has the authority to transfer amounts to other programs. By April 2001, $360 million of this amount was made available for fiscal year 2001 bureau programs. We identified the remaining $55 million that represented amounts obligated for contracts for which activity had been completed. These funds potentially can be deobligated and made available for other programs. Additional amounts may be identified for potential deobligation as the bureau examines about $90 million of undelivered order balances under $1 million.
On September 27, 2000, the bureau reported to the Congress that it had “at least $305 million of budget savings” out of its $4.5 billion fiscal year 2000 appropriation for the 2000 decennial census. Based in part upon our discussions with the Chairman and staff of the Subcommittee on the Census, House Committee on Government Reform, and staff of the Senate and House Committees on Appropriations, and as authorized by law, unobligated fiscal year 2000 appropriations were made available for other census programs. By April 2001, $360 million of unobligated fiscal year 2000 appropriations were made available for fiscal year 2001 bureau programs in two phases. In December 2000, based upon information the bureau provided to the House Committee on Appropriations, $300 million of unobligated balances from prior years were used to offset the amount needed for the bureau’s fiscal year 2001 appropriation. From this amount, $260 million was used to fund the decennial census program and $40 million was used to fund other periodic census programs for fiscal year 2001. On March 27, 2001, the bureau identified an additional $60 million of unobligated balances from fiscal year 2000 funds. On April 11, 2001, $56 million was used to fund the decennial census program and $4 million was used to fund other periodic census programs for fiscal year 2001. From our test of all bureau undelivered order balances over $1 million, which totaled $367 million as of September 30, 2000, we identified potential deobligations of $55 million. This resulted from contracts for which activity had been completed and obligated balances were not needed to close out the contracts. Thus, amounts can be made available for other purposes. An agency should identify and deobligate funds no longer needed after the agency is certain no related costs remain. 
Table 1 shows the contractor, the applicable framework number within the bureau’s eight frameworks of effort, and the potential amount available for deobligation. The contracts we identified as of September 30, 2000, were multiple-phase or task contracts with future completion dates stretching to December 31, 2003. Bureau officials stated that the bureau’s Finance Division conducted a quarterly review of contracts for deobligation. We found these reviews to be ineffective because we noted several contracts for which the bureau was not promptly deobligating amounts associated with completed contracts. In our view, some contracts should have been deobligated as of September 30, 2000, while other contracts were completed and amounts were available for potential deobligation by the time we completed our review in June 2001. Specifically, we found the following. Lockheed Martin provided services in three phases that included (1) the architectural design, testing, and implementation of a data capture system, (2) deployment of this system to data capture centers, and (3) preparation of images for transfer to long-term storage. Services for phase one were completed on September 30, 1998, phase two by February 28, 2001, and phase three is to be completed by December 31, 2003. Most of the remaining funding of $16 million related to phase two was no longer needed and was available for potential deobligation by the time we completed our review in June 2001. TRW Incorporated provided services to design and staff the data capture centers. The last of these services were completed in June 2001, and sufficient amounts were obligated to close out the contract. Remaining obligated amounts of $13 million were no longer needed and were available for potential deobligation by June 30, 2001. The General Services Administration (GSA) rented building space and provided other services.
One GSA communication service agreement for over $1 million for fiscal year 2000 was never performed under the planned project. Also, about $5 million obligated for rent and another $6 million obligated for services were not needed after the contract’s period of performance ended on December 31, 1999. This resulted in total funding of $12 million that was no longer needed for the purpose intended. Since the period of performance had ended and there was no further billing on these contracts after September 30, 2000, we believe that this $12 million should have been deobligated as of September 30, 2000. Young & Rubicam provided advertising to promote the 2000 census and encourage people to complete and return the census forms and cooperate with enumerator follow-up. The contract also required a follow-up study on the effect of the advertising that was completed by February 2001. About $7 million remained obligated for the advertising campaign that was no longer needed after May 31, 2000. We believe that this amount should have been deobligated as of September 30, 2000. Electronic Data Systems Corporation staffed the telephone questionnaire assistance center and provided other services with a contract through May 31, 2001. The assistance center was shut down on August 13, 2000. Other services were completed by May 31, 2001, and sufficient amounts remained obligated to close out the contract. The remaining $6 million was available for potential deobligation as of May 31, 2001. UNISYS Corporation provided telecommunications equipment and other services for the local census offices with a contract through October 31, 2000. The last of these offices was closed down in early November 2000, and sufficient amounts remained obligated to close out the contracts. We believe that the remaining unused contract funding of $1 million could have been deobligated as of September 30, 2000, during the subsequent year-end closing process. 
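As a cross-check, the contract amounts itemized above sum to the $55 million cited earlier as available for potential deobligation. The short sketch below (figures in millions of dollars, taken directly from the text) illustrates the tally:

```python
# Potential deobligations identified in the review, in millions of dollars.
deobligations = {
    "Lockheed Martin": 16,   # phase-two data capture funding no longer needed
    "TRW": 13,               # data capture center contract closed out
    "GSA": 12,               # unperformed agreement plus unneeded rent and services
    "Young & Rubicam": 7,    # advertising funds unneeded after May 31, 2000
    "EDS": 6,                # telephone questionnaire assistance contract
    "UNISYS": 1,             # local census office telecommunications contract
}
total = sum(deobligations.values())  # 55, matching the $55 million cited
```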
Additional amounts may be identified for deobligation as the bureau closes out about $90 million of undelivered order balances under $1 million that we did not review. Historically, workload and enumerator productivity have been two of the largest drivers of census costs, and the bureau developed its budget for the 2000 decennial census using a model that contained key assumptions about these two variables. The largest cause of the fiscal year 2000 available balances was a lower support staff workload than planned. This resulted in about $348 million of lower salary and benefit costs for over 11,000 fewer support staff than planned. The lower support workload also reduced infrastructure costs for temporary office space rental, equipment and supply costs, and contractual services that resulted in further available balances of $167 million. However, these available balances were partially offset by about $100 million of higher salary and benefit costs than planned for enumerators, including a higher workload for unanticipated recounts. The U.S. Census Bureau prepared its fiscal year 2000 plan for the 2000 decennial census in eight frameworks of effort as shown in table 2. As indicated above, framework 3, Field Data Collection and Support Systems, was the largest effort of the 2000 census, amounting to about 75 percent of both appropriated funds and expended and obligated funds and accounting for 82 percent of total budget variances. Significant reasons for the larger budget variances are discussed in the following sections. A lower support staff workload than planned in framework 3 resulted in a significant budget variance of about $348 million in salary and benefit costs as follows. The bureau recognized $309 million for lower local and regional census office support staff salaries and benefits. The bureau planned about 30,167 full-time equivalent (FTE) support staff for 520 local and 12 regional census offices. 
However, according to bureau records, only 18,787 FTEs, or about 62 percent, were actually employed, resulting in over 11,000 fewer FTEs than planned. The bureau recognized $39 million primarily for lower staff salaries and benefit costs from an activity known as Accuracy and Coverage Evaluation (ACE). ACE interviewed more people than planned by telephone, thus reducing the need for household visits by enumerators. Another $167 million of budget variances were primarily due to lower infrastructure and other costs. This included temporary office space rental, local travel reimbursement, equipment and supply costs, and contractual services that were contained mostly in frameworks 3 and 5. Projects with significant budget variances (over $10 million) included the following. The variance for data capture systems, building rent, advertising, telecommunications, and other contractual services was about $55 million. This amount in table 1 was previously discussed as undelivered orders available for potential deobligation. The variance for regional and local census offices in framework 3 was about $36 million. This was the result of lower support staffing levels and resulted in lower temporary office space and equipment costs than planned. The variance for telecommunications in framework 3 was about $31 million. This was due to a renegotiated 90-percent reduction in long-distance phone costs, from a planned 10 cents a minute to an actual 1 cent a minute. The variance for National Processing Center (NPC) data capture operations in framework 5 was about $22 million. This was because there were fewer forms to process than planned and data entry productivity was higher. About 5 million fewer mail-back, enumerator, and group quarters forms were processed, and actual data entry productivity of 6,500 keystrokes per hour exceeded the 5,200 keystrokes per hour planned. The variance for telephone questionnaire assistance in framework 3 was about $14 million.
This was due to lower contract costs in running the program because inbound calls of 6 million were 45 percent lower than the 11 million calls planned. Enumerator workload is largely determined by the initial mail response rate for returned census questionnaires. By April 27, 2000, when the bureau prepared its assignment lists for the start of follow-up operations, the response rate was 64 percent, 3 percentage points higher than the 61-percent rate estimated by the bureau. Considering that a 1-percent change in the response rate represents about 1.2 million households, the 3-percentage-point difference was significant since over 3 million households would not require follow-up visits by enumerators. Although the additional forms were received after the start of nonresponse follow-up, they served as a cross-check to information gathered by enumerators. The bureau attributed the higher response rate, in part, to a professional advertising campaign that it conducted under framework 8, which urged people to return the questionnaires and cooperate with census enumerators. However, budget variances from the higher mail response rate and the lower support staff workload were partially offset by over $100 million of higher salary and benefit costs than planned for enumerators in framework 3, including a higher workload for unanticipated recounts. Enumerator efforts and costs for fiscal year 2000 are presented in table 3. Enumerator Visits. Enumerators visited almost 42 million American households during the 10-week nonresponse follow-up period that ended July 2, 2000. This effort resulted in about $87 million of higher enumerator salaries and benefits in framework 3 than planned. This occurred because the bureau was concerned about high staff turnover and having a sufficient number of enumerators in a tight labor market to complete nonresponse follow-up in a shortened 10-week period, compared to the 14-week period for the 1990 census.
To address this issue, the bureau adopted a front-loaded staffing strategy, which resulted in the bureau hiring and training more temporary personnel up front to reduce the 150-percent staff turnover estimated for the 2000 census. This staffing strategy increased salary and benefit costs of temporary personnel, as well as training, quality assurance, and supervisory staffing costs. Also, the bureau hired almost 3,300 more FTE enumerators and paid them almost $1 an hour more than the average $11.77 an hour that was planned. In addition, enumerator workload increased due to unanticipated recounts of almost 800,000 incomplete, lost, or inaccurate questionnaires as follows. Over 600,000 returned questionnaires were incomplete because they did not indicate the number of persons living in the household, and the households had to be visited. About 122,000 questionnaires were lost between completion by local census offices and processing by the data capture centers, and enumerators had to revisit each household and complete another questionnaire. About 68,000 questionnaires were recounted in three Florida local census offices because serious errors and irregularities were detected, including enumerator falsification of census data. In September 2000, the Commerce IG conducted an investigation and issued a report indicating that the offices had not followed proper procedures and quality controls, and concurred with the bureau’s response to recount the areas in question. Enumerator Assignment, Control, and Coverage Improvement. This effort resulted in about $99 million of higher salary and benefit costs in framework 3 for nonresponse follow-up than planned. This included assigning and scheduling enumerators, preparing information packages for household visits, and verifying addresses.
These higher costs included the reprocessing of the almost 800,000 incomplete, lost, and inaccurate questionnaires that were recounted by enumerators, as well as an unanticipated effort to verify 1.5 million new addresses as vacant. Other Enumerator Workload. This effort offset the above budget variances by about $86 million in framework 3 due primarily to the following. Service-based enumeration was conducted in soup kitchens, homeless shelters, and areas frequented by persons with no fixed addresses. This effort resulted in about $32 million of budget variances. According to the bureau, about 69,000 sites were estimated for this project based upon national and local data. However, only about 14,000 actual sites, or about 20 percent of the planned workload, were actually identified and counted. According to the bureau, this occurred because the majority of sites estimated did not exist based upon enumerator visits. Group quarters enumeration was conducted in places like prisons, nursing homes, military barracks, and school dormitories. This effort resulted in about $31 million of budget variances. According to the bureau, about 519,000 units were estimated for this project. However, only about 172,000 actual units, or one-third of the expected amount, were actually identified and counted. According to the bureau, this occurred because addresses for estimated sites erroneously included businesses, commercial establishments, and duplications, or simply did not exist. List enumeration was conducted in areas where households do not receive direct mail delivery, such as rural areas where mail is sent to a local post office box. This effort resulted in about $8 million of budget variances. According to the bureau, about 500,000 housing units were estimated for this project. However, only about 372,000 actual units, or about 75 percent of the planned workload, were actually identified and counted. According to the bureau, this occurred due to inaccurate address lists. 
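The workload and variance figures in the preceding paragraphs can be recomputed directly. The sketch below is an illustrative check using the numbers stated in the text, not the bureau's own methodology:

```python
# Net fiscal year 2000 available balance, in millions of dollars: lower
# support staff costs plus lower infrastructure costs, offset by higher
# enumerator costs (all figures are approximate amounts from the text).
available_balance = 348 + 167 - 100  # matches the $415 million figure

# A 1-percent change in the mail response rate represents about
# 1.2 million households; the actual 64 percent rate exceeded the
# bureau's 61 percent estimate by 3 percentage points.
households_spared = (64 - 61) * 1_200_000  # 3.6 million, "over 3 million"

def pct_of_planned(actual: int, planned: int) -> int:
    """Actual workload as a whole percent of the planned workload."""
    return round(100 * actual / planned)

service_based = pct_of_planned(14_000, 69_000)     # ~20 percent of planned sites
group_quarters = pct_of_planned(172_000, 519_000)  # ~33 percent, or one-third
list_enum = pct_of_planned(372_000, 500_000)       # ~74 percent, "about 75"
```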
Further details and explanations for 27 project budget variances of $5 million or more are presented in appendix II. These 27 variances represented over 90 percent of the budget variance of $415 million for fiscal year 2000. Historically, enumerator productivity has been a factor affecting decennial census costs. Although the bureau has been trying to improve productivity, data from past decennial censuses has been largely unavailable, incomplete, or not comparable. According to recent bureau data, enumerator productivity did not have a significant impact on budget variances for the 2000 decennial census because the actual national average time to visit a household and complete a census questionnaire was about the 1 hour estimated by the bureau. Information on enumerator productivity rates by type of local census office, the bureau’s methodology for refining the productivity data, and lessons learned to assist the planning effort for the 2010 census is presented in a recent GAO report. For fiscal year 2000, the U.S. Census Bureau had significant internal control weaknesses that resulted in an inability to develop and report complete, accurate, and timely information for management decision-making. This was due to specific internal control weaknesses as well as the bureau’s overall internal control environment being assessed as high risk by its independent auditor. The bureau’s control environment was characterized by human capital weaknesses, including the lack of experienced accounting staff, which contributed to heavy reliance upon contractors. In addition, management oversight was not sufficient to ensure adherence to established policies and procedures, which created opportunities for inconsistencies and errors, particularly in year-end closing procedures and financial statement preparation.
The specific control weaknesses for fiscal year 2000 were related to the lack of controls over financial reporting and financial management systems. Financial reporting issues included (1) the inability to produce accurate and timely financial statements and other financial management reports needed for oversight and day-to-day management, (2) the lack of timely and complete reconciliations needed to validate the balances of key accounts, and (3) unsupported and inaccurate reported balances for accounts payable and undelivered orders, two key accounts needed to manage and report on unliquidated obligations. For financial management systems, the bureau has experienced persistent financial management systems problems for many years and has candidly acknowledged the material nonconformance of its financial systems in annual reports required by FMFIA. Despite these reports, the bureau has asserted in its fiscal years 1999 and 2000 financial statement reports that its financial management systems were in substantial compliance with the provisions of FFMIA. In our view, the bureau’s financial management systems did not substantially comply with the requirements of FFMIA as of September 30, 2000. Internal controls are a major part of managing an organization to meet mission goals, support performance measures, and safeguard assets. GAO’s Standards for Internal Control in the Federal Government emphasize that a positive internal control environment provides discipline, structure, and a climate that forms a foundation for effective internal controls. We concurred with an assessment of the bureau’s overall internal control environment by its auditor as high risk for fiscal year 2000. This assessment was contained in the auditor’s work papers, which cited a human capital issue in the bureau’s lack of experienced accounting staff, heavy reliance upon contractors, and insufficient management oversight and review. 
The auditor reported and we observed during our work that the bureau did not have a sufficient number of experienced accountants familiar with the financial statement process; relied extensively on contractors to reconcile its accounts, close its books after millions of dollars of year-end adjustments, and prepare its annual financial statements and related disclosures; and did not ensure that account reconciliations were properly reviewed by bureau management. Further, the auditor noted that the bureau’s policies and procedures were not consistently adhered to, which created opportunities for inconsistencies and errors, particularly in year-end closing procedures and financial statement preparation. During our work, we concurred with the auditor’s observations and noted that the overall control environment was impaired by the lack of management oversight to ensure that policies and procedures were followed. We also found that the bureau did not have an internal review function designed to assist management in identifying areas where adherence to policies and procedures could be improved. For example, the bureau established a written policy for determining proper cutoff at year-end to identify accounts payable and other liabilities when goods or services had been delivered but not yet paid. This is necessary to fairly present account balances in accordance with GAAP. However, we found that bureau accounting personnel frequently did not adhere to the policy and used the date of the invoice to determine the appropriate accounting period rather than the date that goods, and particularly services, were delivered. Accounting personnel used the wrong date even though most invoices we examined for services clearly indicated the date that services were actually received. 
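The cutoff rule in the bureau's written policy (the delivery date, not the invoice date, determines the accounting period in which a liability is recognized) can be sketched as a simple decision. The function names and the federal fiscal-year convention of October through September are illustrative assumptions, not bureau code:

```python
from datetime import date

def fiscal_year(d: date) -> int:
    """Federal fiscal year: October 1 through September 30 (illustrative)."""
    return d.year + 1 if d.month >= 10 else d.year

def accrual_period(delivered: date, invoiced: date) -> int:
    # Per the cutoff policy, the liability belongs to the period in which
    # the goods or services were delivered, regardless of the invoice date.
    return fiscal_year(delivered)

# Services delivered September 15, 2000, but invoiced October 20, 2000,
# should be accrued as an accounts payable of fiscal year 2000.
print(accrual_period(date(2000, 9, 15), date(2000, 10, 20)))  # 2000
```

Using the invoice date instead, as bureau accounting personnel frequently did, would push the liability into fiscal year 2001 and understate year-end accounts payable.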
Given the overall weaknesses in the bureau’s control environment, it is not surprising that the independent auditors and we identified significant control weaknesses in two broad categories: (1) financial reporting and (2) financial management systems. Weaknesses in the bureau’s financial reporting affect the internal and external reports produced by the bureau for both oversight and day-to-day management. These problems relate to the bureau’s ability to produce timely and accurate financial statements and other financial reports, perform reconciliations to validate accounts, and report accurate amounts for key accounts including accounts payable and undelivered orders. Difficulties in Preparing Financial Statements and Other Reports. The bureau stated in its 1999 and 2000 FMFIA reports that it did not have adequate procedures in place to produce timely or accurate financial statements and related performance data. This weakness seriously affected the bureau’s ability to conform with GAAP. In these reports, the bureau also acknowledged a need for a proper analysis of its financial reports to ensure the completeness of disclosures and the adequacy of the presentation of financial information. A contributing factor to this weakness is the human capital issue previously discussed as part of the control environment coupled with systems weaknesses as presented in the next section. The bureau’s auditor noted the following conditions with which we concur. The bureau continues to experience significant difficulties and delays in producing complete and accurate financial statements. The auditor reported this condition as a material weakness in its internal control reports on the bureau for fiscal years 1999 and 2000. Numerous technical and clerical errors were found, including inconsistencies in the form and content of the financial statements and related notes. 
Because of these difficulties, the bureau’s financial statements were manually compiled instead of being generated directly from its financial management systems. Compounding the potential for error, adjustments to the financial statements must be posted manually and each adjustment must be crosswalked to the financial statements instead of being posted directly by the financial management systems. For example, the bureau did not post a fourth quarter 2000 entry to deferred revenue and accounts receivable balances, which resulted in a material overstatement of these accounts in its draft financial statements. Further, the bureau was unable to use its financial management system to produce required Treasury reports such as the Statement of Transactions (SF-224) and Report on Budget Execution and Budgetary Resources (SF-133). The bureau prepared these reports manually, which required additional time and effort to meet due dates and increased opportunities for error. Ineffective Reconciliations. The lack of adequate account reconciliation seriously affected the ability of the bureau to prepare timely and accurate financial statements at year-end. The bureau stated in its 1999 and 2000 FMFIA reports that it did not promptly reconcile its financial information system with its subsidiary records. The bureau also reported that when reconciliations were performed, it did not sufficiently document and account for all reconciling items. According to the bureau, critical reports required to prepare the financial statements from the financial management systems were not fully developed as part of the standard reporting package. In addition, the bureau’s auditor reported and we concur with the following findings. Many key financial statement balances in the general ledger were not reconciled by the bureau to supporting subsidiary records in a timely manner through the first 6 months of fiscal year 2000. 
The auditor reported this condition as a material weakness in its internal control reports on the bureau for fiscal years 1999 and 2000. Bureau accountants who performed reconciliations were unfamiliar with the nature and details of the accounts they were reconciling and the reconciliations were not adequately supported. The auditor noted that for the last half of fiscal year 2000, the bureau resorted to hiring contractors to perform many of the monthly reconciliations. This resulted in most balances being adequately supported and reviewed by year-end. Deferred revenue and accounts receivable, however, were not properly reconciled throughout the year. The bureau’s Division of Finance did not reconcile financial transactions with information from program offices, and folders by reimbursable project did not contain adequate information on amounts collected from customers or accumulation of costs for projects. Without proper and timely accounting, collections are subject to increased risk of loss or theft. The bureau did not update cash balance reports throughout the fiscal year and did not bill a large amount of work to customers in a timely manner, impeding timely collection efforts. Additionally, accounts receivable from the public for economic and demographic data were not aged properly and the bureau did not establish an allowance for uncollectable accounts until after year-end in order to fairly present statements in accordance with GAAP. The bureau also did not reconcile its intragovernmental balances with its trading partner agencies other than the Department of Commerce, at least annually, to comply with the provisions of OMB Bulletin 97-01. The bureau experienced difficulties in producing intragovernmental account information and, as a result, reports had to be prepared manually, were incomplete, contained errors, and were submitted late. 
In addition, the bureau’s core financial management system was not designed to identify separately amounts that should be eliminated for intragovernmental purposes in accordance with GAAP. Accounts Payable and Undelivered Order Weaknesses and Errors. As part of our analysis of bureau budget variances discussed earlier in this report, we focused on the timely and accurate reporting of these two accounts. Controls over unliquidated obligations, which include accounts payable and undelivered orders, are an important part of the structured process needed to reconcile and deobligate funds in a timely manner. The bureau disclosed in its 1999 and 2000 FMFIA reports that it lacked adequate support for its accounts payable and undelivered orders balances. Further, both the bureau and its auditor have reported significant weaknesses in account reconciliation, as previously discussed, including accounts payable and undelivered orders. For example, we noted that the bureau had not promptly reconciled its subsidiary records with its general ledger control account for undelivered orders as of September 30, 2000. Adjustments for this reconciliation were not recorded until almost 7 months later on April 28, 2001. Further, subsidiary records of accounts payable and undelivered orders by vendor were not included as part of the standard system reports, hampering efforts to appropriately manage these accounts. Our review and testing of key internal controls for these accounts found conditions to be much worse than reported by the bureau or its auditor. The weaknesses we identified included (1) erroneous information that had not been promptly corrected and (2) ineffective controls to accrue liabilities when goods or services have been delivered but not paid. These weaknesses contributed to additional errors in accounts payable and undelivered order balances as of September 30, 2000. 
Specifically, the effect of the errors we identified as of September 30, 2000, was a 10-percent overstatement of the audited $457 million undelivered order balance and a 20-percent understatement of the audited $134 million accounts payable balance. While these errors were evidence of weak internal controls and were significant to the undelivered orders and accounts payable line items, these errors alone were not material to the U.S. Census Bureau’s $1.2 billion fiscal year 2000 balance sheet and other financial statements. The primary cause of these problems was erroneous information in subsidiary records for undelivered orders that was not corrected in a timely manner as noted in the following examples. Year-end adjustments of $65 million had to be manually posted, which did not occur until 7 months later on April 28, 2001. Undelivered orders subsidiary records as of September 30, 2000, contained $46 million for a contract that had been liquidated in February 1999. This error was finally corrected as part of the above $65 million of year-end adjustments to reconcile a subsidiary report to the general ledger. Subsidiary records for three undelivered orders balances totaling $5 million were duplicated. This duplication occurred because the bureau changed the document type code of these undelivered orders but did not remove the old document type code from the subsidiary records. Undelivered orders for services for three contracts totaling $20 million (previously discussed as part of our $55 million of proposed deobligations) had been completed as of September 30, 2000, and could have been deobligated. Undelivered orders also included $27 million of goods and services that had been delivered prior to September 30, 2000. The effect of these deliveries is a decrease in undelivered order balances and a corresponding increase in accounts payable balances as of September 30, 2000. 
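The misstatement percentages cited above can be checked against the audited balances with simple arithmetic. A sketch using the dollar figures from this report, in millions:

```python
# Audited balances and identified errors as of September 30, 2000,
# in millions of dollars, taken from this report.
audited_udo = 457          # undelivered orders
audited_ap = 134           # accounts payable
delivered_not_accrued = 27 # delivered before year-end but never accrued

# Goods and services delivered before year-end shift out of undelivered
# orders and into accounts payable, so the same $27 million that
# overstates undelivered orders also understates accounts payable.
ap_understatement_pct = round(delivered_not_accrued / audited_ap * 100)
print(ap_understatement_pct)  # 20, the understatement percentage cited

# A 10-percent overstatement of the $457 million undelivered order
# balance corresponds to roughly $46 million of errors.
udo_overstatement = round(0.10 * audited_udo)
print(udo_overstatement)  # 46
```

The check confirms that the $27 million of unaccrued deliveries alone accounts for the 20-percent understatement of accounts payable.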
We identified about $26 million by reviewing 95 subsequent disbursements over $500,000 from October 1, 2000, through January 31, 2001. The remaining $1 million was identified from a further examination of contracts, interagency agreements, purchase orders, invoices, and payments for account balances over $1 million through June 4, 2001. The U.S. Census Bureau’s financial management systems encompass the software, hardware, personnel, manual and automated processes, procedures, internal controls, and data necessary to carry out financial management functions, manage financial operations, and report financial status. Weaknesses in the bureau’s financial management systems affect the completeness, accuracy, and timeliness of data needed for oversight and informed management decisions. The bureau has experienced persistent financial management systems problems for many years and has candidly acknowledged the material nonconformance of its financial systems in annual FMFIA reports. The bureau’s fiscal years 1999 and 2000 FMFIA reports stated that further work was required to improve routine bureau reporting and systems controls. Extensive systems weaknesses and errors were found by the bureau and its auditors, and by us during our work on certain fiscal year 2000 bureau account balances. In addition, Commerce officials acknowledged significant fiscal year 2001 efforts to correct existing weaknesses in the bureau’s core financial management system. We do not agree with the bureau’s assertion in its annual financial report for fiscal year 2000 that its financial management systems substantially comply with FFMIA requirements. FFMIA was intended to advance federal financial management by ensuring that federal financial management systems can and do routinely provide reliable, timely, and consistent disclosure of financial data. 
FFMIA requires that agencies implement and maintain systems that substantially comply with federal financial management systems requirements, as contained in OMB Circular A-127, Financial Management Systems, the Joint Federal Management Improvement Program’s (JFMIP) Framework for Federal Financial Management Systems, and JFMIP’s Core Financial System Requirements; applicable federal accounting standards; and the U.S. Standard General Ledger (SGL) at the transaction level. In its fiscal year 1999 and 2000 reports of compliance with laws and regulations, the bureau’s auditing firm stated that the results of its tests disclosed no instances in which the bureau’s financial management systems did not substantially comply with FFMIA. However, in its fiscal year 1998 report on compliance with laws and regulations, the bureau’s previous auditing firm stated that the bureau did not substantially comply with FFMIA, and many of the same weaknesses this firm reported have continued. In our view, the bureau’s financial management systems did not substantially comply with the three requirements of FFMIA as of September 30, 2000, as discussed below. First, we believe that the bureau’s systems did not substantially comply with the federal financial management systems requirements. This is because the systems could not provide reliable and timely financial information to manage current government operations and prepare financial reports. The systems weaknesses were the result of a number of factors, including data quality and systems design issues. For example, the bureau’s auditors reported the following weaknesses in the bureau’s core financial management system, the Commerce Administrative Management System (CAMS). For fiscal year 1999, the auditors reported a material weakness in the ability of CAMS to produce necessary reports routinely and on time to meet internal and audit requirements. 
For example, CAMS was unable to prepare usable undelivered order and accounts payable subsidiary reports. In addition, the auditors reported a material weakness in certain information system controls, including the lack of a security plan for CAMS and needed improvements in the operating system that supported CAMS. For fiscal year 2000, the auditors determined that the bureau had made some systems improvements in CAMS and concluded that the reporting and controls were a reportable condition rather than a material internal control weakness. For example, the auditors reported in fiscal year 2000 that CAMS could not distinguish adjustments that occur between preliminary and final balances. In addition, reports required by Treasury, such as the SF-224, Statement of Transactions, and the SF-133, Report on Budget Execution and Budgetary Resources, were not supported by CAMS and had to be manually prepared. Further, the auditors continued to have concerns about weaknesses in bureau security planning, management, and access controls. A penetration test conducted by the auditors indicated that bureau systems and data were vulnerable to unauthorized access, although no actual instances of unauthorized access were detected. GAO’s Standards for Internal Control in the Federal Government highlight the need for adequate control over automated information systems to ensure protection from inappropriate access and unauthorized use by hackers and other trespassers or inappropriate use by agency personnel. We concurred with these auditor reports and, in conducting our test work of accounts payable and undelivered orders, identified several other serious systems deficiencies as follows. For fiscal year 2000, CAMS could not produce subsidiary records of accounts payable and undelivered orders by vendor to support its general ledger balances. 
Bureau officials stated that CAMS is a transaction-based system that has accumulated about 18 million records since it was initially installed on October 1, 1997. These records represent transactions for invoices, payments, and adjustments that would have to be sorted by vendor codes in order to obtain detail balances by vendor. Even if this were done, bureau officials expect their efforts to be hampered by thousands of balances of less than $1 because of small differences created by estimating and rounding of invoices and payments. CAMS had no archive capability to store millions of completed transactions to reduce volume and processing time and highlight errors, duplicates, and larger balances. According to Commerce officials responsible for CAMS, the archiving capability has not been a priority since new and faster computer hardware in fiscal year 2001 has reduced processing time for accounts payable and undelivered order subsidiary transaction reports from 22 hours to 2 hours. However, these subsidiary reports still remain unusable as a day-to-day reporting and control tool because they continue to include a large volume of completed transactions and are not sorted by vendor. Because CAMS subsidiary records for payables, receivables, property, and undelivered orders were not integrated with the general ledger, extensive manual reconciliation and workarounds were required, which are time-consuming and error prone. As a result of these deficiencies, the bureau took 3 months to provide us with an electronic file of undelivered order transactions that reconciled to the CAMS general ledger before intra-agency elimination entries. Even after this lengthy delay in obtaining the file, we still had to sort the transactions by vendor to obtain the September 30, 2000, undelivered order balance by vendor in order to conduct our testing. Further, the bureau was unable to provide a usable accounts payable listing by vendor as of September 30, 2000. 
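Producing vendor-level subsidiary balances from a transaction-based ledger, the task described above, is essentially a group-and-sum over vendor codes, with fully liquidated items archived and sub-dollar rounding residuals set aside. A minimal sketch of that idea; the record layout and vendor codes are assumptions for illustration, not the CAMS schema:

```python
from collections import defaultdict

# Illustrative transaction records: (vendor_code, amount). Positive
# amounts obligate funds; negatives reflect invoices, payments, and
# adjustments against the obligation.
transactions = [
    ("V001", 1_000_000), ("V001", -999_999.40),   # rounding residual under $1
    ("V002", 250_000), ("V002", -250_000),        # fully liquidated
    ("V003", 46_000),                             # open balance
]

# Group and sum by vendor code to obtain detail balances by vendor.
balances = defaultdict(float)
for vendor, amount in transactions:
    balances[vendor] += amount

# Balances under $1 reflect estimating and rounding differences, not
# real open orders; zero balances are completed items that could be
# archived to reduce report volume.
open_balances = {v: b for v, b in balances.items() if abs(b) >= 1}
archivable = [v for v, b in balances.items() if b == 0]
print(open_balances, archivable)
```

On this sample, only the $46,000 order remains as a usable open balance; the liquidated item is archivable, and the 60-cent residual is screened out rather than cluttering the subsidiary report.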
Second, we believe that the bureau’s financial management systems for fiscal year 2000 did not produce information that substantially complied with applicable federal accounting standards. This occurred because systems weaknesses affected the ability of the bureau to prepare its fiscal year 2000 financial statements and reports in accordance with GAAP. According to the January 4, 2001, OMB guidance for determining compliance with FFMIA, indicators of noncompliance with accounting standards would include the material weakness reported by the auditors on the bureau’s difficulties and delays in producing complete and accurate financial statements and related disclosures. The material adjustments necessary to fairly present statements at year-end and the significant errors we found in accounts payable and undelivered order balances are evidence of serious weaknesses that impede compliance with GAAP. Finally, we believe that the bureau’s financial management systems did not substantially comply with the last requirement of FFMIA regarding the SGL at the transaction level. This is because the bureau’s subsidiary feeder systems do not interface with the CAMS core financial system general ledger for timely posting to the SGL at the transaction level. We noted that for fiscal year 2000, CAMS did not support reconciliation of SGL control accounts to their respective subsidiary records, such as the Undelivered Orders Report (FM 109) to the CAMS control accounts. Further, CAMS could not provide a usable accounts payable listing by vendor as of September 30, 2000, that would indicate the SGL accounts at the transaction level. 
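Reconciling an SGL control account to its subsidiary records, the step CAMS did not support, amounts to comparing the control balance with the sum of the detail and documenting any difference as a reconciling item. A minimal sketch; the figures are hypothetical and chosen only to show the mechanics:

```python
# Hypothetical general ledger control balance for undelivered orders
# versus the sum of the supporting subsidiary detail records.
control_balance = 504_000_000
subsidiary_detail = [450_000_000, 46_000_000, 5_000_000]  # by document

difference = control_balance - sum(subsidiary_detail)
if difference == 0:
    print("control account reconciles to subsidiary records")
else:
    # Any unexplained difference must be documented as a reconciling
    # item and corrected promptly, not months after year-end.
    print(f"unreconciled difference: ${difference:,}")
```

Because CAMS could not generate the subsidiary side of this comparison routinely, the bureau had to perform the step manually, which is where the 7-month delays described earlier arose.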
According to Commerce officials, a number of improvements to the CAMS core financial system were made during fiscal year 2001, including improved capability to routinely generate usable detail subsidiary reports for advances, accounts receivable, accounts payable, undelivered orders, and unfilled customer orders; a new process to test closing entries and run trial balances, enter audit adjustments, and generate postclosing balances; certification tests of software that examined 176 functional areas and identified 9 exceptions for a compliance rate of 95 percent; and cleanup of large volumes of unmatched transactions due to missing or erroneous vendor, customer, or document numbers from faulty feeder systems to improve reports needed for the annual audit process. We did not assess these reported improvements. The fiscal year 2001 financial closing and independent audit of the bureau will help determine the effectiveness of these actions. Further, Commerce officials stated that standard interfaces for accounts receivable and payable, which will allow source feeder system data to be posted at the transaction level to the CAMS general ledger more timely and accurately, are being built and are expected to be available by July 2002. As agreed, we also obtained financial system information and reviewed financial policies on other selected financial areas at the U.S. Census Bureau. These areas, presented in appendix III, were personnel and benefit expenditures, fund balance with Treasury, property and equipment, and government credit cards. The following are two areas of concern. For personnel expenditures, some salary amounts were charged to the wrong projects due to errors in time charge coding. This distorted variances when compared to planned amounts and created inaccurate measures of performance for selected projects. Projects affected were remote Alaska enumeration and advance visits for service-based enumeration. 
These errors occurred because bureau supervisors and timekeepers did not closely review project codes used by employees on timesheets. For property and equipment, over $158 million, about 57 percent, of total undepreciated accountable property in the bureau’s records was not reported on the bureau’s balance sheet as of September 30, 2000. Federal accounting standards provide that property with a useful life of 2 years or more be capitalized but allow each agency to establish its own dollar threshold for capitalization. Any amounts under that limit would be expensed. However, a limit that is too high understates the reported amount of property and equipment possessed by the bureau. For fiscal years 1996 and prior, the bureau capitalized property and equipment with an acquisition cost of $5,000 or more; it changed to a $25,000 limit for new property acquired in fiscal year 1997 and subsequent years. We recommend that the Secretary of Commerce ensure that the U.S. Census Bureau take the following actions: deobligate at least $55 million for contracts we identified for which work has been completed and amounts are not needed to close out contracts; review the remaining $90 million of undelivered order balances as of September 30, 2000, to identify and deobligate amounts not needed for those orders; instruct accounting personnel to follow the written policy for establishing accruals and proper cutoff for goods and services received at year-end; post accounting adjustments to subsidiary records in a timely manner; complete efforts to modify the bureau’s financial systems to produce usable accounts payable and undelivered order subsidiary reports by vendor, close out thousands of completed transactions with small balances, and archive all completed transactions; amend policies and procedures to require supervisors to closely review employee time charges and project codes to more accurately reflect project costs for salaries and benefits; and reconsider the bureau’s property 
and equipment capitalization threshold, as the current policy did not recognize about 57 percent of the bureau’s total gross accountable property and equipment as of September 30, 2000. In commenting on a draft of this report, the Department of Commerce, U.S. Census Bureau, agreed with five of our seven recommendations and made a number of comments on our specific findings. The most significant of the bureau’s specific comments relates to our finding that the bureau did not comply with FFMIA requirements for the year ended September 30, 2000. This section of the report addresses the bureau’s disagreement with the two recommendations and our conclusion regarding compliance with FFMIA. The bureau’s remaining specific comments are addressed in appendix IV. In general, the bureau stated that it is important to underscore its overarching success in managing the $4.5 billion budget appropriated for the 2000 census and that any analysis of the bureau’s financial management system should acknowledge this significant achievement. However, the objectives of our budget review did not include assessing the efficiency of expenditures and obligations against planned budget appropriations. Rather, the objective of our review was to analyze budget variances reported by the U.S. Census Bureau and to identify other potential variances for the 2000 decennial census, including the reasons for these variances. Further, the bureau is still assessing the efficiency of the 2000 census in its postenumeration review, which will not be completed until fiscal year 2003. As stated in the introduction to this report, this product is one of several we will be issuing in the coming months on lessons learned from the 2000 census. As was done after the 1990 census, we are currently reviewing key operations of the 2000 census. 
With regard to the two recommendations with which the bureau did not agree, the first related to our call for the bureau to deobligate at least $55 million for contracts for which the work had been completed and no further amounts were needed. The bureau stated that the $55 million should not have been deobligated as of September 30, 2000, because final closeout, which may include late cost determinations, often occurs well after the period of performance has expired. However, the bureau indicated its agreement with the estimated amounts to be deobligated, stating that it had included an estimated $28 million in its fiscal year 2001 budget for prior-year recoveries and that the remaining $27 million would be used to partially offset its fiscal year 2002 appropriation. The bureau’s primary concern was the timing of when these funds would be available. As indicated in the body of this report, three contracts had amounts totaling $20 million that should have been deobligated as of September 30, 2000, while three other contracts were completed and had amounts totaling $35 million that were available for potential deobligation by the time we completed our review in June 2001. As stated in our report, we recognize that an agency should identify and deobligate funds no longer needed after the agency is certain no related costs remain to be paid. Thus, the report takes into account the bureau’s point that contract closeout may take some time beyond the end of the fiscal year. The second recommendation with which the bureau disagreed related to the bureau’s property and equipment capitalization threshold. The bureau stated that the Department of Commerce had issued a capitalization threshold of $25,000 for individual purchases, which Census then adopted. We did not evaluate the appropriateness of the $25,000 threshold for the Department of Commerce as a whole and agree that the amount of the bureau’s net property and equipment appears to be insignificant at that level. 
However, we continue to recommend that the threshold be considered and evaluated separately for the U.S. Census Bureau in light of the fact that the higher threshold resulted in eliminating 57 percent of the bureau’s accountable property and equipment from its balance sheet, which is significant to the bureau. Finally, the primary finding with which the bureau disagreed in its specific comments was our assessment that the bureau’s financial management systems did not substantially comply with the requirements of FFMIA as of September 30, 2000. The bureau stated that its financial management system complied with all FFMIA requirements and identified three issues as key support for its position. We disagree with each of these points and continue to believe that the bureau’s financial management systems were not FFMIA compliant. First, the bureau indicated that its financial systems were able to support a $4.5 billion budget, and bureau managers used financial management reports to meet all statutory deadlines and complete the 2000 operations on time and under budget. As stated previously, the objectives of this report did not include a qualitative analysis of the bureau’s performance in conducting the 2000 census. However, our report does describe several instances of noncompliance with FFMIA requirements. For example, as stated in the body of this report, the bureau’s auditors reported as a material internal control weakness that reports required by Treasury such as the SF-224, Statement of Transactions, and the SF-133, Report on Budget Execution and Budgetary Resources, were not supported by the bureau’s financial management systems and had to be manually prepared. 
While the bureau’s response cited OMB’s January 4, 2001, guidance on FFMIA implementation in support of its position, the bureau did not include a key sentence from the guidance, which states, “Auditors then need to use judgement in assessing whether the adverse impacts caused by identified deficiencies are instances of substantial noncompliance with FFMIA.” In our view, the above material weakness was clearly evidence of substantial noncompliance as of September 30, 2000. Second, the bureau stated that CAMS met 95 percent of JFMIP core requirements and used standard general ledger accounts as required. Also, although the bureau agreed that manual processes are required, it cited OMB guidance that states systems need not be entirely automated to be FFMIA compliant. Our report acknowledges that, according to bureau officials, a number of improvements were made to the CAMS core financial system subsequent to September 30, 2000. Although we did not assess these reported subsequent improvements, their description, number, and significance supports our belief that CAMS did not comply with FFMIA as of September 30, 2000. Further, Commerce officials cited additional work to be done such as building standard interfaces for accounts receivable and payable, which are not expected to be available until July 2002. In addition, we agree that the use of manual processes does not necessarily indicate FFMIA noncompliance. However, the deficiency we pointed out was the CAMS lack of integrated subsidiary records with the general ledger, requiring extensive manual reconciliation and workarounds, which are time consuming and error prone. At the transaction level, we reported problems with timely posting to the SGL and the reconciliation of SGL control accounts to their respective subsidiary records, as well as the inability to provide a usable accounts payable listing by vendor. 
Third, according to the bureau, its financial management systems substantially comply with federal accounting standards because the errors and year-end adjustments we identified in accounts payable and undelivered orders were not “material enough” to warrant a finding of noncompliance. Our report contains several indicators of noncompliance with accounting standards, which are consistent with OMB guidance on this issue. Specifically, evidence of serious weaknesses that impede compliance with accounting standards includes (1) the material weakness reported by its auditor on the bureau’s difficulties and delays in producing complete and accurate financial statements and related disclosures, (2) the material adjustments necessary to fairly present statements at year-end, and (3) the significant errors we found in accounts payable and undelivered order balances. We are sending copies of this report to the Chairman and Ranking Minority Member, Senate Committee on Governmental Affairs, and the Chairman and Ranking Minority Member, Committee on Government Reform. We are also sending copies to the Acting Director, U.S. Census Bureau; the Secretary and Inspector General of the Department of Commerce; the Director of the Office of Management and Budget; the Secretary of the Department of the Treasury; and other interested parties. This report will also be available on GAO’s home page at http://www.gao.gov. If you or your staffs have any questions on this report, please contact me at (202) 512-9095 or by e-mail at [email protected] or Roger R. Stoltz, Assistant Director, at (202) 512-9408 or by e-mail at [email protected]. Key contributors to this report are listed in appendix V. The objectives of our work were to (1) analyze budget variances reported by the U.S.
Census Bureau and to identify other potential variances for the 2000 decennial census, including the reasons for these variances, and (2) review key financial internal controls of the bureau and report on any weaknesses noted in selected financial areas. We did not assess the efficiency of expenditures and obligations against planned budget appropriations. To fulfill the objective on budget variances, we obtained and reviewed bureau documents to support the reported savings, analyzed financial variances, and interviewed bureau personnel for explanations of variances down to the project level. We reconciled supporting balances of undelivered orders by vendor subsidiary accounts to the general ledger balance and examined supporting contracts, interagency agreements, purchase orders, invoices, and subsequent payments on all vendor balances of $1 million or more as of September 30, 2000, through June 4, 2001. To determine the validity of undelivered orders, we examined supporting documentation and discussed their status with bureau officials. The 70 vendor balances of $1 million or more that we tested constituted about 73 percent of the total bureau undelivered orders of $504 million as of September 30, 2000. Interagency adjustments for the bureau’s working capital fund of $33 million and year-end closing adjustments of $14 million reduced undelivered orders to $457 million in the bureau’s financial statements as of September 30, 2000. We did not test about $90 million representing over 7,300 balances summarized by vendor and over 43,000 individual transactions under $1 million of undelivered orders as of September 30, 2000. Additionally, the scope of our work did not include determining whether the use of contractors was appropriate for the involved activities.
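As a simple cross-check, the dollar amounts cited above tie together with basic arithmetic. The sketch below is purely illustrative (the variable names are ours, not the bureau’s):

```python
# Illustrative cross-check of the undelivered orders figures stated above.
# All amounts are in millions of dollars, as of September 30, 2000.
total_undelivered_orders = 504      # total bureau undelivered orders
working_capital_adjustments = 33    # interagency adjustments, working capital fund
year_end_closing_adjustments = 14   # year-end closing adjustments

# The two adjustments reduced undelivered orders to the amount reported
# in the bureau's financial statements.
reported_balance = (total_undelivered_orders
                    - working_capital_adjustments
                    - year_end_closing_adjustments)
print(reported_balance)  # 457, matching the $457 million reported

# The 70 vendor balances of $1 million or more that were tested covered
# about 73 percent of the $504 million total.
tested_dollars = round(0.73 * total_undelivered_orders)
print(tested_dollars)  # about $368 million tested
```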
We also tested for unrecorded liabilities as of September 30, 2000, by examining bureau disbursements over $500,000 from October 1, 2000, through January 31, 2001, to determine when goods or services had been received and to identify potential unrecorded liabilities for the 2000 decennial census. We did not audit the bureau’s fiscal year 2000 financial statements and therefore we do not express an opinion on them. We also obtained but did not audit preliminary enumerator productivity data provided by the bureau. Productivity issues were recently addressed in a separate GAO report. To fulfill the objective on key financial internal controls, we read bureau reports, performed the undelivered order balance and unrecorded liability tests discussed above, and interviewed bureau officials. For selected financial areas, we also obtained bureau system information and read policies and procedures. For fiscal years 1999 and 2000, we reviewed FMFIA weaknesses reported by the bureau to the Department of Commerce for inclusion in the Department’s annual accountability reports. We also reviewed the bureau’s fiscal year 1998, 1999, and 2000 audited financial statement reports and two fiscal year 2000 management letters and noted internal control weaknesses reported by the bureau’s independent auditors. We also identified internal control weaknesses as a result of our testing and observations at the bureau. In addition, we reviewed financial system information and financial policies and interviewed bureau officials on fund balance with Treasury, property and equipment, salaries and benefits, and government small purchase and travel credit cards, but did not audit this information. We also met with Commerce IG officials and representatives of the independent auditing firm to discuss the fiscal year 2000 audit of the bureau’s financial statements.
We reviewed the auditing firm’s working papers in selected areas of accounts payable, other liabilities, undelivered orders, fund balance with Treasury, and property and equipment as of September 30, 2000, and personnel salaries and benefits for fiscal year 2000. We also reviewed the auditor’s audit approach, interim work, and compliance work on FFMIA. We performed our work at bureau headquarters in Suitland, Maryland, and in the office of the bureau’s fiscal year 2000 auditing firm in Washington, D.C. Our work was performed from December 2000 to June 2001 in accordance with U.S. generally accepted government auditing standards. On December 7, 2001, we received comments on a draft of this report from the Department of Commerce, U.S. Census Bureau. These comments are presented in the “Agency Comments and Our Evaluation” section of this report and are reprinted in appendix IV. Based on analysis and interviews with U.S. Census Bureau officials, the following are bureau explanations for fiscal year 2000 project variances over $5 million (by reference number noted in table 4). 1. According to the bureau, half of the positive variance of $6 million for project 6301 for definition of geographic area was caused by overestimating the number of geographic staff required to carry out the participant statistical area program. The remaining half of the positive variance is attributed by the bureau to fewer geographic clerks in the National Processing Center (NPC) for the Boundary and Annexation Survey mailing and response processing. This resulted in lower salary and benefit costs for about 224 FTEs. 2. The negative variance of $20 million for project 6331 for nonresponse follow-up assignment control was caused by an underestimate of funds for salary, travel, and other costs. 
According to the bureau, the underestimate was caused by an unanticipated activity of rescheduling and assigning enumerators to recount about 122,000 questionnaires that were lost between the local census offices and the data capture centers. 3. The negative variance of $26 million for project 6333 for nonresponse follow-up assignment preparation was because of an underestimate of funds for salary, travel, and other costs. According to the bureau, this underestimate was caused by the unplanned activity of preparing information packages for enumerators to recount over 600,000 cases in which questionnaires did not indicate the number of persons living in the household. 4. The positive variance of $8 million for project 6335 for the Be Counted and Questionnaire Assistance Center programs was caused by the lower workload that resulted in salary, travel, and other cost savings. According to the bureau, the 886,000 actual addresses processed were 10 percent less than the 980,000 addresses planned for the program. The bureau believed this occurred because it had a better address list than originally anticipated so people did not need to rely on the Be Counted forms available at walk-in centers, such as community centers, churches, libraries, post offices, and other public facilities. Also, many of the Be Counted forms received were not used as households were already counted as a part of the regular mail out and mail back process. 5. A negative variance of $53 million occurred for project 6336 for coverage improvement follow-up that resulted in higher salary, travel, and other costs. According to the bureau, the variance was because of the unanticipated need to verify 1.5 million new addresses as vacant and to identify about 68,000 households to be recounted by enumerators for one area because the accuracy of the enumeration was in question. 6.
A positive variance of $31 million occurred for project 6337 for group quarters enumeration in places like prisons, nursing homes, and school dormitories. According to the bureau, the variance was because of a two-thirds lower workload of 172,000 actual units rather than the 519,000 units planned, which resulted in salary, travel, and other cost savings. The bureau believes this occurred because some of the addresses it planned for group quarters enumeration were actually businesses, commercial establishments, duplicates, or nonexistent. 7. A positive variance of $8 million occurred for project 6338 for list enumeration in areas where residences do not receive mail delivery, such as rural areas where mail is sent to post office boxes. According to the bureau, the variance occurred because of a 25-percent lower workload of 372,000 actual units rather than the 500,000 units planned, which resulted in salary, travel, and other cost savings. The bureau believes this occurred because of inaccurate address lists that determined planned amounts. 8. The negative variance of $87 million for project 6339 for nonresponse follow-up enumeration was caused by higher costs than planned for enumerator salary and benefit costs and a higher workload for unanticipated recounts. According to bureau data, enumerators were paid almost $1 an hour more than the average $11.77 an hour planned in order to recruit a sufficient number of quality temporary workers in a tight labor market. In addition, almost 3,300 more FTE enumerators were hired than planned. 
This occurred because of the bureau’s front-loaded staffing strategy that anticipated a 150-percent turnover of enumerators, coupled with a higher workload for unexpected household visits to recount almost 800,000 incomplete, lost, or inaccurate questionnaires: over 600,000 cases in which returned questionnaires did not indicate the number of persons living in the household (preparation costs are item #3); 122,000 questionnaires that were lost between completion by local census offices and processing by the data capture centers (assignment costs are item #2); and 68,000 questionnaires that were recounted because the accuracy of the enumeration was in question (identification costs are item #5). 9. The positive variance of $7 million for project 6341 for remote Alaska enumeration was because, according to the bureau, salary and other costs were erroneously charged to project 6516 for enumerating special populations. (See #19.) 10. A positive variance of $32 million occurred in project 6342 for service-based enumeration in places like soup kitchens, homeless shelters, and areas frequented by persons with no fixed addresses. According to the bureau, the variance was caused by an 80-percent lower workload of 14,000 actual sites versus the 69,000 sites planned, which resulted in salary, travel, and other cost savings. The bureau believes that the majority of the planned sites did not exist and were added to address lists because of inaccurate national and local data. Also, bureau officials stated that some costs of advance visits were erroneously charged to project 6516 for enumerating special populations. (See #19.) 11. The positive variance of $95 million for project 6447 for data collection A (inner city support staff in 102 local census offices) was caused by lower support workloads than planned, and the higher than expected mail response rate was a contributing factor.
According to the bureau, this resulted in cost efficiencies in logistical support salaries of 3,578 FTEs compared to the 6,416 FTEs planned, for savings of 44 percent. 12. The positive variance of $55 million for project 6448 for data collection B (suburb support staff in 51 local census offices) was caused by lower support workloads than planned, and the higher than expected mail response rate was a contributing factor. According to the bureau, this resulted in cost efficiencies in logistical support salaries of 1,692 FTEs compared to the 3,221 FTEs planned, for savings of 47 percent. 13. The positive variance of $134 million for project 6449 for data collection C (small town support staff in 316 local census offices) was caused by lower support workloads than planned, and the higher than expected mail response rate was a contributing factor. According to the bureau, this resulted in cost efficiencies in logistical support salaries of 10,723 FTEs compared to the 16,480 FTEs planned, for savings of 35 percent. 14. The positive variance of $14 million for project 6450 for data collection D (rural support staff in 42 local census offices) was caused by lower support workloads than planned, and the higher than expected mail response rate was a contributing factor. According to the bureau, this resulted in cost efficiencies in logistical support salaries of 1,633 FTEs compared to the 2,532 FTEs planned, for savings of 35 percent. 15. According to the bureau, the negative variance of $7 million for project 6455 for kit preparation was caused by higher than planned staff overtime and priority shipping costs to ensure delivery of materials and supplies to the 520 temporary local census offices to begin enumeration. 16. The positive variance of $11 million for project 6470 for regional direction and control was caused by lower support staff workloads than planned in 12 regional census offices, and the higher than expected mail response rate was a contributing factor.
According to the bureau, this resulted in cost efficiencies in logistical support salaries of 1,161 FTEs compared to the 1,518 FTEs planned, for savings of 24 percent. 17. The positive variance of $39 million for project 6480 for ACE collection activity was caused by a lower than planned workload of cases requiring personal visits. According to the bureau, this resulted in lower data collection costs for staff salaries and benefits of $33 million, lower equipment costs of $4 million, and lower office rental costs of $2 million. The purpose of ACE was to estimate the population for purposes such as redistricting by sampling households from an address list developed independently of the address list used to enumerate the census. The results of ACE (formerly Integrated Coverage Management) were to have been statistically combined with the results of enumeration to form a single, integrated set of population counts. 18. According to the bureau, the positive variance of $36 million for project 6510 for regional and local census office support was caused by lower support workloads, which resulted in lower office space and equipment costs than planned. The mail response rate was a contributing factor to the lower support workloads. 19. According to the bureau, the negative variance of $18 million for project 6516 for enumerating special populations was caused by salary and other costs erroneously charged to this project from remote Alaska enumeration (see #9) and by some costs of advance visits for service-based enumeration (see #10). 20. According to the bureau, the positive variance of $31 million for project 6540 for telecommunications was the result of renegotiating a 90-percent reduction in FTS 2000 long distance phone costs from a planned 10 cents a minute to an actual 1 cent a minute. 21.
According to the bureau, the positive variance of $6 million for project 6910 for automated acquisitions was primarily the result of lower computer equipment costs and lower headquarters telephone support costs than planned. 22. According to the bureau, the positive variance of $14 million for project 6912 for telephone questionnaire assistance was caused by lower contractor costs in running the program since the number of inbound calls of 6 million was 45 percent lower than the 11 million calls planned. 23. The negative variance of $5 million for project 6406 for address list capture was caused by the higher than planned workload for support staff updating enumerator maps and the address database. According to the bureau, planned workload was 2.4 million map sheets while the actual workload was over 150 percent higher at 6.1 million map sheets because the number of new and corrected locations was higher than anticipated. 24. The positive variance of $7 million for project 6408 for non-ID processing was caused by the lower than planned workload for support staff to match and code questionnaires for the Be Counted and Telephone Assistance programs. According to the bureau, planned workload was 2.5 million addresses and actual workload was 52 percent lower at 1.2 million addresses because more information was already contained in mailed questionnaires than anticipated. 25. According to the bureau, the negative variance of $9 million for project 6918 for the DCS 2000 contract was caused by higher contractor costs than planned to scan census questionnaires in two passes. The first pass scanned six basic questions on both the short and long forms, and the second scanned the balance of long form responses. The bureau refers to this as the two-pass approach to data capture. 26. The positive variance of $22 million for project 6922 for NPC data capture operations was caused by a lower workload than planned for forms to process and higher productivity for data entry.
According to the bureau, 5 million fewer mail back, enumerator, and group quarters forms were processed, and data entry productivity of 6,500 keystrokes per hour was higher than the 5,200 keystrokes per hour planned. 27. The positive variance of $55 million for undelivered order deobligation is discussed in the body of this report. The six contracts concerned can be identified by framework but could not be identified to the project level as contracts may involve multiple projects. As agreed with our requesters, we also reviewed financial system information and financial policies and interviewed officials on other selected financial areas at the U.S. Census Bureau, but did not audit this information. These selected areas were bureau personnel and benefits expenditures, fund balance with Treasury, property and equipment, and government small purchase and travel credit cards. No significant weaknesses or areas for improvement were noted during our work in these areas, except for the two areas noted in the body of this report related to personnel expenditures and property and equipment capitalization thresholds. For the bureau’s personnel and benefit expenditures, we noted the following. This area was the largest fiscal year 2000 expenditure consisting of $2,475 million, or 58 percent, of total expenditures of $4,259 million for periodic census and programs, including decennial census. Policies and procedures existed to provide guidance and internal controls. A contractor tested the Pre-Appointment Management System (PAMS) and Automated Decennial Administrative Management System (ADAMS) temporary employee payroll system with no material exceptions noted. In March 2000, the Commerce IG conducted an evaluation of PAMS/ADAMS and found areas where software practices needed improvement but concluded that the systems should provide adequate support for the 2000 decennial census. 
As discussed in appendix II, coding errors caused some salary amounts to be erroneously charged to other projects. The bureau’s independent auditor review of internal controls and substantive tests noted no significant issues with personnel and benefits for fiscal year 2000. For the bureau’s fund balance with Treasury, we noted the following. The bureau’s balance sheet showed $1.1 billion as of September 30, 2000. Policies and procedures existed to provide guidance and internal controls. One central disbursing function at the bureau’s headquarters in Suitland, Maryland, controls payments to vendors and employees. A contractor performs monthly reconciliations of fund balances, and differences are investigated promptly by bureau financial accounting staff. The bureau’s independent auditor review of internal controls and substantive tests noted no significant issues with fund balance with Treasury for fiscal year 2000. For the bureau’s property and equipment, we noted the following. The bureau’s balance sheet showed $121 million in property and equipment acquired at a cost of $25,000 or more as of September 30, 2000. As discussed in the body of this report, this amount represented only 43 percent of total bureau gross accountable property. After accumulated depreciation, net property and equipment was valued at $47 million. Policies and procedures existed to provide guidance and internal controls. The bureau can provide detailed lists of its accountable property, including capitalized, noncapitalized, and sensitive property. The bureau conducted physical wall-to-wall inventories of all its accountable property during fiscal year 2000 that were observed by the independent auditors and the Commerce IG’s staff. A contractor performs a monthly reconciliation of the equipment included in the subsidiary property record system, the Accountable Property Management System (APMS), to the bureau’s general ledger function contained in CAMS.
In March 2000, the Commerce IG conducted a review of accountable property at the 12 regional data census centers and found areas where internal controls could be improved in the recording of property in APMS, the observation of annual physical inventories, and the documentation of transfers of property. The bureau’s independent auditor review of internal controls and substantive tests noted no significant issues with property and equipment for fiscal year 2000. For the bureau’s government credit card program for small purchase cards and employee travel cards, we noted the following. The bureau uses Citibank Visa credit cards for expenses for official government travel and for small purchases, such as supplies and services. As of June 12, 2001, the bureau had 3,950 travel credit cards outstanding. As of June 22, 2001, the bureau had 316 purchase cards outstanding. The bureau has policies and procedures regarding issuance and use of credit cards to provide guidance and internal controls and appears to be following them. Small purchase cards were issued to 79 temporary employees and were subject to the same policies and procedures regarding issuance, use, and separation as for permanent employees. The bureau had a policy not to issue government travel cards to temporary employees. Small purchase cards have 30-day spending limits ranging from $5,000 to $200,000, and travel cards have a $5,000 credit limit, but increases can be obtained as needed. The bureau has a program to monitor credit card use and delinquent payments; however, we did not assess the adequacy of this program. The bureau pays all small purchase card bills monthly so there are no delinquent accounts. Bureau employees must pay their travel card bills promptly and are reported to the bureau if over 60 days delinquent. As of June 15, 2001, Citibank reported only about $13,400 in travel card bills over 60 days delinquent, including one account for about $4,000. 
In March 2000, the Commerce IG conducted a review of small purchase credit cards and concluded that the bankcard program was well managed. The following are GAO’s comments on the letter dated December 7, 2001, from the Department of Commerce, U.S. Census Bureau. 1. See the “Agency Comments and Our Evaluation” section of this report. 2. Our report clearly lays out the source and status of the budget variances we identified. We note that the bureau’s restatement of our findings still concludes with the same dollar amounts we reported. 3. We have modified footnote 10 to add that the appropriation for Periodic Censuses and Programs established a 3-day notification period for reprogramming funds provided by the appropriation. 4. The bureau’s increased reliance on private contractors was stated factually, and the scope of our work did not include determining whether the use of contractors was appropriate for the involved activities. The human capital issue described in the body of this report reflects a concern that the bureau did not have a sufficient number of experienced accountants familiar with the financial statement process and able to effectively monitor year-end reconciliations or provide needed accounting support on a day-to-day basis. While not specifically disclosed as part of an independent auditor’s report on internal control, the assessment of the bureau’s internal control environment as high risk by its independent auditor was supported by the auditor’s working papers, which were within the scope of our work as stated in appendix I, “Objectives, Scope, and Methodology.” 5. We have modified the report to indicate that the FMFIA weakness reported by the bureau seriously affected the bureau’s ability to conform with generally accepted accounting principles. 6. The bureau’s response presented new information on system testing that was not mentioned throughout our discussions with key bureau officials, up to and including our exit conference on June 15, 2001. 
However, in our experience, system testing is usually done on downloaded offline systems data so that online data needed for management are not altered, disrupted, or lost. To delay posting $65 million of year-end adjustments for 7 months due to systems testing would, in our view, seriously hamper financial management activities. This is because, during this period, system personnel would be using automated screen inquiries for individual contracts that contain incorrect and misleading data and would have to refer to a manual tracking list of adjustments, a cumbersome and error-prone process. 7. Bureau officials told us, in the context of accounts payable and undelivered order transactions, that about 18 million records accumulated since the installation of CAMS on October 1, 1997. The bureau’s response that 100 million transaction records have accumulated since CAMS’ inception provides further evidence that the bureau should develop archive capability to store millions of completed transactions in order to reduce volume and processing time and to highlight errors, duplicates, and larger balances. Staff members who made key contributions to this report were Cindy Barnes, Linda Brigham, Amy Chang, and Peggy Smith.
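As a check on the FTE cost-efficiency percentages cited in appendix II for the data collection support projects (items 11 through 14 and 16), the savings percentages can be recomputed from the planned and actual FTE counts. The sketch below is illustrative only; the project labels are our abbreviations:

```python
# Recompute cost-efficiency savings percentages from planned and actual FTEs
# for the data collection support projects described in appendix II.
# Tuples are (actual FTEs, planned FTEs, reported savings percent).
projects = {
    "6447 data collection A": (3578, 6416, 44),
    "6448 data collection B": (1692, 3221, 47),
    "6449 data collection C": (10723, 16480, 35),
    "6450 data collection D": (1633, 2532, 35),
    "6470 regional direction": (1161, 1518, 24),
}

for name, (actual, planned, reported) in projects.items():
    computed = round((planned - actual) / planned * 100)
    # Computed values match the reported percentages to within one
    # rounding point (project 6450 computes to 36 versus the 35 reported).
    print(f"{name}: computed {computed}%, reported {reported}%")
```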
In September 2000, the U.S. Census Bureau told Congress that it had at least $305 million in budget savings out of its $4.5 billion fiscal year 2000 no-year appropriations for the 2000 decennial census. Of the $4.5 billion appropriated to the U.S. Census Bureau in fiscal year 2000, lower-than-expected expenditures and obligations resulted in available balances of at least $415 million. A lower-than-expected support staff workload reduced salary and benefit costs by about $348 million. Enumerator workload is largely determined by the initial mail response rate for returned census questionnaires. The initial mail response of 64 percent meant that Census enumerators did not have to visit more than three million American households. However, the available balances from the higher mail response rate and the lower support staff workload were partially offset by about $100 million of higher salary and benefit costs for enumerators, including a higher workload for unanticipated recounts. According to Bureau data, enumerator productivity did not significantly affect budget variances for the 2000 decennial census. The Bureau reported that the national average time to visit a household and complete a census questionnaire was about one hour, as estimated. Because of significant internal control weaknesses, the Bureau was unable to develop and report complete, accurate, and timely information for management decision making. Specific control weaknesses for fiscal year 2000 were related to the lack of controls over financial reporting and financial management systems.
Financial reporting issues included (1) the inability to produce accurate and timely financial statements and other financial management reports needed for oversight and day-to-day management; (2) the lack of timely and complete reconciliations needed to validate the balances of key accounts; and (3) unsupported and inaccurate reported balances for accounts payable and undelivered orders--two key accounts needed to manage and report on unliquidated obligations.
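The summary budget figures above can be netted with simple arithmetic. The sketch below is a partial roll-up only, not a complete reconciliation of the at-least-$415 million figure; the "other categories" remainder is derived by us, not a figure reported by the bureau:

```python
# Partial roll-up of the summary budget figures (millions of dollars).
support_staff_savings = 348   # lower-than-expected support staff workload
enumerator_cost_offset = 100  # higher enumerator salary/benefit costs and recounts

net_of_these_two = support_staff_savings - enumerator_cost_offset
print(net_of_these_two)  # 248

# The report states available balances of at least $415 million; the
# difference came from other categories (for example, contract
# deobligations and lower telecommunications costs).
available_balances = 415
remainder = available_balances - net_of_these_two
print(remainder)  # 167 (derived remainder, not a reported figure)
```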
According to IRS, during the mid- to late-1990s, abusive tax schemes reemerged across the country after last peaking in the 1980s. As previously mentioned, during hearings held in 2001, several witnesses testified about the increased promotion and use of such tax schemes. Witnesses noted that promoters’ marketing of these schemes over the Internet has given them a thriving new life by making them available to millions of taxpayers easily and inexpensively. Schemes range from simple to very complex and from clearly illegal to carefully constructed arrangements that disguise their illegality. Furthermore, users of schemes can range from those believing their position is correct to those who knowingly and willfully file incorrect tax returns. Some schemes are created by tax professionals, such as accountants, lawyers, and paid tax preparers, and others by groups and individuals. Tax schemes are offered to taxpayers through various means, including conferences or seminars, publications, advertisements, and the Internet; others are promoted by word of mouth. To determine the types of activities IRS considers to be abusive tax schemes, we interviewed cognizant IRS officials from SB/SE who have specialized expertise in abusive tax scheme issues. In addition, we obtained pertinent IRS documents related to abusive tax scheme activities, including information from IRS’s CI Internet site and the Small Business and Self-Employed Strategic Assessment Report. To determine the extent or magnitude of the abusive tax scheme issue, we obtained and reviewed estimates from IRS’s SB/SE division for the identified abusive tax scheme categories—Frivolous Returns, Frivolous Refunds, Abusive Domestic Trusts, and Offshore Schemes.
The estimates include the number of taxpayers involved in each scheme category, the amount of tax revenue protected by the identification of frivolous returns and refunds, and the amount of potential revenue lost to abusive trusts and offshore schemes. To determine the actions IRS takes to identify and deal with abusive tax scheme activities, we interviewed officials from IRS’s SB/SE division and CI. Additionally, we obtained and reviewed relevant documents related to the organizations and programs involved in addressing IRS’s abusive tax scheme activities. To determine how IRS coordinates its actions with other federal agencies, we interviewed cognizant officials within CI and at several federal agencies. The federal officials we interviewed were from the FTC, the SEC, the Financial Crimes Enforcement Network (FinCEN), DOJ, the Federal Bureau of Investigation (FBI), and the Executive Office for United States Attorneys. You also expressed interest in IRS’s coordination efforts with state agencies. We are not reporting on this topic because many of the states contacted did not consider abusive tax schemes to be a major issue for them. Due to their limited resources, states focused on noncompliance issues that were more directly related to the types of taxes they collect, such as sales and employment taxes. This review primarily focuses on abusive tax schemes that are used by individuals rather than those that are used by corporations. Although some overlap may exist between the schemes used by individuals and corporations, our focus was on schemes that are generally used by individuals to inappropriately reduce the taxes they owe or to generate refunds to which they are not properly entitled. We performed this work from July 2001 through April 2002 in accordance with generally accepted government auditing standards. Abusive tax schemes, which are generally used by individuals, fall into four major categories.
For the first two, frivolous returns and frivolous refunds, taxpayers submit either a tax return that states an argument IRS can readily identify as frivolous or a tax return with characteristics IRS has identified as reflecting a frivolous argument. For the other two, abusive domestic trusts and offshore schemes, taxpayers' returns are less likely to reveal the use of a clearly abusive tax scheme. Frivolous return schemes generally use any number of anti-tax arguments to incorrectly claim that income is exempt from taxation or that IRS otherwise lacks the authority needed to tax income. These arguments have been well litigated in the courts and consistently ruled to be without merit. Examples include the following:
- Form 2555 Scheme: In this scheme, individuals file an IRS Form 2555, Foreign Earned Income, and claim that their income was not earned within the United States. This is also known as the "not a citizen" argument, in which taxpayers file returns stating that they are citizens of the "Republic of " and not citizens of the United States and that, thus, their income is not taxable.
- Section 861: Individuals using this scheme claim that, under Internal Revenue Code Section 861, income tax must be paid only on foreign income and, therefore, their income is not subject to tax or withholding. In these cases, taxpayers file a tax return and show a zero amount for wages. According to IRS, this argument has spread to some employers, who are using it to avoid withholding and paying payroll-type taxes on their employees.
According to IRS, credit and refund abusive tax schemes are designed to substantially reduce taxes or create a refund for the taxpayer, generally by claiming eligibility for a credit that does not exist or to which the taxpayer is not properly entitled. One such scheme that has received much attention is the Slavery Reparation Refund scheme. According to IRS, promoters circulate or publish information claiming that African Americans are eligible for slavery reparations.
Taxpayers claiming this credit generally enter a significant amount on their tax return as a credit that results in a taxpayer realizing a refund if not detected by IRS. A trust is a legitimate form of ownership that completely separates asset responsibility and control from the benefits of ownership. As such, trusts are commonly used in matters such as estate planning. An abusive domestic trust scheme usually involves a taxpayer creating a trust that does not meet the Internal Revenue Code requirements that the assets and income of the trust not be subject to the control of the taxpayer. Once such an improper trust is established and the taxpayer has transferred business or personal assets to it, the scheme may involve further abuses, such as offsetting income of the trust by overstating its business expenses or including the taxpayer’s personal expenses—like a home mortgage—as an expense of the trust. The taxpayer will often use multiple entities such as partnerships, limited liability companies, or secondary level trusts that can be tiered or layered to mask the taxpayer’s continued ownership or control of the trust’s income or assets. Abuses that involve foreign locations can take a wide array of forms and attempt to use a number of techniques to improperly avoid paying taxes. One common technique is simply to use foreign locations to add another level of complexity in obscuring the true ownership of assets or income and thus obfuscating whether taxes are owed and by whom. Use of foreign locations, for instance, can be combined with use of trusts to make unraveling the true ownership of assets and income more difficult for IRS. According to IRS, criminals long have used offshore schemes to disguise the true nature of their enterprise and the resulting income. Promoters of abusive tax schemes have, according to IRS, increasingly devised schemes that in some fashion involve transferring income or title to assets to foreign locations. 
Often, foreign locations are selected because they are tax havens with little or no taxation on income in their jurisdiction, have privacy rules that help schemers hide what they are doing, or have other characteristics favorable to carrying out the schemes. According to IRS, once such transfers are established, income is often repatriated to the U.S. owners through loans, credit cards, or debit cards. By using complex transactions and multiple entities, the individuals using these schemes attempt to hide their income and avoid potential tax liabilities. Of the approximately 130 million individual income tax returns filed annually, which yield about a trillion dollars in revenue, IRS estimates that approximately 740,000 were filed by taxpayers who used abusive tax schemes in tax-year 2000. According to IRS's fiscal year 2003-2004 Small Business and Self-Employed (SB/SE) Division Strategic Assessment Report, abusive tax schemes represent a rapidly growing risk to the tax base. IRS estimates the potential revenue loss from these schemes to be in the tens of billions of dollars annually. According to an IRS official, making accurate estimates in this area of noncompliance is difficult. Despite the difficulties in accurately estimating the significance of abusive tax schemes, IRS provided us with estimates in four major scheme areas—Frivolous Returns, Frivolous Refunds, Abusive Domestic Trusts, and Offshore Schemes. According to IRS, its estimates were made in February 2002 and were derived from information gathered during tax return processing and examination activities and from the work of IRS's Criminal Investigation (CI), the law enforcement arm of IRS. According to an IRS official, these estimates were derived from tax-year 2000 information, the last full year for which data were available. IRS's estimates are as follows:
- Frivolous returns: about 62,000 taxpayers, with associated tax amounts approximating $1.8 billion.
- Frivolous refunds: about 105,000 taxpayers, with associated tax amounts approximating $3.1 billion.
- Abusive domestic trusts: about 65,000 taxpayers, with tax losses approximating $2.9 billion.
- Offshore schemes: about 505,000 taxpayers, with tax losses ranging from $20 billion to $40 billion.
IRS's estimates for the numbers of taxpayers and taxes in connection with frivolous returns and frivolous refunds, although not precise, likely have less uncertainty than its estimates of the numbers of taxpayers and taxes at risk in connection with abusive domestic trusts and offshore schemes. IRS's estimates for frivolous returns and frivolous refunds are based in large part on returns and refund claims that IRS has identified while processing tax returns and has addressed by pulling the associated returns and notifying the taxpayers that their returns contained a frivolous position that needed to be corrected by submitting a revised return. Thus, in these cases, IRS has a fairly direct basis for counting the number of taxpayers and the amount of tax involved. Furthermore, because IRS has pulled these returns from processing, in general, improper refund claims have not been paid out, and IRS is pursuing collection of the proper amount of tax when taxpayers have failed to pay the full amount owed. In contrast, although taxpayers using domestic trusts and offshore schemes may file tax returns, those returns alone seldom provide enough information for IRS to determine whether an abusive scheme was used. Therefore, IRS's estimates of the numbers of taxpayers and the taxes at risk for the domestic trust and offshore scheme categories generally rely on limited numbers of cases that have been examined or investigated, on intelligence obtained in the course of normal tax administration and CI activities, and on IRS officials' professional judgments.
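As a rough consistency check, the four category estimates can be totaled. The figures below come directly from the estimates above; the calculation is illustrative only and is not an IRS computation.

```python
# Rough consistency check of IRS's February 2002 tax-year 2000 estimates.
# All figures come from the report text; dollar amounts are in billions,
# recorded as (taxpayers, low estimate, high estimate).
estimates = {
    "frivolous returns":       (62_000,  1.8, 1.8),
    "frivolous refunds":       (105_000, 3.1, 3.1),
    "abusive domestic trusts": (65_000,  2.9, 2.9),
    "offshore schemes":        (505_000, 20.0, 40.0),
}

total_taxpayers = sum(t for t, _, _ in estimates.values())
low_total = round(sum(lo for _, lo, _ in estimates.values()), 1)
high_total = round(sum(hi for _, _, hi in estimates.values()), 1)

print(total_taxpayers)        # -> 737000
print(low_total, high_total)  # -> 27.8 47.8
```

The category totals sum to 737,000 taxpayers, consistent with the approximately 740,000 cited above, and the dollar figures sum to a range of $27.8 billion to $47.8 billion, consistent with IRS's characterization of the annual potential revenue loss as in the tens of billions of dollars.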
Recognizing that offshore transactions are a significant factor in offshore schemes, IRS has been taking steps concerning the use of credit/debit cards issued by offshore banks to U.S. taxpayers. Although having an offshore credit card is not illegal, IRS believes that some U.S. taxpayers are using such cards to evade U.S. taxes. In October 2000, a federal judge authorized IRS to serve "John Doe" summonses on American Express and MasterCard to obtain limited information on U.S. taxpayers holding credit cards issued by banks in several tax-haven countries. On the basis of information received from MasterCard, IRS identified about 235,000 accounts issued through 28 banks located in 3 countries. IRS's ongoing analysis of these data leads it to estimate that between 60,000 and 130,000 U.S. customers are associated with these 235,000 accounts. In part because MasterCard is estimated to have about 30 percent of this market, IRS estimates that there could be 1 to 2 million U.S. citizens with credit/debit cards issued by offshore banks. However, this is a very preliminary estimate. IRS officials believe this estimate may be reduced because, among other things, a portion of these accounts may not be associated with abusive tax schemes. By comparison, only about 117,000 individual taxpayers indicated that they had offshore bank accounts in tax-year 1999. On March 25, 2002, IRS petitioned for permission to serve a summons on Visa International, seeking records on transactions using cards issued by banks in over 30 tax-haven countries. According to an IRS official, a judge granted the summons on March 27, 2002. In May 2002, IRS is scheduled to meet with Visa International to discuss delivery of the information requested in the summons.
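The market-share scale-up can be illustrated with the figures reported above. This sketch shows only the naive scaling step; as noted, the 30 percent share was only part of the basis for IRS's estimate ("in part because"), so the calculation here is not IRS's actual estimating method.

```python
# Illustrative sketch of the market-share scale-up described above.
# All figures are from the report; this is not IRS's actual method,
# which relied only "in part" on the market-share figure.
mastercard_accounts = 235_000                    # accounts found via the summons
customers_low, customers_high = 60_000, 130_000  # estimated U.S. customers
mastercard_share = 0.30                          # rough market-share estimate

# Naive share scaling implies roughly this many offshore-card accounts
# industry-wide -- well below IRS's preliminary 1-to-2 million citizen
# range. The gap reflects additional judgment: the MasterCard data
# covered banks in just 3 tax-haven countries, and the account-to-
# customer relationship remains uncertain.
implied_accounts = mastercard_accounts / mastercard_share
print(round(implied_accounts))  # -> 783333
```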
In addition, on April 24, 2002, the Treasury Department's FinCEN published an interim final rule to define and provide guidance to operators of credit card systems concerning a provision in the Bank Secrecy Act that requires them to establish anti-money laundering programs. Part of the justification for the rule was the potential for using a credit card system to access, from within the United States, funds located in foreign financial institutions. FinCEN cited the successful "John Doe" summonses that IRS filed against MasterCard and American Express as support for publishing the interim rule. The Treasury Department is concerned that foreign financial institutions located in tax-haven countries are being used to violate or evade domestic tax requirements, among other things. Recognizing that, the Secretary of the Treasury submitted a report to Congress on April 26, 2002, proposing several actions to improve compliance regarding foreign bank and financial accounts. These actions are geared toward improving reporting, compliance, and enforcement efforts related to U.S. citizens and residents transacting business with foreign financial institutions. The estimates of the number of individuals and the dollar consequences associated with offshore credit/debit card schemes are very uncertain at this time. Nevertheless, IRS's February 2002 estimate of $20 billion to $40 billion in tax dollars at risk from offshore schemes may grow as IRS learns more about the extent of the problem. No one individual or office could provide an agencywide perspective on IRS's strategy, goals, objectives, performance measures, or program results for its efforts to address abusive tax schemes. Consequently, a clear and consistent picture of IRS's efforts was difficult to obtain. Available information indicates that IRS began increasing its efforts to combat abusive schemes over the past 2 or 3 years, continued to do so in 2001, and plans further efforts in the future.
Limited data also suggest that these enhanced efforts have helped IRS convict more promoters and users of abusive schemes over the past 3 years. IRS has also increased its education and publicity as a way to deter and control the use of abusive tax schemes. Organizationally, IRS identifies and deals with schemes in two primary ways—during its processing and examination of tax returns (compliance and enforcement) and through the work of CI. Therefore, most of IRS's programs to address abusive schemes are the responsibility of IRS's SB/SE division and CI. IRS has taken a number of steps to enhance its compliance and enforcement efforts—its audit and other civil enforcement activities—that focus on abusive tax schemes. In the past year, for example, IRS has increased the staff years devoted to examining abusive tax scheme promoters, decided to assign and train about 50 more agents to promoter examinations, and laid plans for assigning 200 or more additional staff to reviewing abusive tax schemes and offshore compliance schemes. Furthermore, IRS has created an organization that initially will focus on developing leads and cases related to abusive scheme promoters and that will monitor abusive promoter web sites. IRS has also recognized that its resources are stretched in improving compliance and enforcement and that these multiple efforts require organizational coordination. IRS identifies many abusive tax schemes during its normal tax return processing and examination activities. For example, when tax returns are initially processed, either manually or by computer, processes are in place to detect apparent frivolous returns or returns reflecting improper refunds. In these cases, the returns are pulled from processing and forwarded to a unit that specializes in addressing these types of returns.
Both the Wage and Investment (W&I) and SB/SE divisions in IRS process taxpayers' tax returns, and both have responsibilities for identifying tax returns that may involve abusive tax schemes. Three principal SB/SE division efforts focusing on or related to abusive tax schemes are the Frivolous Return Program, the Office of Flow-Through Entities and Abusive Tax Schemes, and the National Fraud Program. The Frivolous Return Program identifies the tax returns of individuals who assert unfounded legal or constitutional arguments and refuse to pay their taxes or to file a proper tax return. The program also identifies returns claiming frivolous refunds, such as those involving slavery reparations. Generally, IRS provides guidance to those who process tax returns to help them identify the characteristics of returns claiming such frivolous arguments or refunds. IRS has also programmed its computers to do so. The Treasury Inspector General for Tax Administration (TIGTA) helped IRS develop software programs to identify slavery reparation schemes. Since both W&I and SB/SE staff process tax returns, both divisions are involved in identifying such returns. Once identified, the returns are pulled out of the tax return processing stream and forwarded to the Frivolous Return Program unit, where they are to be resolved with the taxpayer. The program was consolidated in January 2001 at the Ogden, Utah, Compliance Services Center. The compliance center staff enter information about each case into a database and assign 1 of 31 different codes identifying the frivolous argument or refund being claimed by the taxpayer. Then a notice is to be sent advising taxpayers that IRS has judged their tax return to include an argument that is without legal merit, or a credit or tax refund to which they are not entitled, and requesting that they file a proper tax return.
IRS officials state that the number of staff assigned to the Frivolous Return Program unit in Ogden grew from 18 employees in September 2000 to 45 employees in September 2001. Some of this increase may not reflect a net IRS-wide increase in full-time equivalents (FTE) for frivolous returns, since the increase has, in part, been due to centralizing efforts in Ogden from other IRS locations. IRS officials expect to assign more employees to this program in fiscal year 2003. The Office of Flow-Through Entities and Abusive Tax Schemes became operational in January 2000. Flow-through entities include domestic trusts and offshore trusts and partnerships. These are flow-through entities because their income "flows through" to their partners or other beneficiaries, where it is subject to taxation. The office was created to organize IRS's efforts in addressing abusive tax schemes, particularly trusts, and to identify their promoters and sellers. The unit's goals are to (1) catalog and profile schemes and trends, (2) direct compliance resources to examine schemes and promoters or refer tax scheme promoters and participants for criminal prosecution, (3) increase employee knowledge and skills related to abusive tax scheme issues, and (4) enhance coordination within IRS on issues related to abusive tax schemes. IRS expects to assign and train about 50 revenue agents this fiscal year to focus mainly on promoters of abusive tax schemes. The agents are to undergo training during the summer of 2002 and to begin examining cases by the fall of 2002. According to IRS, the number of abusive promoter leads increased from 25 in March 2001 to 155 in February 2002. In addition, the number of abusive promoter cases approved for further examination increased from 17 cases to 94 cases during the same period. The time spent on these cases is also increasing.
IRS also reports that time spent on promoter examinations for fiscal year 2002 is expected to be 12.1 staff years, which is up from 4.4 and 1.2 staff years in fiscal year 2001 and fiscal year 2000, respectively. Furthermore, IRS plans additional expansion of its abusive tax scheme compliance efforts. For example, IRS expects to develop units that will include 8 to 10 agents in each of 15 locations. These units will address abusive tax schemes and flow-through entities. In addition, given the growing significance of the offshore credit/debit card schemes, IRS plans to create four special enforcement groups. Each group will be staffed by approximately 8 agents and will concentrate on these offshore schemes. This growth in staffing reflects IRS’s increased priority for these schemes. IRS officials expect that the agents assigned to these units will be redirected largely from other compliance areas. Schedule K-1 Transcription and Matching. In the spring of 2001, the transcription of Schedule K-1 information became a major responsibility of the Office of Flow-Through Entities and Abusive Tax Schemes. According to IRS, information provided on Schedule K-1s is important for determining whether recipients of flow-through income have properly reported that income on their tax returns. IRS can use transcribed data for information matching to determine whether proper reporting of income occurred. IRS believes that flow-through entities such as trusts and partnerships are increasingly being used in abusive tax schemes. IRS can also use these K-1 data in its return examination and tax collection activities to help identify abusive tax schemes. Tax-year 1995 marked the last year that Schedule K-1 information was transcribed by IRS. From 1990 through 1995, IRS transcribed approximately 5 percent to 12 percent of the Schedule K-1s received. 
After 1995, IRS did not transcribe Schedule K-1 information submitted with paper returns nor did it match the income information contained on the schedules with the information presented on individual beneficiaries' or partners' tax returns. IRS again started to transcribe tax-year 2000 K-1 information during the spring of 2001 and completed the process in December 2001. IRS officials told us that the matching of the K-1 information against individual tax returns was to begin in March 2002. IRS cites several reasons for reinstating its transcription and matching of Schedule K-1s. First, IRS has observed a significant increase in flow-through entities. The number of tax returns filed by trusts, partnerships, and S-corporations has increased by 12 percent, 33 percent, and 35 percent, respectively, over the 6-year period from fiscal years 1995 through 2000. IRS also estimates an overall increase of nearly 2 million such returns by 2009. Second, based on a small study, in January 2002, IRS estimated that between 6 percent and 15 percent of total flow-through income would not be reported on tax-year 2001 returns. IRS estimates that income of about $1 trillion was distributed to taxpayers from flow-through entities for tax-year 2000. Third, IRS expects its Schedule K-1 matching program not only to identify underreporting or nonreporting of income but also to improve taxpayer compliance. Transcription and matching of Schedule K-1 data are expected to increase accurate reporting of trust income on future tax returns just as matching of wage, interest, and other types of income has increased the accuracy of taxpayers' tax returns. As a result, the Schedule K-1 program places taxpayers who receive flow-through income on a more equal footing with taxpayers who are wage earners. Lead Development Center.
IRS has adopted a strategy of identifying promoters of tax schemes as a key to halting their promotion and to identifying those who have taken advantage of the schemes and thus likely owe taxes. The SB/SE division is currently developing plans and strategies for a Lead Development Center. The center's primary functions will be to develop case leads and assemble case information for distribution to compliance field offices for further investigation. Initially, the center will focus on abusive tax scheme promoters; over time, it will expand to perform similar functions for fraud and anti-money laundering cases. The center will also operate a computer laboratory that, among other things, is expected to monitor possible abusive promoter sites on the Internet. Specifically, the laboratory will support examiners in their case-development needs, including adequately capturing and documenting information from web sites in a manner suitable for introduction into evidence in a court proceeding. The center will also serve as a coordinating link among various IRS groups that deal with abusive tax scheme issues and with outside stakeholders such as DOJ, FTC, and others. The National Fraud Program, which operates at IRS's campuses and field offices, coordinates and provides oversight of IRS's compliance efforts to identify potential tax fraud. In addition, the program helps identify trends, disseminates that information within IRS, and acts as a liaison on fraud cases involving bankruptcy and employment and excise taxes, among other types of tax fraud. A National Fraud Program manager sets overall policy and program direction. Fraud managers are located in five area offices, and they oversee the activities of about 65 fraud referral specialists. These specialists assist other IRS revenue compliance staff in identifying cases with fraud potential, determining when indications of fraud are present, and developing potential cases.
They also review fraud cases for technical accuracy and adequacy of supporting documentation to ensure appropriate and consistent application of fraud program guidelines and requirements. Cases in which criminal activity is involved are referred to CI. According to the Commissioner of Internal Revenue, illegal tax schemes place a major demand on IRS resources. A complex illegal offshore trust case could require several times as many hours as a typical exam, and thousands of these cases are emerging. IRS is now beginning to gather data that will better enable it to estimate the magnitude and nature of the offshore credit and debit card schemes. Improved data will help IRS identify how many and what types of resources it may need to address the schemes. However, the evasive nature of these schemes may necessitate face-to-face audits in a significant portion of cases to determine whether taxes are owed and the amount owed. Even if the number of individuals involved in these schemes is a fraction of the reported estimate of 1 to 2 million, IRS's staff may be challenged to audit them while maintaining its current audit coverage. IRS's face-to-face audits have declined from nearly 400,000 in fiscal year 1999 to nearly 200,000 in fiscal year 2001. Accordingly, IRS has begun considering whether techniques other than audits could be used to resolve these cases. For example, IRS is considering options such as disclosure initiatives, settlement initiatives, and self-correction programs. These techniques will need to be tested and refined to determine which, if any, are effective.
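The audit-capacity concern can be made concrete with a hypothetical calculation. The audit and cardholder figures below come from the discussion above; the 10 percent participation fraction is purely an assumption for illustration, not an IRS figure.

```python
# Hypothetical illustration of the audit-capacity point above. Audit and
# cardholder figures come from the report; the fraction of cardholders
# assumed to involve abusive schemes is purely illustrative.
face_to_face_audits_fy2001 = 200_000   # "nearly 200,000" audits
cardholders_low_estimate = 1_000_000   # low end of the 1-to-2 million range
assumed_abusive_fraction = 0.10        # illustrative assumption, not IRS data

potential_cases = int(cardholders_low_estimate * assumed_abusive_fraction)
share_of_audit_capacity = potential_cases / face_to_face_audits_fy2001

print(potential_cases)          # -> 100000
print(share_of_audit_capacity)  # -> 0.5
```

Under these illustrative assumptions, even a modest participation rate would generate a caseload equal to half of all face-to-face audits IRS conducted in fiscal year 2001, which is why IRS is weighing alternatives to audits for resolving these cases.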
The increased scope of abusive tax schemes has also led IRS to
- develop an improved process for selecting the best cases to pursue among the many that it identifies,
- develop a new policy to govern simultaneous criminal and civil enforcement investigations of taxpayers,
- consider how to ensure that increased volumes of scheme-related tax assessments are followed up by IRS's collection function when taxpayers are unable to pay in full, and
- use its internal research group and a contractor to develop better models for identifying indicators that taxpayers may be participating in abusive tax schemes.
In addition, a significant organizational change has just been implemented in the SB/SE division that is intended to increase program oversight and coordinate programs and units dealing with abusive tax schemes and related tax fraud activities. To that end, in the past few weeks, the SB/SE division has divided its Office of Flow-Through Entities and Abusive Tax Schemes. Now, its efforts to ensure accurate reporting of income connected to flow-through entities will fall under a director for reporting compliance. IRS separated the flow-through entity effort from other abusive tax scheme efforts because it judged that the flow-through effort is more related to its traditional information-matching and examination programs than to its abusive scheme efforts. The flow-through effort will, however, also provide useful information for IRS to use elsewhere in investigations of abusive schemes. The rest of IRS's SB/SE division's major programs and efforts that are more directly focused on abusive tax schemes—the National Fraud Program, the Abusive Tax Schemes Program, the Lead Development Center, and the Anti-Money Laundering Program—have been placed under a single executive for reporting enforcement. Monitoring the Internet and other media outlets where abusive tax schemes often are advertised will also be part of this centralized effort.
IRS's Criminal Investigation (CI) investigates and pursues promoters and sellers of abusive schemes and the individuals using such schemes. CI's role is the enforcement of the tax laws for individuals who willfully fail to comply with their obligation to file and pay taxes and who ignore IRS's collection and compliance efforts. The most flagrant cases are recommended for criminal prosecution. CI also administers the Questionable Refund Program, which focuses on stopping the payment of various false tax refunds and, if warranted, on prosecuting the taxpayers involved. Furthermore, CI has developed education and publicity activities warning taxpayers about abusive tax schemes and has placed public information officers (PIOs) in the field specifically to generate publicity regarding IRS's law enforcement efforts. CI's enforcement strategy as it relates to fraudulent tax schemes is to focus primarily on the promoters of these schemes and on taxpayers who willfully use these schemes to evade taxes. For example, during a tax scheme investigation, CI generally attempts to gain access to a fraudulent promoter's list of clients to whom the promoter sold the scheme. In addition to pursuing the promoter, CI can then use the list of clients to determine who may have used the abusive scheme. CI determines which users of the abusive scheme merit investigation for possible prosecution and which merit referral to IRS operating divisions for possible compliance and civil enforcement action. Although CI has data on enforcement activity related to several types of tax scams (e.g., related to employment tax, refunds, return preparers, nonfilers, and domestic and foreign trusts), CI separately tracked its promoter efforts only for domestic and foreign trusts. (See table 1.) CI officials said that the number of full-time equivalent staff working on domestic and foreign trusts increased from 55 in fiscal year 1999 to 69 in fiscal year 2001.
Although no consistent pattern exists across all of the categories in table 1, CI has had increases in the number of convictions obtained over the 3-year period. For purposes of deterring individuals from engaging in abusive trusts, the increasing number of convictions has provided IRS an opportunity to publicize more cases in which individuals have been found guilty. Furthermore, the increases in indictments and convictions of promoters may help deter promoter activity in particular. Because the investigative and legal processes can span several years, data like those in table 1 do not show whether the cases investigated lead to prosecutions, convictions, or indictments in that same year. Furthermore, the data do not account for differences in the importance of cases, such as whether major fraudulent efforts are being successfully investigated and closed. IRS data do show that the average length of sentence for the abusive domestic and foreign trust program rose substantially, from 35 months in 1999 to 64 months in 2001. To the extent that average length of sentence relates to the severity of the crime, IRS may be making headway in pursuing key abusive trust cases. The Questionable Refund Program (QRP), administered by CI, was established in 1977. The QRP was designed to identify false returns, stop the payment of false refunds, and prosecute scheme perpetrators. Various false refund schemes are pursued under this program, including ones involving the earned income tax credit, the fuel tax credit, social security refunds, and slavery reparations. Questionable Refund Detection Teams (QRDT), located at IRS compliance service centers, conduct preliminary pre-refund reviews of questionable returns identified through manual and computerized screening techniques.
False return schemes meeting criminal prosecution criteria are referred to field offices for possible criminal investigation while returns with questionable civil issues are referred to the appropriate IRS compliance or collection group. CI’s efforts also include informing and educating the public about abusive tax schemes and publicizing the results of its enforcement activities related to such schemes. CI has been particularly active in trying to disseminate information to the public to make them aware of IRS’s activities and accomplishments in combating abusive tax schemes. In addition, CI has PIOs located across the country who work with local media to publicize IRS’s efforts and results. CI Education and Publicity Activities. CI’s education and publicity activities focus on warning taxpayers about fraudulent tax schemes so that they will not be tempted to use such schemes. CI hopes that increasing media coverage of successful tax scheme prosecutions will deter the public from participating in tax schemes because the perceived risk of detection, prosecution, and resulting penalties and sanctions will be too high. In addition, CI officials believe that publicizing the prosecutions of promoters and users of tax schemes helps assure the public that people are paying their fair share of taxes. CI posted its web page (www.ustreas.gov/irs/ci) on the Internet in September 1997. According to CI officials, over the past 2 years, the Internet site has evolved into an important tool for educating and alerting the public about tax schemes and about CI’s efforts to detect and deal with those who promote and use tax schemes. 
The Internet site provides fraud alerts warning the public of schemes where promoters are targeting unsuspecting taxpayers; information on topics including tax filing responsibilities, nonfilers, and abusive tax return preparers; summaries of cases and successful prosecutions of promoters and users of fraudulent schemes; and press releases and other IRS publications to generate a wide public distribution. Tax practitioners also are targets of CI's publicity strategy. According to CI officials, some tax practitioners are using IRS's materials directly from the Internet site to inform those clients who may believe that a given tax scheme is legal. For example, clients may ask the tax practitioner to set up a fraudulent trust to reduce their taxes, and the tax practitioner can simply print the brochure "Too Good to be True? – Trusts" from CI's Internet site to discourage the taxpayers from using such a trust. In conjunction with using the Internet site as an informational tool to educate and warn the public of frivolous schemes, CI has taken steps to increase IRS's visibility and presence on the Internet. According to CI, it has recently intensified its efforts to improve the ranking of IRS's web page through the use of "metatags," or keyword tags. By doing so, IRS seeks to have Internet users who enter various terms in available Internet search engines find IRS's web page listed near the top of displayed search results. For example, CI is planning to add tags such as "pay no tax" and "form 1040" so that entering these terms will result in CI's Internet site being listed in the displayed search results. CI is pursuing other possible strategies to ensure that CI's site rises to the top of Internet search responses. For example, CI staff has occasionally visited known promoter Internet sites to gather information on keywords used by those sites. IRS plans to incorporate those keyword tags into its Internet site.
As a result, IRS expects to increase the odds that the CI Internet site will be listed alongside Internet sites that promote questionable tax avoidance strategies. In addition, CI is working to create a web content manager position with responsibilities that include designing a strategy to maximize the potential of CI’s Internet site. The manager would be responsible for helping to integrate CI data into the pages in IRS’s Internet site that provide information to specific types of taxpayers. CI Public Information Officers. In October 2000, CI established PIOs in each of IRS’s 35 field offices. The PIOs serve as points of contact for all internal and external CI communications initiatives, including the issuing of press releases and the coordination of important law enforcement media events. Although IRS has other media relations specialists located in its field offices, their duties tend to focus on publicizing tax filing season information, including the benefits of electronic filing. CI PIOs generate publicity specific to IRS’s law enforcement activities, including the detection and prosecution of abusive tax schemes. Primary functions of the PIOs include

- establishing contacts with editors, reporters, and news directors to educate them on tax issues and provide information about IRS and CI to enable them to write in-depth articles;
- encouraging media to include more stories on the detection and prosecution of abusive tax schemes;
- getting articles included in trade and professional journals and magazines that are read frequently by professionals such as doctors, lawyers, and accountants to make them aware of abusive tax schemes;
- developing a local media strategy; and
- giving speeches and participating in a wide variety of presentations, panel discussions, and conferences with professional organizations, including the American Bar Association, the American Institute of Certified Public Accountants, and the American Medical Association, to create public awareness of CI’s activities and to provide information about fraudulent tax schemes.

Part of CI’s local media strategy involves generating a “hook” to get the stories focused more on communities. In addition, CI has employed a strategy of “bundling” news stories. For example, CI has been working fraud cases involving the restaurant industry. Once several such cases have been put together, CI bundles these stories into a single news story for possible publication in magazines and journals read by people in the restaurant industry. IRS works with various federal agencies in its efforts to identify and deal with fraudulent tax schemes. These include the FTC, the SEC, DOJ, the FBI, and the United States Attorneys Offices (USAO). In some cases, IRS’s coordination is on an informal basis, as it is with the FTC and the SEC, and involves the sharing of certain information and detection techniques. In other cases, the relationship is more formal, as with DOJ and the USAOs, which prosecute fraud and other tax-related cases with the assistance of IRS staff. IRS officials participate in various federal agency working groups, including a multiagency task force to share information, skills, and procedures for combating fraud on the Internet; an IRS and DOJ working group created to examine the use of civil injunctions against abusive promoters currently under criminal investigation; and a money-laundering-experts working group. According to the officials we interviewed, these working groups are invaluable for developing networking relationships between agencies, which facilitate information sharing among staff. IRS staff also attends quarterly meetings with staff from the FTC, SEC, and DOJ to develop joint initiatives to combat Internet fraud.
These meetings have spawned other activities for IRS staff, including FTC-sponsored training seminars and periodic visits to FTC’s Internet laboratory to discuss strategies and share information and techniques to combat Internet fraud. IRS also meets regularly with DOJ officials to discuss strategies for seeking injunctions and shutting down web sites that promote abusive tax schemes. According to a DOJ official, DOJ has received about 14 case referrals from IRS for injunctions in the past year or so. These cases have resulted in two permanent and two preliminary injunctions. The rest of the cases are in various stages of the judicial process. Furthermore, 10 of these 14 cases relate to the marketing of various types of abusive tax schemes and 4 involve shutting down web sites that promote abusive schemes. IRS has long-standing programs and related efforts aimed at detecting and dealing with abusive tax schemes, particularly those related to frivolous tax returns and fraudulent tax refund claims. Recently, IRS has begun to take a more assertive and coordinated approach to detecting and dealing with an ever-changing array of abusive tax schemes, including those involving the use of domestic and offshore trusts. In the past year, IRS has added more resources to these efforts, created new programs, and improved others, and it is reorganizing its operations. Furthermore, based on the limited data available, IRS appears to be realizing some increased success in convicting those involved in schemes, publicizing these results, and uncovering previously hidden major offshore compliance problems. Nevertheless, it is difficult to get a clear picture of all that is underway in IRS—how much is new as opposed to reemphasized or reorganized, and how the pieces combine to form a planned, coordinated effort with specific, defined outcomes. 
No central office, group, or executive could provide us with an agencywide focus or perspective on IRS’s strategy, goals, objectives, performance measures, or program results. Responsibility for the efforts was spread across various functions and groups within IRS. To some extent, this lack of clarity is not surprising given the fairly rapid and ongoing change in IRS’s efforts, the expanding scope of the problem, and the difficulty in determining the difference between what is legitimate, aggressive tax planning and an abusive tax scheme. IRS has recognized that its resources will be stretched to deal with these often complex schemes and that its multiple, enhanced efforts need to be better integrated. In an attempt to bring this integration to fruition, IRS’s SB/SE division is reorganizing to place key efforts to combat abusive schemes under one executive. A centralized focal point should enhance IRS’s ability to manage its efforts to reduce the prevalence and magnitude of abusive tax schemes. On May 16, 2002, we received written comments on a draft of this report from the Commissioner of Internal Revenue (see appendix). The commissioner agreed with the conclusions in the report. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Finance and the House Committee on Ways and Means and the Chairman of the Subcommittee on Oversight, House Committee on Ways and Means. We are also sending copies to the Secretary of the Treasury; the Commissioner of Internal Revenue; and other interested parties. We will make copies available to others on request. If you have any questions or would like additional information, please call me at (202) 512-9039 or Joseph Jozefczyk at (202) 512-9053. Key contributors to this report are Marvin McGill, Jay Pelkofer, Grace Coleman, and Kathleen Seymour.
The Internal Revenue Service (IRS) characterizes an abusive tax scheme as any plan or arrangement created and used to obtain tax benefits not allowable by law. According to IRS, abusive tax schemes fall into four categories: frivolous returns, frivolous refunds, abusive domestic trusts, and offshore schemes. IRS estimates the potential revenue loss from abusive tax schemes to be in the tens of billions of dollars annually. Developing accurate estimates is difficult because of the limited numbers of cases examined and investigated. IRS identifies and examines abusive tax scheme promoters and participants through its Small Business and Self-Employed Division and Criminal Investigation. In fiscal year 2000, IRS created a program that focuses on false and frivolous schemes. IRS has also created new offices that focus exclusively on abusive tax schemes that use legal structures like domestic and offshore trusts and partnerships. IRS coordinates with federal agencies to identify, monitor, and prosecute promoters and participants in abusive tax schemes. These activities range from sharing information and detection techniques with agencies such as the Securities and Exchange Commission and the Federal Trade Commission to assisting in the prosecution of fraud related cases with the Department of Justice. IRS participates in work groups that share information, skills, and procedures. These work groups discuss procedures for combating fraud on the Internet and the use of civil injunctions against promoters of abusive tax schemes.
The terms hurricane and typhoon are regionally specific names for a strong tropical cyclone. These storms are referred to as tropical depressions when sustained winds are less than 39 miles per hour (mph) and are named as tropical storms when sustained winds reach gale force, 39 mph, but remain below 74 mph. Hurricanes are tropical cyclones with sustained winds that exceed 74 mph, which circulate counterclockwise about their centers in the Northern Hemisphere. The Atlantic hurricane season runs from June 1 through November 30 each year. On August 13, 2004, the third storm of the 2004 hurricane season, named Hurricane Charley, made U.S. landfall as a category 4 hurricane near Charlotte Harbor on the Gulf of Mexico side of the Florida peninsula with sustained winds of 150 mph. This hurricane was the strongest hurricane to hit the United States since the category 5 Hurricane Andrew in 1992. Hurricane Charley caused catastrophic wind damage across central Florida, with nine tornadoes reported on August 13 in association with the hurricane. As indicated in figure 1, the hurricane’s path moved northeast across central Florida, with the center passing near Orlando with sustained winds of 86 mph as it moved off the northeast coast near Daytona Beach and into the Atlantic Ocean. Charley then moved north along the South Carolina coast, making U.S. landfall again at North Myrtle Beach as a category 1 hurricane with sustained winds of 75 mph. The hurricane rapidly weakened to a tropical storm over North Carolina as it moved up the Atlantic coast. Hurricane Charley produced rain, flooding, and seven tornadoes in North Carolina and Virginia, including F1 tornado damage at Kitty Hawk, North Carolina. On August 15, Charley merged with a frontal zone in southeastern Massachusetts that eventually moved into Canada. Hurricane Charley was cited in two federal disaster declarations for Florida and South Carolina. It was responsible for 33 U.S. deaths according to the Red Cross and 34 U.S. 
deaths according to NOAA’s National Climate Data Center (NCDC). The Property Claims Service and the Insurance Information Institute, both of which provide insurance damage information, produced insured property damage estimates averaging $7.2 billion, with an equal amount estimated for uninsured damages. With total damage estimated at about $15 billion, Charley became the third costliest hurricane in U.S. history through 2005. The sixth storm of the 2004 hurricane season, named Hurricane Frances, reached peak intensity as a category 4 hurricane with sustained winds of 144 mph in the Atlantic Ocean as it passed north of the U.S. Virgin Islands on August 31, 2004. Frances was downgraded to a category 2 hurricane with sustained winds of around 100 mph by the time it made U.S. landfall near Sewall’s Point, 35 miles north of West Palm Beach on the east coast of Florida, on September 5. As indicated in figure 1, Frances moved northwest across central Florida and became a tropical storm as it moved into the Gulf of Mexico near New Port Richey, Florida, on September 6. Later that day it again made U.S. landfall on the Florida panhandle and moved west into eastern Alabama and western Georgia. It weakened to a tropical depression on September 7 and moved into West Virginia early on September 9. Hurricane Frances produced gale force winds as it briefly accelerated northeast across New York and into northern New England until dissipating over the Gulf of St. Lawrence on September 10. Frances caused widespread rains and flooding over much of the eastern United States, and a total of 101 tornadoes were reported in association with this hurricane, 56 of them occurring in the Carolinas on September 7. Hurricane Frances was cited in five federal disaster declarations for Florida, Georgia, North Carolina, Pennsylvania, and South Carolina. It was responsible for 45 U.S. deaths according to the Red Cross and 38 U.S. deaths according to NOAA/NCDC. 
The American Insurance Services Group, an insurance trade association, estimated damage to insured property at $4.43 billion, with an equal amount estimated for uninsured damages. With total damage estimated at $8.9 billion, Frances became the fifth costliest hurricane in U.S. history through 2005. The ninth storm of the 2004 hurricane season, named Hurricane Ivan, reached category 5 strength three times in the course of its journey across the Atlantic into the Gulf of Mexico, with sustained winds as high as 167 mph. From September 5 to 12, Ivan caused considerable property damage and loss of life, primarily in Grenada and Jamaica, as it passed through the Caribbean Sea. Ivan had weakened to a category 3 hurricane with sustained winds of 121 mph when it made U.S. landfall just west of Gulf Shores, Alabama, on September 16. As indicated in figure 1, Ivan moved across central Alabama, where it weakened to a tropical storm, and moved northeast as far as the Delmarva Peninsula on September 18. However, even as a weak tropical storm, Ivan produced considerable rain and flooding, spawning 113 tornadoes across the southeastern United States, as depicted in figure 2. These included two F2 tornadoes in Florida that resulted in five deaths; 61 of the tornadoes were reported on September 17. Ivan then ceased its northeast movement and over the next 3 days made a large loop and moved southwest along the eastern U.S. coast, crossing Florida back into the Gulf of Mexico on September 21. There Ivan completed its loop and became a tropical storm, making landfall in southwestern Louisiana on September 24. Later that day, it dissipated over the upper Texas coast after completing a storm track more than 5,600 miles long. Hurricane Ivan was cited in nine federal disaster declarations for Alabama, Florida, Georgia, Louisiana, Mississippi, New Jersey, New York, North Carolina, and Pennsylvania. It was responsible for 63 U.S. deaths according to the Red Cross and 52 U.S. deaths according to NOAA/NCDC. 
The American Insurance Services Group estimated insured property damage at $7.1 billion, with an equal amount estimated for uninsured damages. With total damage estimated at $14.2 billion, Ivan became the fourth costliest hurricane in U.S. history through 2005. The 10th storm of the 2004 hurricane season, named Hurricane Jeanne, was a tropical storm as it moved over the U.S. Virgin Islands on September 14, 2004, and produced heavy rain over Puerto Rico on September 15, with sustained winds of 69 mph. Jeanne became a category 1 hurricane with sustained winds of 81 mph over the Dominican Republic and, on September 17, caused an estimated 3,000 deaths in Haiti through torrential rainfall, mudslides, and flooding. On the morning of September 26, Jeanne made U.S. landfall as a category 3 hurricane with sustained winds of 121 mph at Port St. Lucie on the east coast of Florida, very near where Hurricane Frances made U.S. landfall on September 5. As indicated in figure 1, Jeanne moved westward across central Florida but quickly became a tropical storm by the time it reached Tampa. It further weakened to a tropical depression as it moved northward, dumping heavy rains across central Georgia and into the Carolinas, Virginia, and the Delmarva Peninsula. On September 29, Jeanne merged with a frontal zone that dissipated eastward into the Atlantic. Hurricane Jeanne was cited in five federal disaster declarations for the U.S. Virgin Islands, Puerto Rico, Florida, Virginia, and Delaware. It was responsible for 13 U.S. deaths, according to the Red Cross, and 28 U.S. deaths, according to NOAA/NCDC. The American Insurance Services Group estimated damage to insured property at $3.44 billion, with an equal amount estimated for uninsured damages. With total damage estimated at $6.9 billion, Jeanne became the seventh costliest hurricane in U.S. history through 2005. 
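The damage totals quoted for these four hurricanes follow a simple convention: uninsured losses are assumed to roughly equal insured losses, so total damage is about double the insured-property estimate. A quick sketch using the insured estimates cited above (Charley's reported total of about $15 billion reflects additional rounding):

```python
# Insured-property damage estimates quoted in the report, in billions of dollars.
insured_estimates = {
    "Charley": 7.2,   # Property Claims Service / Insurance Information Institute average
    "Frances": 4.43,  # American Insurance Services Group
    "Ivan": 7.1,      # American Insurance Services Group
    "Jeanne": 3.44,   # American Insurance Services Group
}

# Total damage assumes uninsured losses equal to insured losses.
total_damage = {name: round(2 * insured, 1) for name, insured in insured_estimates.items()}

# Rank the storms by total damage, costliest first.
for name, total in sorted(total_damage.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: ~${total} billion total damage")
```

The doubled figures reproduce the totals in the text: $14.2 billion for Ivan, $8.9 billion for Frances, and $6.9 billion for Jeanne.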
The Red Cross activated its preparedness and disaster relief operations before Hurricane Charley made landfall, and remained on the ground providing critical emergency services through the three subsequent hurricanes. The Red Cross stated that this effort, through the 2004 hurricane season, resulted in the largest hurricane relief operation in its 123-year history. From mid-August through mid-October 2004, the Red Cross reported that it had established over 1,800 shelters that housed almost 425,000 people displaced by the hurricanes, provided over 11 million meals and snacks to hurricane victims and emergency workers, and provided more than 149,000 comfort kits and 113,000 cleanup kits. The Red Cross stated that this effort involved over 35,000 workers, of whom 90 percent were volunteers, to set up shelters, provide transportation, support the sheltering and feeding effort, and distribute supplies. As required by statute and regulation, the Red Cross receives an annual audit of its consolidated financial statements, including a schedule of expenditures of federal awards. The Red Cross is required to have its activities, including a complete, itemized report of all receipts and expenditures, audited by the Secretary of Defense through the U.S. Army Audit Agency pursuant to 36 U.S.C. 300110; Department of Defense Directive 1330.5; and Army Regulation 930-5. The Red Cross is also subject to the audit requirements of the Single Audit Act and OMB Circular No. A-133, Audits of States, Local Governments, and Non-Profit Organizations. The Red Cross contracted with an independent public accounting firm, KPMG, to conduct a financial audit of its consolidated financial statements, as well as an audit of its schedule of expenditures of federal awards, for the fiscal year ended June 30, 2005. To fulfill its audit responsibilities, avoid duplication and unnecessary expense, and make the most efficient use of available resources, the U.S. 
Army Audit Agency reviewed KPMG’s work and reports. According to its report of October 21, 2005, the U.S. Army Audit Agency found nothing during its review to indicate that KPMG’s unqualified opinion on the Red Cross’s 2005 consolidated financial statements was inappropriate or could not be relied on. For the Red Cross single audit, the Department of Health and Human Services serves as the cognizant federal agency on behalf of all participating federal agencies under the Single Audit Act audit process. The cognizant agency will review the audit report submitted by KPMG, determine if additional workpaper review is necessary, and coordinate a management decision for KPMG findings and questioned costs. The awarding federal agency has 6 months from receipt of the report to assess any audit findings and questioned costs and issue management’s decision. Known questioned costs are those specifically identified by the auditor. Likely questioned costs are projected based upon an error rate of transactions tested. Resolution of questioned costs may be by repayment; financial adjustments; or other actions, such as changing procedures. The objectives of our audit were to determine whether (1) FEMA and the Red Cross established criteria for allowable reimbursable expenses related to hurricanes Charley, Frances, Ivan, and Jeanne; (2) Red Cross reimbursements did not duplicate funding paid by other federal programs; (3) Red Cross reimbursable claims were paid only for services in states and territories declared eligible for disaster relief; and (4) Red Cross reimbursable claims were paid only for allowable categories of services and support and were supported by adequate documentation. To determine whether there were established criteria for allowable reimbursable Red Cross expenses, we initially reviewed a draft agreement between FEMA and the Red Cross in March 2005. 
This draft agreement outlined the operating definitions and the proposed approach for federal reimbursement of Red Cross disaster relief, emergency services, and recovery expenditures associated with hurricanes Charley, Frances, Ivan, and Jeanne. We then participated in a series of meetings with representatives of the Red Cross, FEMA, DHS’s Office of Inspector General, KPMG, and OMB. The purpose of these meetings was to refine the criteria to be consistent with FEMA and Red Cross policies and procedures for disaster relief and Public Law 108-324. The final agreement was signed by FEMA and Red Cross officials in May 2005. To ensure that Red Cross reimbursements did not duplicate funds paid by other federal programs, we examined the FEMA and Red Cross agreement, which provided that the Red Cross would not request reimbursement for any expenses paid by other federal funding sources. As the primary federal funding source for disaster assistance, FEMA identified any payments it had made to the Red Cross and reconciled amounts to reimbursement requests to ensure that its federal funds were not duplicated. We reviewed this identification and reconciliation process, conducted discussions with FEMA and Red Cross officials, and reviewed for indications of other federal funding during our audit of Red Cross reimbursement requests. To determine if Red Cross reimbursable claims were paid only for services in states and territories declared eligible for disaster relief, we reconciled 21 federal disaster declarations for hurricanes Charley, Frances, Ivan, and Jeanne to the FEMA and Red Cross agreement. We also identified states where federal disaster declarations were issued as a result of severe flooding caused by these four hurricanes. However, the Red Cross, as a not-for-profit organization, is not limited by federal disaster declarations and can provide emergency assistance where it determines there is a need. 
As a result, we identified several other states where the Red Cross incurred some hurricane-associated expenses related to the four hurricanes that were part of the FEMA/Red Cross agreement. To determine whether reimbursable claims were paid only for allowable categories of services and support, and whether those claims were supported by adequate documentation, we reviewed audit work performed by the public accounting firm of KPMG. The Red Cross hired KPMG to perform an entitywide audit of its consolidated financial statements, including all of its federal awards, in accordance with OMB Circular No. A-133, Audits of States, Local Governments, and Non-Profit Organizations. In order to avoid duplication of audit work, we reviewed KPMG’s Single Audit Act audit of the Red Cross for the fiscal year ended June 30, 2005, which contained $88.6 million of incurred expenses and $28.1 million of net reimbursements to be paid from federal funds that we were mandated to audit under Public Law 108-324. We relied on KPMG’s work on the Red Cross’s internal controls and tests of transactions, retested 10 percent of the 741 transactions sampled by KPMG, and performed other audit tests as we deemed necessary. We performed our work from March 2005 through March 2006 in accordance with generally accepted government auditing standards. We suspended our work from October 2005 through February 2006 because KPMG was waiting for support from an expanded test of client assistance debit cards from Red Cross chapter offices in the Gulf States affected by hurricanes Katrina and Rita. We provided a draft copy of this report to the American Red Cross and to FEMA for their review and comment on April 14, 2006, with a follow-up copy to DHS on May 11, 2006. We received written comments from the Red Cross in a letter dated May 1, 2006, which is reprinted in its entirety in appendix I of this report. 
According to DHS, FEMA officials have been actively preparing for the 2006 Atlantic Hurricane Season and had no comments on the draft report. We found that FEMA and the Red Cross had properly established criteria for Red Cross reimbursement requests through a May 2005 agreement that identified allowable categories for disaster relief, recovery, and emergency services associated with hurricanes Charley, Frances, Ivan, and Jeanne. This agreement also included definitions of eligibility, listed 18 states and 2 territories where the Red Cross had incurred expenses associated with the four hurricanes, and established administrative procedures for Red Cross reimbursement requests and subsequent payment by federal appropriated funds. Allowable categories for disaster relief, recovery, and emergency services under the May 2005 FEMA/Red Cross agreement consisted of (1) mass care, (2) client personal living needs, (3) client housing needs, (4) client health needs, (5) direct service delivery support, and (6) operational support. These categories are discussed in more detail below. Mass care covered relief supplies and services provided for or distributed to disaster victims and emergency workers. An example of Red Cross sheltering efforts is depicted in figure 3. Mass care included

- food, supplies, and expendable equipment to provide mass feeding and shelter operations;
- clothing, medical and other supplies, and expendable equipment intended for bulk distribution or used in mass care activities, such as comfort kits, cleanup kits, blankets, lanterns, camp stoves, ice chests, and cots;
- food, water, ice, fuel, and other consumable supplies for bulk shipping and storage for safekeeping of household goods;
- sanitation projects, mass immunization, emergency first aid, and supplies used in shelters, first aid stations, service centers, or other Red Cross facilities; and
- food, transportation, and other services provided to emergency workers.
The Red Cross reported that from mid-August through mid-October 2004, it had established over 1,800 shelters that housed almost 425,000 people displaced by the hurricanes, provided over 11 million meals and snacks to hurricane victims and emergency workers, and provided more than 149,000 comfort kits and 113,000 cleanup kits. Client personal living and housing needs covered relief supplies and services given to individuals and families to meet immediate living necessities or to operate households, such as

- living needs, including food, water, clothing, toilet articles, household supplies, laundry and dry cleaning, storage containers, bedding and linens, cribs and baby items, and coolers to store food; and
- housing needs, including accommodations in commercial facilities, rent and security deposits, utilities, and emergency repairs to make residences temporarily habitable.

The Red Cross reported that more than 330,000 homes were damaged by hurricanes Charley, Frances, Ivan, and Jeanne, with more than 27,000 homes completely destroyed, including the one shown in figure 4. From mid-August through mid-October 2004, the Red Cross stated that it had helped more than 73,000 individuals and families in determining their needs, developing recovery plans, and providing financial aid. Client health needs covered relief supplies and services provided to disaster victims on an individual or family basis to provide for physical and mental health benefits, such as

- hospital, ambulance, X-ray, and laboratory charges;
- eyeglasses, dentures, hearing aids, and artificial limbs;
- prescriptions, over-the-counter medication, and first aid supplies;
- special dietary, housing, or mobility devices;
- blood and blood products; and
- burial or cremation expenses.
From mid-August through mid-October 2004, the Red Cross reported that volunteer nurses helped over 46,000 people with physical needs, and trained mental health professionals made over 78,000 contacts with people in need to begin the recovery process. Direct service delivery support covered expenses associated with the delivery of disaster services to disaster victims and workers. Emergency response vehicles provided mobile relief sites to distribute hot meals, water, and snacks to hurricane victims, as depicted in figure 5. Direct service delivery support included

- salaries, travel, and maintenance of disaster staff assigned to provide mass care, family living and housing, and client health services;
- purchase, rental, repair, and service of nonexpendable equipment and vehicles used to provide direct services and services in shelters and facilities;
- telephone and related communications equipment;
- rent, repair, and operating expenses of facilities used to provide direct services;
- shipping, freight, and handling expenses; and
- other miscellaneous expenses.

The Red Cross stated that over 35,000 workers, 90 percent of whom were volunteers, were involved in setting up shelters, providing transportation, supporting the sheltering and feeding effort, and distributing supplies. 
Operational support covered expenses of managing and administering a disaster relief operation, including

- salaries, travel, and maintenance of disaster staff assigned to functions that support direct service to disaster victims and workers, such as administration, records and reports, accounting, public affairs, logistics, training, staffing, local disaster volunteers, communications, computer operations, and liaison functions with other entities;
- rental, repair, and service of nonexpendable equipment, computer equipment, and vehicles used to provide management and administration;
- telephone and related communications equipment for field and district offices;
- rent, repair, and operating expenses of facilities used to provide management and administration;
- printing, copy, postage, and delivery expenses; and
- meeting expenses and activity expenses to recognize volunteer efforts.

Consistent with the law, the May 2005 FEMA/Red Cross agreement provided that the Red Cross would not seek reimbursement for any expenses reimbursed by other federal funding sources. We identified about $0.3 million of FEMA-paid transient accommodations and deployment costs that were properly deducted by the Red Cross from its reimbursement requests so as not to duplicate funding by other federal sources. The Red Cross also deducted from the reimbursements $60.2 million of private donations designated for disaster relief related to the four hurricanes. As the primary federal funding source for disaster assistance, FEMA established internal controls to identify any payments it made to the Red Cross. It subsequently reconciled these amounts to Red Cross requests for reimbursement to ensure that amounts were deducted so that federal funds were not used to reimburse Red Cross expenses more than once. During our audit, we reviewed this identification and reconciliation process and conducted discussions with FEMA and Red Cross officials. 
We did not identify any evidence of other federal funding during our audit of the Red Cross reimbursements. The Red Cross reported $88.6 million of incurred expenses in states and territories eligible for disaster relief under the four hurricanes in accordance with the FEMA/Red Cross agreement. This included $3.1 million for general relief for a call center, FEMA transient accommodations, and other recovery expenses not specifically identified with any particular one of the four hurricanes. Twenty-one federal disaster declarations were made by the President and issued by FEMA for hurricanes Charley (2), Frances (5), Ivan (9), and Jeanne (5), which cumulatively covered federal disaster aid to 12 states and 2 territories. Federal disaster declarations were also issued for four additional states— Ohio, Tennessee, Vermont, and West Virginia—as a result of severe flooding caused by these hurricanes. However, the Red Cross, as a not-for-profit organization funded primarily through private donations, is not limited by federal disaster declarations and can provide assistance where it determines there is a need. As a result, the FEMA/Red Cross agreement included some hurricane-associated expenses incurred in Arkansas and Texas that were used by Red Cross as staging areas for emergency relief aid to the areas affected by the four hurricanes. The agreement also included Red Cross assistance in Maryland, which was affected by flooding caused by Hurricane Ivan but was not covered by a federal disaster declaration. From August 11, 2004, through June 30, 2005, the Red Cross reported incurred expenses of $88.6 million in connection with hurricanes Charley, Frances, Ivan, and Jeanne. These expenses are presented by hurricane in figure 6. The reported $88.6 million of Red Cross incurred expenses for the four hurricanes by state and U.S. territory are presented in table 1. 
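The reimbursement figures reported in this section tie together arithmetically: incurred expenses, less the FEMA-paid costs and the designated private donations described earlier, yield the net amount requested from federal funds. In millions of dollars:

```python
incurred = 88.6            # Red Cross incurred expenses for the four hurricanes
other_federal = 0.3        # FEMA-paid transient accommodations and deployment costs
private_donations = 60.2   # private donations designated for relief from these hurricanes

# Net amount requested for reimbursement from federal appropriated funds.
net_reimbursement = round(incurred - other_federal - private_donations, 1)
print(f"${net_reimbursement} million")
```

The result, $28.1 million, matches the net reimbursement figure reported elsewhere in this report.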
The May 2005 agreement signed by FEMA and the Red Cross identified 18 states and 2 territories as eligible for reimbursement from the 2004 hurricane appropriation. The agreement did not include Delaware, which was eligible for federal disaster assistance, because the Red Cross stated it incurred no expenses in the state. The 12 states and 2 territories presented in bold in table 1 were determined to be eligible for federal disaster relief in declarations by the President and FEMA that specifically cited hurricanes Charley, Frances, Ivan, or Jeanne. Another 4 states—Ohio, Tennessee, Vermont, and West Virginia—were declared eligible for federal disaster relief as a result of severe flooding caused by the four hurricanes, even though none of the hurricanes was specifically mentioned in the federal disaster declarations. Arkansas and Texas were not declared eligible for federal disaster relief, but the Red Cross stated that it had incurred some staging expenses there, although its expenses in Texas were included with its expenses for Louisiana. Maryland was not declared eligible for federal disaster relief, but the Red Cross provided some assistance there when heavy rains generated by hurricane Ivan caused flooding of the Susquehanna River in Port Deposit, Maryland, in September 2004. The expenses the Red Cross incurred in the state of New York were included as general relief, which comprised a centralized call center and other recovery costs not specifically identified with any particular one of the four hurricanes. The Red Cross's requested reimbursements of $28.1 million related to the four 2004 hurricanes were included in a schedule of $50.0 million of federal funds from other programs operated by the Red Cross for the fiscal year ended June 30, 2005. Since the Red Cross expends more than $500,000 of federal awards annually, it is required by the Single Audit Act to obtain an annual audit. 
The Red Cross’s entitywide financial statements and a schedule of expenditures of federal awards for the fiscal year ended June 30, 2005, were audited by the public accounting firm of KPMG in accordance with OMB Circular No. A-133, Audits of States, Local Governments, and Non-Profit Organizations. In order to not duplicate audit efforts, we reviewed and tested the audit work of KPMG related to the reported $88.6 million of Red Cross incurred expenses for the four 2004 hurricanes. In its Single Audit Act audit, KPMG determined that Red Cross expenses were generally incurred for eligible disaster services and supported by adequate documentation. We concur with that determination. However, KPMG identified six weaknesses in the Red Cross’s internal controls related to the reimbursement for the four 2004 hurricanes that it considered to be reportable conditions. KPMG also considered one of these, related to debit cards for client assistance, to be a material weakness. In its report, KPMG made recommendations to the Red Cross to strengthen internal controls related to these six reportable conditions, with which the Red Cross concurred. The report also identified about $712,000 of known questioned costs related to the federal share of the 2004 hurricane program identified through audit sampling that were caused by (1) a bank reporting error of $657,000 on client assistance debit cards and (2) missing or incomplete documentation of $55,000 to support incurred expenses. The Red Cross reported $88.6 million of incurred expenses related to the four 2004 hurricanes for the period August 11, 2004, through June 30, 2005. These expenses, less other federal funds and private donations received, were submitted to FEMA for reimbursement from federal appropriated funds provided under Public Law 108-324, as indicated in table 2. 
As indicated in table 2, the reported $88.6 million of Red Cross incurred expenses were reduced by $0.3 million of other federal funds and $60.2 million of private donations, resulting in a net reimbursement amount of $28.1 million. We reviewed KPMG’s Single Audit Act audit work on the Red Cross’s internal controls and tests of transactions, and we retested 10 percent of its 741 sample transactions of Red Cross expenses related to the four 2004 hurricanes. We found that we could rely upon the KPMG audit work. In conducting its audit, KPMG identified six reportable conditions in internal controls, the first of which KPMG also determined to be a material weakness. These conditions are discussed in more detail below. One method used by the Red Cross to provide financial assistance to disaster victims is the client assistance card, which is a MasterCard®-branded debit card with client assistance amounts determined by on-site caseworkers. The cards were introduced on a large-scale basis for the first time during the 2004 hurricane response in Florida. The cards are preferred by the Red Cross in part because of their acceptance by merchants, reduced paperwork, and the flexibility afforded to disaster clients. An internal authorizing approval document (Red Cross Form 1030) is used to issue a card to individual clients following initial casework. Individual card spending limits are determined by the caseworker’s assessment of the client’s immediate needs for housing, food, transportation, and other personal expenses, with a current maximum card balance of $5,000. Card spending limits can be restored after they are consumed if the caseworker determines additional client need exists. Most cards were “cash-enabled,” meaning they could be used to withdraw cash from any ATM. 
To limit certain risks of unauthorized card use, specific merchant codes are blocked by the card program’s bank administrator in order to prevent the card from being used at locations that principally sell goods such as alcohol and tobacco and at certain luxury retail outlets. KPMG conducted a monetary unit statistical sample of 336 transactions on 334 individual debit cards from a population of about 40,000 debit cards that generated $24.0 million of transactions. The Red Cross could not locate Form 1030s from local Red Cross chapters to support the debit card authorization for 23 of the 334 cards. An additional 17 Form 1030s did not have evidence of appropriate Red Cross approval signatures. For an additional 3 card transactions that KPMG tested outside the monetary unit sample, the Red Cross could not locate documentation to support the transactions. As a result of these exceptions, KPMG identified questioned costs of $55,334 because of missing support or approval signatures related to these 43 transactions. Various reports were available to the Red Cross from the bank issuing the client assistance debit cards to assist in monitoring the cards. These included reports showing amounts “loaded” for the authorized spending limit onto new and existing cards each month, as well as spending reports detailing amounts, dates, and merchants for all card charges. KPMG reported that the Red Cross did not have a procedure in place to reconcile the total amounts authorized by the casework process to the amounts actually loaded onto the cards. In one case identified in a Red Cross internal audit, the amount authorized by supporting casework was $41.26 but a debit card was erroneously loaded with an authorized limit of $4,126.00, all of which was spent. KPMG questioned the excess of $4,085. 
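The control KPMG reported missing, a reconciliation of amounts authorized by casework to amounts actually loaded onto cards, can be sketched in a few lines. The sketch below is purely illustrative: the card identifiers and data structures are hypothetical and are not based on the bank's actual report formats, though the $41.26 versus $4,126.00 case mirrors the internal-audit error cited above.

```python
# Illustrative sketch only (not the Red Cross's or the bank's actual systems):
# compare the amount authorized by casework for each client assistance card
# against the amount the bank actually loaded, flagging differences for follow-up.

authorized = {            # card ID -> amount approved by casework (hypothetical data)
    "card-001": 41.26,    # mirrors the internal-audit case cited in the report
    "card-002": 300.00,
}
loaded = {                # card ID -> amount loaded per the bank's load report
    "card-001": 4126.00,  # erroneous load: decimal point shifted
    "card-002": 300.00,
}

def reconcile(authorized, loaded):
    """Return {card: (authorized, loaded, excess)} for every mismatched card."""
    exceptions = {}
    for card in authorized.keys() | loaded.keys():
        auth = authorized.get(card, 0.0)
        load = loaded.get(card, 0.0)
        if round(auth, 2) != round(load, 2):
            exceptions[card] = (auth, load, round(load - auth, 2))
    return exceptions

for card, (auth, load, excess) in sorted(reconcile(authorized, loaded).items()):
    print(f"{card}: authorized {auth:.2f}, loaded {load:.2f}, excess {excess:.2f}")
# card-001: authorized 41.26, loaded 4126.00, excess 4084.74
```

The flagged excess of $4,084.74 rounds to the $4,085 KPMG questioned. The same comparison pattern would apply to reconciling the bank's summary reports against its detail transaction reports.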
Additionally, while determining the population of client assistance card transactions for audit testing, KPMG identified an amount of $657,619 that the Red Cross could not initially resolve. This amount was the difference between detail amounts reported by the issuing bank based upon card transactions and summary amounts the bank reported to the Red Cross for the debit card program. The Red Cross used the amounts from the bank summary reports to record entries to its general ledger and to recognize debit card expenses for the 2004 hurricane program. No reconciliation between these two bank reports had been performed by the Red Cross following the 2004 hurricane season. The bank investigated the difference and discovered that its summary program reports were erroneously capturing duplicate cardholder information if a card was assigned to a cardholder more than one time. This caused an overstatement in the summary of debit card amounts, although the bank’s detail transaction reports on cards were correct. As a result of this bank error, KPMG identified questioned costs of $657,619. The Red Cross subsequently reduced its final request to FEMA for the 2004 hurricane reimbursement to $28.1 million after adjusting for the amount of the bank reporting error. Client assistance cards were introduced by the Red Cross shortly before the 2004 hurricane season. During the season, its Disaster Operations unit was responsible for monitoring transactions on client assistance cards issued by Red Cross headquarters in Washington, D.C. This unit was to review reports showing the amounts loaded onto cards for duplicate names and cards as well as for unusual balances. Any questionable transactions were to be referred to the Family Services unit for further investigation. However, no standard procedures were developed for monitoring cards until May 2005, when a one-page procedure provided examples of questionable card activity that should be pursued. 
This included reviewing card usage that exceeded normal authorized dollar limits, investigating multiple cards issued to individuals with the same or similar names, and spot-checking for other unusual data. The guidance also did not specify how identified suspicious transactions would be referred to the compliance units within the Red Cross, or require that follow-up or other ultimate resolution of questionable card activity be documented. The frequency and depth of Red Cross monitoring activities were unclear, and those activities were not routinely documented as to follow-up and resolution of any transactions referred to the Family Services unit. KPMG therefore considered this to be a reportable condition, with no questioned costs identified. Another method used by the Red Cross to provide financial assistance to disaster clients is the disbursing order (DO). A DO is a hard-copy Red Cross form prepared by a caseworker that describes the specific goods or services to be provided to the disaster client, the name of the merchant to provide the goods or services, and the authorized dollar limit of the expenditure. The DO is first signed by both the caseworker and the disaster client. After providing the described goods or services to the client, the merchant submits the DO to the Red Cross for reimbursement. The DO is to be signed by both the merchant and the disaster client, who acknowledges receipt of the goods or services. KPMG judgmentally selected and tested 133 DOs from a population of $32.3 million that had been paid to merchants. KPMG found that 6 DOs were not signed by either the disaster client or the merchant and did not contain sufficient supporting documentation, such as an invoice or merchant receipt, which might provide alternative evidence that the goods were received by the intended client. KPMG accepted either signature as substantiation that the goods were delivered, though Red Cross procedure is to obtain both signatures. 
In addition, for another 2 DOs tested, each amount reimbursed to the merchant was equal to the DO’s authorized expenditure limit, which was greater than the actual expense requested by the merchant. As a result of these exceptions, KPMG identified questioned costs of $2,405 because of missing signatures and $4 because of excess merchant reimbursement. From August 2004 through March 2005, the Red Cross had a master billing contract with a national retailer. The retailer was to centrally gather all DOs honored at its retail locations for reimbursement, and invoice the Red Cross for the sum of all such DOs on a monthly basis. Prior to payment to the merchant, the Red Cross reconciled the invoices to the underlying DOs, based on the information provided by the retailer. Approximately $2 million of financial assistance was provided to disaster clients through this individual merchant billing arrangement. The contract was discontinued in March 2005 after the bulk of client assistance had been provided. During a test of the Red Cross reconciliation process, KPMG could not match charges of $85,588 reimbursed to the merchant to any supporting DOs. The Red Cross investigated these differences as they were identified by KPMG, but was unable to resolve them. These unsupported expenses were included in the pool of costs subject to reimbursement and appear to be the only unmatched charges under the master billing arrangement, based on a review of a sample of other monthly invoices and related reconciliations. As a result of these unsupported differences, KPMG identified questioned costs of $85,588. While providing disaster assistance, the Red Cross incurs other expenses related to managing the overall response effort, such as those for supplies, staff travel, and other logistics expenses. 
Most of these expenses for large-scale disasters are procured and paid for through the Red Cross National Shared Services Center, while other expenses may be incurred at the Red Cross chapter level and subsequently reimbursed by Red Cross national headquarters. KPMG judgmentally selected and tested 120 expense transactions incurred for other than client financial assistance from a population of $32.6 million. For 9 transactions, primarily related to staff travel and other staff expenses, the Red Cross could not find supporting documents. As a result of these exceptions, KPMG identified questioned expenses of $21,526 because of missing support. Individuals and families requesting financial assistance are required to provide identification showing that they resided within the disaster-affected area at the time the disaster struck. During disaster response operations, Red Cross caseworkers interview affected clients to determine their eligibility and assess the type and level of assistance to best meet the clients’ needs. For each client, a case file is opened, and a Red Cross standard assistance Form 901 is completed by the caseworker to document eligibility. The form is then signed by the caseworker to indicate that he or she has concluded that the individual or family is eligible for a specified level of financial assistance. For the 2004 hurricane season, some, but not all, of the case files were entered after the fact into a Red Cross database known as the Client Assistance System. KPMG judgmentally selected and tested 60 case files from the Client Assistance System. The Red Cross could not find 7 of the case files, although for 3 files, other records were found to corroborate eligibility. For 2 other case files, there was no evidence of a caseworker signature, and for another case file there was insufficient information to document eligibility. 
KPMG identified questioned costs of $2,415 because of the 4 missing case files and the 1 case file with insufficient documentation. Based on its audit testing, KPMG identified about $712,000 of known questioned costs and $0.9 million of likely questioned costs related to the federal share of the 2004 hurricane program in its Single Audit Act audit of the Red Cross through June 30, 2005. These questioned costs are shown by reportable condition in table 3. In addition to known questioned costs that are specifically identified by the auditor, OMB Circular No. A-133 requires the auditor to consider the best estimate of total costs questioned (likely questioned costs). Based upon the percentage rate of known errors in the statistical monetary unit sample of client assistance cards, KPMG projected an error rate to the entire population of $24.0 million using a 96 percent confidence level and identified total likely questioned costs of about $2.9 million with 32 percent, or about $926,000, related to the federal share, as indicated in table 3. However, that projection is a statistical extrapolation and therefore is not supported by detailed exceptions within the total population. The Red Cross reduced its final request for FEMA reimbursement by the amount of the bank reporting error of $657,619, but did not reduce its reimbursement request for the other $171,000 of known questioned costs that included 32 percent, or about $55,000, related to the federal share. The Department of Health and Human Services, which serves as the cognizant federal agency for the Red Cross under the Single Audit Act audit process, will review the KPMG report and coordinate a management decision for the auditor’s findings and questioned costs. The awarding federal agency has 6 months from receipt of the report to assess any audit findings and questioned costs and issue management’s decision. 
Corrective action should also be initiated within 6 months after receiving the audit report and proceed as rapidly as possible. We received written comments from the American Red Cross Executive Vice President and Chief Financial Officer on a draft of this report. The Red Cross agreed with the report’s content, including the six weaknesses identified by KPMG, and stated that it is taking steps to strengthen its policies, procedures, and practices to remedy these weaknesses. It also described steps being taken in anticipation of the 2006 Atlantic Hurricane Season, which runs from June 1 through November 30, 2006. We have reprinted the American Red Cross comments in their entirety in appendix I. FEMA had no comments on the draft report. We are sending copies of this report to the Senate Committee on Homeland Security and Governmental Affairs and the House Committee on Government Reform. We are also sending copies of this report to the Under Secretary of Emergency Preparedness and Response in the Department of Homeland Security responsible for FEMA, the Inspector General for the Department of Homeland Security, the Director of OMB, and the Vice President of Finance at the American Red Cross. Copies of this report are available to other interested parties on request. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. Should you or your staffs have any questions concerning this report, please contact me at (202) 512-3406 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report were Roger R. Stoltz, Assistant Director; Patricia A. Summers; and Eric S. Huff.
In accordance with Public Law 108-324, GAO is required to audit the reimbursement of up to $70 million of appropriated funds to the American Red Cross (Red Cross) for disaster relief associated with 2004 hurricanes Charley, Frances, Ivan, and Jeanne. The audit was performed to determine if (1) the Federal Emergency Management Agency (FEMA) established criteria and defined allowable expenditures to ensure that reimbursement claims paid to the Red Cross met the purposes of the law, (2) reimbursement funds paid to the Red Cross did not duplicate funding by other federal sources, (3) reimbursed funds assisted only eligible states and territories for disaster relief, and (4) reimbursement claims were supported by adequate documentation. The 2004 hurricane season was one of the most destructive in U.S. history. Fifteen named storms resulted in 21 federal disaster declarations. Four hurricanes affecting 19 states and 2 U.S. territories from August 13 through September 26, 2004, triggered the nation's biggest natural-disaster response up to that time. Over 150 deaths and $45 billion of estimated property damage are attributed to hurricanes Charley, Frances, Ivan, and Jeanne in the United States alone. Through 2005, these four storms rank among the seven costliest in U.S. history. The signed agreement between FEMA and the Red Cross properly established criteria for the Red Cross to be reimbursed for allowable expenses for disaster relief, recovery, and emergency services related to hurricanes Charley, Frances, Ivan, and Jeanne. The Red Cross incurred $88.6 million of allowable expenses. Consistent with the law, the agreement explicitly provided that the Red Cross would not seek reimbursement for any expenses reimbursed by other federal funding sources. GAO identified $0.3 million of FEMA paid costs that the Red Cross properly deducted from its reimbursement requests, so as not to duplicate funding by other federal sources. 
The Red Cross also reduced its requested reimbursements by $60.2 million to reflect private donations for disaster relief for the four hurricanes, for a net reimbursement of $28.1 million. Red Cross expenses were incurred in states and territories eligible for disaster relief associated with the four hurricanes in accordance with the FEMA/Red Cross agreement. The Red Cross requested reimbursement of $28.1 million for the period August 11, 2004, through June 30, 2005, for payment from federal appropriated funds under Public Law 108-324. After review and some retesting, GAO relied upon audit work conducted by the CPA firm of KPMG, LLP, which determined that most Red Cross expenses were incurred for eligible disaster services and were supported by adequate documentation. However, KPMG identified six weaknesses in the Red Cross's internal controls related to expenses incurred for the four hurricanes and reported $712,000 of known questioned costs, with which the Red Cross concurred. The Red Cross also concurred with the content of the GAO report.
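The reimbursement arithmetic summarized above reduces to a simple check. The following is a minimal sketch using the dollar figures stated in the report; the variable names are ours, not FEMA's or the Red Cross's.

```python
# Net reimbursement check for the 2004 hurricane program
# (all figures in millions of dollars, as stated in the report).
incurred = 88.6           # total Red Cross expenses, Aug. 11, 2004 - June 30, 2005
other_federal = 0.3       # FEMA-paid costs deducted to avoid duplicate federal funding
private_donations = 60.2  # designated private donations deducted

net_reimbursement = round(incurred - other_federal - private_donations, 1)
print(net_reimbursement)  # 28.1, the net amount requested from FEMA
```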
In 1985, the Congress passed Public Law 99-145 directing the Army to destroy the U.S. stockpile of lethal chemical agents and munitions. The stockpile consists of rockets, bombs, projectiles, spray tanks, and bulk containers, which contain nerve and mustard agents. It is stored at eight sites in the continental United States and on Johnston Atoll in the Pacific Ocean. Appendix III identifies the locations of the chemical stockpile storage sites. To comply with congressional direction, the Army established the Chemical Stockpile Disposal Program and developed a plan to incinerate the agents and munitions on site in specially designed facilities. In 1988, the Army established the Chemical Stockpile Emergency Preparedness Project (CSEPP) to help communities near the chemical stockpile storage sites enhance existing emergency management and response capabilities in the unlikely event of a chemical stockpile accident. Recognizing that the stockpile program did not include all chemical warfare materiel requiring disposal, the Congress directed the Army in 1992 to plan for the disposal of materiel not included in the stockpile. This materiel, some of which dates back to World War I, consists of binary chemical weapons, miscellaneous chemical warfare materiel, recovered chemical weapons, former production facilities, and buried chemical warfare materiel. Appendix IV identifies the storage locations for the nonstockpile chemical warfare materiel. In 1992, the Army established the Nonstockpile Chemical Materiel Program to dispose of the materiel. Appendix V provides a chronology of the Army’s chemical disposal programs. In 1993, the United States signed the U.N.-sponsored Chemical Weapons Convention. In October 1996, the 65th nation ratified the convention, making the treaty effective on April 29, 1997. If the U.S. 
Senate approves the convention, it could affect implementation of the disposal programs. Through ratification, the United States will agree to dispose of its (1) unitary chemical weapons stockpile, binary chemical weapons, recovered chemical weapons, and former chemical weapon production facilities by April 29, 2007, and (2) miscellaneous chemical warfare materiel by April 29, 2002. If a country is unable to maintain the convention’s disposal schedule, the convention’s Organization for the Prohibition of Chemical Weapons may grant a one-time extension of up to 5 years. Under the terms of the convention, chemical warfare materiel buried before 1977 is exempt from disposal as long as it remains buried. Should the United States choose to excavate the sites and remove the chemical materiel, the provisions of the convention would apply. The Senate has not approved the convention; however, the United States is committed by public law to destroying its chemical stockpile and related warfare materiel. In prior reports, we expressed concern about the Army’s lack of progress and the rising cost of the disposal programs. Appendix VI provides a listing of our products related to these programs. In 1991, we reported that continued problems in the program indicated that increased costs and additional time to destroy the chemical stockpile should be expected. We recommended that the Army determine whether faster and less costly technologies were available to destroy the stockpile. In a 1995 report on the nonstockpile program, we concluded that the Army’s plans for disposing of nonstockpile chemical warfare materiel were not final and, as a result, its cost estimate was likely to change. In July 1995, we testified before this subcommittee that the Army had experienced significant cost growth and delays in executing its stockpile disposal program and that further cost growth and schedule slippages could occur. 
In 1996, we reported that efforts to enhance emergency preparedness in Alabama had been hampered by management weaknesses in CSEPP. The stockpile program will likely exceed its $12.4 billion estimate and take longer than the legislative completion date of December 2004. This is because reaching agreement on site-specific disposal methods has consistently taken longer than the Army anticipated. Public concerns about the safety of incineration have (1) resulted in additional environmental requirements, (2) slowed the permitting of new incinerators, and (3) required the Army to research disposal alternatives. Approximately $1 billion of the estimated $12.4 billion is associated with CSEPP. The cost estimate for CSEPP has increased because of delays in the stockpile program and longstanding management weaknesses. These weaknesses have also slowed the program’s progress in enhancing emergency preparedness. Since 1985, the Army’s cost estimate for the stockpile disposal program has increased seven-fold, from an initial estimate of $1.7 billion to $12.4 billion, and the planned completion date has been delayed from 1994 to 2004. Although the Army is committed to destroying the stockpile by the legislatively imposed deadline of December 31, 2004, it is unlikely to meet that date. Only two of the nine planned disposal facilities are built and operating, 4 percent of the stockpile has been destroyed, and environmental permitting issues at the individual sites continue to delay construction of the remaining facilities. For example, since the Army developed the most recent cost and schedule estimate in February 1996, the plant construction schedule has slipped by 6 months at the Anniston Army Depot, 9 months at the Pine Bluff Arsenal, 10 months at the Pueblo Depot Activity, and 4 months at the Umatilla Depot Activity. Predicting the disposal schedule for the various sites is difficult. 
According to Army officials, this is partly due to the uncertainty of the time required to satisfy changing environmental requirements. For example, although based on federal requirements, individual state environmental requirements differ and are occasionally changed. In most cases, these changes have added unanticipated requirements, resulting in the need for additional data collection, research, and reporting by the Army. In addition, according to the Army, the original scope of the health risk assessment to operate the disposal facilities was not completely defined, the health assessment requirements have changed, and the requirements currently vary from state to state. According to DOD officials, states have modified the requirements of their health risk assessments well into the process, delaying the development of the final assessment document. Based on program experience, the Army’s 1996 schedule does not provide sufficient time for the Army to complete the environmental approval process. As a result, program delays past the mandated completion date of December 2004 are likely. For example, the schedule for the Anniston disposal facility includes a grace period of a month for any slippage in the construction, systemization, or operation to meet the legislative completion date of December 31, 2004. Although the Army estimated that the permit would be issued by the end of September 1996, Alabama regulatory officials expect the permit to be issued in June or July 1997—a slippage of about 8 months in the schedule. This slippage will cause disposal operations at Anniston to extend to the middle of 2005. In the 1993 National Defense Authorization Act, the Congress directed the Army to report on potential technological alternatives to incineration. 
Consequently, in August 1994, the Army initiated a program to investigate, develop, and support testing of alternative disposal technologies for the two bulk-only stockpile sites—Aberdeen Proving Ground and Newport Chemical Activity. According to the National Research Council, the Army has successfully involved the state and the public in its alternative technology project for the two bulk-only stockpile sites, demonstrating the importance of public involvement to the progress of a program. The development of alternative disposal technologies for assembled chemical munitions provides the Army a mechanism for encouraging public involvement and establishing common objectives for the remaining disposal sites. In the 1997 National Defense Authorization Act, the Congress directed DOD to assess alternative technologies for the disposal of assembled chemical munitions. The act also directed the Secretary of Defense to report on the assessment by December 31, 1997. Similarly, the 1997 DOD Appropriations Act provided $40 million to conduct a pilot program to identify and demonstrate two or more alternatives to the baseline incineration process for the disposal of assembled chemical munitions. The act also prohibited DOD from obligating any funds for constructing disposal facilities at the Blue Grass Army Depot and Pueblo Depot Activity until 180 days after the Secretary reports on the alternatives. Although the prohibition applies only to Blue Grass and Pueblo, public concerns about incineration may prompt state regulators at other locations to delay their final decisions to permit incinerators until the Secretary reports his findings. 
The Army’s life-cycle cost estimate for the program has increased by about 800 percent, from an initial estimate of $114 million in 1988 to the current $1.03 billion. The primary reasons for the cost increase are the 10-year slippage in the completion of the Chemical Stockpile Disposal Program and financial management weaknesses. Program management weaknesses have also contributed to the increase and resulted in slow progress in enhancing emergency preparedness in the 10 states and local communities near the chemical stockpile storage sites. Nine years after CSEPP’s inception, states and local communities still lack critical items for responding to a chemical stockpile emergency, including alert and notification systems, decontamination units, and personal protection equipment. Although the Army has responded to this criticism and taken actions in response to congressional direction to improve program management, the completion of these actions has been delayed by disagreements between Army and FEMA officials. For example, the Army is still working to respond to direction in the 1997 National Defense Authorization Act to report on the implementation and success of CSEPP Integrated Process Teams. Because of this and other differences regarding their roles and responsibilities, Army and FEMA officials have not reached agreement on a long-term management structure for CSEPP. Through fiscal year 1997, the Congress has appropriated $221 million for the nonstockpile program. The Army estimates that it will require an additional $15 billion and nearly 40 years to complete the program. However, given the factors driving the program, it is uncertain how long the program will take or how much it will cost. The program is driven by the uncertainties surrounding buried chemical warfare materiel and unproven disposal methods. 
The Army estimates that it can dispose of binary weapons, recovered chemical weapons, former production facilities, and miscellaneous chemical warfare materiel within the time frames established by the Chemical Weapons Convention. Under the terms of the convention, chemical warfare materiel buried before 1977 is exempt from disposal as long as it remains buried. Although the Army estimates that buried chemical materiel accounts for $14.5 billion (95 percent) of the nonstockpile program cost, the Army is still exploring potential sites and has little, and often imprecise, information about the type and amount of materiel buried. Appendix VII identifies the potential locations with buried chemical warfare materiel. The Army estimates that it will take until 2033 to identify, recover, and dispose of buried nonstockpile materiel. Although Army officials are confident that the proposed disposal systems will function as planned, the Army needs more time to prove that the systems will safely and effectively destroy all nonstockpile materiel and be accepted by the affected states and communities. The Army’s disposal concept is based on developing mobile systems capable of moving from one location to the next, where the munitions are remotely detoxified and the waste is transported to a commercial hazardous waste facility. Although the systems may operate in a semi-fixed mode, they are scheduled to be available for mobile use at recovered and burial sites after 1998. Environmental issues similar to those experienced in the stockpile program are also likely to affect the Army’s ability to obtain the environmental approvals and permits that virtually all nonstockpile activities require. Whether the systems are allowed to operate at a particular location will depend on the state regulatory agency with authority over the disposal operations. In addition, public acceptance or rejection of the mobile systems will affect their transportation plans and disposal operations. 
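The buried-materiel share cited above can also be checked arithmetically. This sketch assumes (our reading, not stated in the report) that the 95-percent figure is measured against the combined total of funds appropriated through fiscal year 1997 plus the additional amount the Army estimates it needs:

```python
# Checking the buried-materiel share of nonstockpile program costs.
# Assumption: total program cost = $221 million appropriated through
# FY 1997 plus the additional $15 billion the Army estimates it needs.
appropriated = 221e6
additional_required = 15e9
buried_materiel = 14.5e9

share_pct = buried_materiel / (appropriated + additional_required) * 100
print(f"Buried materiel share: about {share_pct:.0f} percent")  # close to 95 percent
```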
DOD and the Army have taken a number of steps to respond to congressional direction and independent reviews and improve their management and oversight of the stockpile and nonstockpile programs. These steps have included efforts to improve coordination with the public through an enhanced public outreach program, increase public involvement in the alternative technology program for the two bulk-only stockpile sites, and establish a joint CSEPP Army/FEMA team to coordinate and implement emergency preparedness activities. In December 1994, DOD designated the Army’s chemical demilitarization program, consisting of both stockpile and nonstockpile munitions and materiel, as a major defense acquisition program. The objectives of the designation were to stabilize the disposal schedules, control costs, and provide more discipline and higher levels of program oversight. In response to our recommendations and similar ones by the National Research Council, the Army initiated the Enhanced Stockpile Surveillance Program in 1995 to improve its monitoring and inspection of chemical munitions. On the basis of those activities, the Army estimates that the stockpile will be reasonably stable through 2013. The Army’s review of the stockpile disposal program has identified several promising cost-reduction initiatives, but the Army cannot implement some of the more significant initiatives without the cooperation and approval of state regulatory agencies. Army officials estimated that the initial cost-reduction initiatives, which are in various stages of assessment, could potentially reduce program costs by $673 million. The Army plans to identify additional cost reductions as the stockpile program progresses. Recognizing the difficulty of resolving the public concerns associated with each individual disposal location, suggestions have been made to change the programs’ basic approach to destruction. 
For example, members of the Congress and officials from environmental groups and affected states and counties have suggested deferring plans for additional disposal facilities until an acceptable alternative technology to incineration is developed. Congressional members have also suggested consolidating disposal operations at national or regional sites. In addition, officials of various DOD organizations have suggested destroying selected nonstockpile chemical warfare materiel in stockpile disposal facilities, establishing a centralized disposal facility for nonstockpile materiel, and modifying laws and regulations to standardize environmental requirements. Deferring disposal operations may eliminate much of the public concern that has influenced the current approach to destroying the chemical stockpile. According to Army officials, alternative technologies may not reduce costs or shorten disposal operations but are likely to be acceptable to a larger segment of the public than incineration. Given the current status of alternative technologies, the cost and schedule would remain uncertain and there would be a corresponding increase in the risk of an accident from continued storage of the munitions. Although the Army has been researching technological alternatives to incineration for chemical agents stored in bulk containers, only recently have research and testing demonstrated potentially effective alternatives. Currently, there is no proven alternative technology to incineration capable of safely and effectively destroying assembled chemical munitions. Consolidating disposal operations could reduce construction and procurement costs, but the required transportation of chemical munitions could be an insurmountable barrier. This option would extend the disposal schedule and result in increased risk not only from storage but also from handling and transportation. 
Although consolidating disposal operations could reduce estimated facility construction and operation costs by as much as $2.6 billion, the savings would be reduced by uncertain but potentially significant transportation and emergency preparedness costs. To help reduce costs, the Army would have to consolidate three or more stockpile sites, develop less expensive transportation containers, and control emergency response costs. In 1988, the Army and many in the Congress rejected transporting the chemical stockpile weapons to a national site or regional disposal sites because of the increased risk to the public and the environment from moving the munitions. DOD and Army officials continue to be concerned about the safety of moving chemical weapons, and public opposition to transportation of the munitions has grown since 1988. Using the chemical stockpile facilities to destroy nonstockpile chemical materiel has the potential for reducing costs. Although selected nonstockpile items could be destroyed in stockpile disposal facilities, the 1986 DOD Authorization Act, and subsequent legislation, specifies that the chemical stockpile disposal facilities may not be used for any purpose other than the disposal of stockpile weapons. This legislative provision, in some cases, necessitates that the Army implement separate disposal operations for nonstockpile materiel alongside the stockpile facilities. In its 1995 implementation plan, the Army suggested that the stockpile disposal facilities could be used to process some nonstockpile weapons, depending on the location and on the type and condition of the chemical weapon or materiel. Another method for destroying nonstockpile chemical materiel could be based on the use of a central disposal facility with equipment designed specifically for destroying nonstockpile materiel. 
Although a national disposal facility could reduce program costs, the legislative restrictions on the transportation of nonstockpile chemical material and the prevalent public attitude that such a disposal facility should not be located in their vicinity would be significant obstacles that would have to be resolved. Modifying laws and regulations to standardize environmental requirements could enhance both the stockpile and nonstockpile programs’ stability and control costs. The current process of individual states establishing their own environmental laws and requirements and the prevalent public attitude that the Army’s disposal facilities should not be located in their vicinity have been obstacles to the stockpile disposal program and are also likely to affect the nonstockpile program. For example, individual state environmental requirements differ, such as the number of required trial burns, and are occasionally changed. As a result, there are no standard environmental procedures and requirements for stockpile and nonstockpile disposal sites. According to the Army, establishing standardized environmental requirements for all disposal sites would enhance the programs’ stability. However, efforts to modify existing laws and regulations to standardize the environmental requirements for chemical weapons disposal would likely be resisted by the affected states and localities and environmental organizations. In summary, implementation of the disposal programs has been slowed due to the lack of consensus among DOD and the affected states and localities over the process to dispose of chemical munitions and materiel. Recognizing the difficulty of satisfactorily resolving the public concerns with the disposal of chemical munitions, suggestions have been made by members of the Congress, DOD officials, and others to change the Army’s basic approach to destruction. 
However, these suggestions create trade-offs for decisionmakers and would require changes in legal requirements. While our February report presented these suggestions, we did not take a position on them or the Army’s current approach given the associated policy and legislative implications. Rather, our report presented the suggestions in the context of the trade-offs they entail and noted that should the Congress decide to consider modifications or alternatives to the current approach, it may wish to consider the suggestions related to the creation of alternative technologies, consolidation of stockpile disposal operations, utilization of stockpile facilities for nonstockpile items, centralization of nonstockpile destruction, and standardization of environmental laws and requirements. In commenting on these suggestions, DOD said that it favored the Congress considering the ones to establish a centralized disposal facility for nonstockpile materiel and to modify laws and regulations to standardize environmental requirements for chemical weapons disposal. DOD recommended against consideration of the options to defer incineration plans, consolidate disposal operations, and to use stockpile facilities for destroying nonstockpile items. In addition, we believe that high-level management attention is needed to reach agreement on a long-term management structure for CSEPP that clearly defines the roles and responsibilities of Army and FEMA personnel. This concludes my statement, Mr. Chairman. I would be pleased to answer any questions that you or other members of the Subcommittee may have. The following tables show appropriation, obligation, and disbursement data for the disposal programs. Funding data for the Chemical Stockpile Disposal Program, Alternative Technology and Approaches Project, and Chemical Stockpile Emergency Preparedness Project are shown in tables I.1, I.2, and I.3, respectively. 
Funding data for the Nonstockpile Chemical Materiel Program are shown in table I.4. Obsolete or unserviceable chemical warfare agents and munitions were disposed of by open pit burning, land burial, and ocean dumping. The National Academy of Sciences recommended that ocean dumping be avoided and that public health and environmental protection be emphasized. It suggested two alternatives to ocean disposal: chemical neutralization of nerve agents and incineration of mustard agents. The Armed Forces Authorization Act (P.L. 91-441) required a Department of Health and Human Services review of any disposal plans and detoxification of weapons prior to disposal. It also limited the movement of chemical weapons. The Foreign Military Sales Act prohibited the transportation of U.S. chemical weapons from Okinawa, Japan, to the continental United States. The weapons were moved to Johnston Atoll in the Pacific Ocean. The Army tested and developed an incineration process and disposed of several thousand tons of mustard agent stored in ton containers at Rocky Mountain Arsenal. The Army disposed of nearly 4,200 tons of nerve agent by chemical neutralization at Tooele Army Depot and Rocky Mountain Arsenal. The process was problematic and not very reproducible, making automation difficult. The Army opened the Chemical Agent Munitions Disposal System at Tooele to test and evaluate disposal equipment and processes for chemical agents and munitions on a pilot scale. The Army decided to build the Johnston Atoll Chemical Agent Disposal System to dispose of its chemical M55 rocket stockpile. The Army used the Chemical Agent Munitions Disposal System to test and evaluate incineration of chemical agents and energetic materiel, and decontamination of metal parts and ton containers. An Arthur D. Little Corporation study for the Army concluded that using incineration, rather than neutralization, to dispose of the stockpile would reduce costs. 
The Army declared its stockpile of M55 rockets obsolete. The Army expanded its chemical disposal program to include the M55 rocket stockpile at Anniston Army Depot, Umatilla Depot Activity, and Blue Grass Army Depot. The Army expanded its chemical disposal program to include the M55 rocket stockpile at Pine Bluff Arsenal and Tooele Army Depot. The National Research Council endorsed the Army’s disassembly and high-temperature incineration process for disposing of chemical agents and munitions. It also recommended that the Army continue to store most of the chemical stockpile, dispose of the M55 rockets, and analyze alternative methods for disposing of the remaining chemical stockpile. The Army began construction of the Johnston Atoll Chemical Agent Disposal System. The DOD Authorization Act for Fiscal Year 1986 (P.L. 99-145) mandated the destruction of the U.S. stockpile of lethal chemical agents and munitions. It also required that the disposal facilities be cleaned, dismantled, and disposed of according to applicable laws and regulations. The DOD Appropriations Act for Fiscal Year 1987 (P.L. 99-500) prohibited shipments of chemical weapons, components, or agents to the Blue Grass Depot Activity for any purpose. Chemical Agent Munitions Disposal System operations were suspended as a result of a low-level nerve agent release. The Army issued the Final Programmatic Environmental Impact Statement for the Chemical Stockpile Disposal Program. The Army selected on-site disposal of the chemical stockpile because it posed fewer potential risks than transportation and off-site disposal. The National Defense Authorization Act for Fiscal Year 1989 (P.L. 100-456) required the Army to complete operational verification testing at Johnston Atoll before beginning to systematize similar disposal facilities in the continental United States. The Army started construction of the chemical demilitarization facility at Tooele Army Depot. 
The Army completed the successful retrograde of all chemical munitions stored in Germany to storage facilities at Johnston Atoll. The Army initiated disposal of M55 rockets at Johnston Atoll. A very small amount of nerve agent leaked through the common stack during maintenance activities at Johnston Atoll. The agent release was below allowable stack concentration. The Army completed four operational verification tests at the Johnston Atoll Chemical Agent Disposal System. During these tests, the Army destroyed more than 40,000 munitions containing nerve and mustard agents. In August 1993, the Secretary of Defense certified to the Congress that the Army had successfully completed the operational verification tests at Johnston Atoll. The National Defense Authorization Act for Fiscal Year 1991 (P.L. 101-510) restricted the use of funds to transport chemical weapons to Johnston Atoll except for U.S. munitions discovered in the Pacific, prohibited the Army from studying the movement of chemical munitions, and established the emergency preparedness program. The Army moved 109 World War II mustard-filled projectiles from the Solomon Islands to Johnston Atoll for storage and disposal. The National Defense Authorization Act for Fiscal Years 1992 and 1993 (P.L. 102-190) required the Secretary of Defense to develop a chemical weapons stockpile safety contingency plan. The U.S. Army Chemical Materiel Destruction Agency was established to consolidate operational responsibility for the destruction of chemical warfare capabilities into one office. The National Defense Authorization Act for Fiscal Year 1993 (P.L. 102-484) directed the Army to establish citizens’ commissions for states with storage sites, if the state’s governor requested one. It also required the Army to report on (1) disposal alternatives to the baseline incineration method and (2) plans for destroying U.S. nonstockpile chemical weapons and materiel identified in the Chemical Weapons Convention. 
The Johnston Atoll Chemical Agent Disposal System was shut down during operation and verification tests when residue explosive material generated during the processing of M60 105mm projectiles caught fire, causing damage to a conveyor belt and other equipment in the explosive containment room. The Army completed construction and started systemization of the Tooele Chemical Agent Disposal Facility. The Army issued its report on the physical and chemical integrity of the chemical stockpile to the Congress. A mustard leak from a ton container was discovered at Tooele Army Depot. The Army issued an interim survey and analysis report on the Nonstockpile Chemical Materiel Program to the Congress. Approximately 11.6 milligrams of nerve agent were released into the atmosphere at Johnston Atoll during a maintenance activity on the liquid incinerator. The National Research Council issued its recommendations for the disposal of chemical agents and munitions to the Army. The Army issued its alternative demilitarization technology report to the Congress. The Army recommended the continuation of the chemical demilitarization program without deliberate delay and the implementation of a two-technology research and development program. The Army issued its M55 rocket stability report to the Congress. The report recommended that an enhanced stockpile assessment program be initiated to better characterize the state of the M55 rocket in the stockpile. The Army initiated the Alternative Technology Project to develop an alternative disposal technology to the baseline incineration process for the bulk-only stockpile locations in Maryland and Indiana. This research and development effort is conducted in conjunction with activities to implement the baseline program. The U.S. Army Chemical Materiel Destruction Agency was redesignated the U.S. Army Chemical Demilitarization and Remediation Activity after a merger with the U.S. Army Chemical and Biological Defense Command. 
In addition, the Army restructured and centralized its chemical stockpile emergency preparedness program to streamline procedures, enhance responsiveness of operations, and improve the budgeting process. The Assistant Secretary of the Army for Research, Development and Acquisition became the DOD Executive Agent for the Chemical Demilitarization Program, replacing the Assistant Secretary of the Army for Installations, Logistics, and Environment. The Chemical Demilitarization Program was designated a DOD Acquisition Category 1D Program. The Army initiated the Enhanced Stockpile Surveillance Program to investigate, develop, and support methods to improve monitoring and inspection of chemical munitions. The U.S. Army Chemical Demilitarization and Remediation Activity was renamed the Program Manager for Chemical Demilitarization. The Johnston Atoll Chemical Agent Disposal System surpassed the 1-million-pound target and completed the disposal of all M55 rockets stored on Johnston Atoll. Disposal rates exceeded established goals. A perimeter monitor located about 100 yards from the demilitarization building at Johnston Atoll detected a trace level of nerve agent. The source of the leak was identified as a door gasket in the air filtration system. Temporary air locks were erected and the gasket replaced. No one was harmed by this event. The Army awarded the contract for small burial sites and issued its implementation plan for the nonstockpile program. The Tooele Chemical Agent Disposal Facility completed equipment systemization testing. The Army certified to the Congress that all Browder Amendment requirements for the award of the Anniston construction contract were met. The National Defense Authorization Act for Fiscal Year 1996 (P.L. 104-106) directed DOD to conduct an assessment of the Chemical Stockpile Disposal Program and options that could be taken to reduce program costs. 
The Army completed disposal of all Air Force and Navy bombs stored on Johnston Atoll ahead of schedule. The Army awarded the systems contract for the construction, operation, and closure of the proposed Anniston Chemical Agent Disposal Facility. Construction of the facility is scheduled to begin after the state of Alabama issues the environmental permits. The Army started disposal operations at the Tooele Chemical Agent Disposal Facility. Shortly after the start, operations were shut down for a week after a small amount of agent was detected in a sealed vestibule attached to the air filtration system. No agent was released to the environment and no one was harmed. Several hairline cracks were discovered in the concrete floor of the Tooele disposal facility’s decontamination area. The cracks caused a small amount of decontamination solution to leak to an electrical room below. No agent was detected and the cracks were sealed. The 1997 National Defense Authorization Act (P.L. 104-201) directed DOD to conduct an assessment of alternative technologies for the disposal of assembled chemical munitions. The act also directed the Secretary of Defense to report on this assessment by December 31, 1997. The 1997 DOD Appropriations Act (P.L. 104-208) provided the Army $40 million to conduct a pilot program to identify and demonstrate two or more alternatives to the baseline incineration process for the disposal of assembled chemical munitions. The act also prohibited DOD from obligating any funds for constructing disposal facilities at Blue Grass and Pueblo until 180 days after the Secretary reports on the alternatives. The Chemical Weapons Convention was ratified by the 65th country needed to make the convention effective. As a result, the convention will go into effect on April 29, 1997. 
Through ratification, the United States will agree to dispose of its (1) unitary chemical weapons stockpile, binary chemical weapons, recovered chemical weapons, and former chemical weapon production facilities by April 29, 2007, and (2) miscellaneous chemical warfare materiel by April 29, 2002.

Chemical Weapons and Materiel: Key Factors Affecting Disposal Costs and Schedule (GAO/NSIAD-97-18, Feb. 10, 1997).
Chemical Weapons Stockpile: Emergency Preparedness in Alabama Is Hampered by Management Weaknesses (GAO/NSIAD-96-150, July 23, 1996).
Chemical Weapons Disposal: Issues Related to DOD’s Management (GAO/T-NSIAD-95-185, July 13, 1995).
Chemical Weapons: Army’s Emergency Preparedness Program Has Financial Management Weaknesses (GAO/NSIAD-95-94, Mar. 15, 1995).
Chemical Stockpile Disposal Program Review (GAO/NSIAD-95-66R, Jan. 12, 1995).
Chemical Weapons: Stability of the U.S. Stockpile (GAO/NSIAD-95-67, Dec. 22, 1994).
Chemical Weapons Disposal: Plans for Nonstockpile Chemical Warfare Materiel Can Be Improved (GAO/NSIAD-95-55, Dec. 20, 1994).
Chemical Weapons: Issues Involving Destruction Technologies (GAO/T-NSIAD-94-159, Apr. 26, 1994).
Chemical Weapons Destruction: Advantages and Disadvantages of Alternatives to Incineration (GAO/NSIAD-94-123, Mar. 18, 1994).
Arms Control: Status of U.S.-Russian Agreements and the Chemical Weapons Convention (GAO/NSIAD-94-136, Mar. 15, 1994).
Chemical Weapon Stockpile: Army’s Emergency Preparedness Program Has Been Slow to Achieve Results (GAO/NSIAD-94-91, Feb. 22, 1994).
Chemical Weapons Storage: Communities Are Not Prepared to Respond to Emergencies (GAO/T-NSIAD-93-18, July 16, 1993).
Chemical Weapons Destruction: Issues Affecting Program Cost, Schedule, and Performance (GAO/NSIAD-93-50, Jan. 21, 1993).
Chemical Weapons Destruction: Issues Related to Environmental Permitting and Testing Experience (GAO/T-NSIAD-92-43, June 16, 1992).
Chemical Weapons Disposal (GAO/NSIAD-92-219R, May 14, 1992).
Chemical Weapons: Stockpile Destruction Cost Growth and Schedule Slippages Are Likely to Continue (GAO/NSIAD-92-18, Nov. 20, 1991).
Chemical Warfare: DOD’s Effort to Remove U.S. Chemical Weapons From Germany (GAO/NSIAD-91-105, Feb. 13, 1991).
Virgin Islands: six potential locations.
Under its basic legislative responsibilities, GAO assessed the Department of Defense's (DOD) chemical weapons and related materiel disposal programs, focusing on: (1) the key factors affecting the costs and schedules; (2) actions the Army has taken to improve the programs; and (3) alternatives for improving the programs' effectiveness and efficiency. GAO noted that: (1) while there is general agreement about the need to destroy the chemical stockpile and related materiel, progress has slowed due to the lack of consensus among DOD and affected states and localities about the destruction method that should be used; (2) as a result, the costs and schedules for the disposal programs are uncertain; (3) however, they will cost more than the estimated $23.4 billion above current appropriations and take longer than currently planned; (4) the key factors affecting the programs include the public concerns about the safety of incineration, the environmental process, the legislative requirements, and the introduction of alternative disposal technologies; (5) the Chemical Stockpile Disposal Program's cost and schedule are largely driven by the degree to which states and local communities are in agreement with the proposed disposal method at the remaining stockpile sites; (6) based on program experience, reaching agreement has consistently taken longer than the Army anticipated; (7) until DOD and the affected states and localities reach agreement on a disposal method for the remaining stockpile sites, the Army will not be able to predict the Chemical Stockpile Disposal Program's cost and schedule with any degree of accuracy; (8) moreover, many of the problems experienced in the stockpile program are also likely to affect the Army's ability to implement the Nonstockpile Chemical Materiel Program; (9) in addition, more time is needed for the Army to prove that its proposed disposal method for the nonstockpile program will be safe and effective and accepted by the affected states 
and localities; (10) in December 1994, DOD designated the Army's chemical demilitarization program, consisting of both stockpile and nonstockpile munitions and materiel, as a major defense acquisition program; (11) the objectives of the designation were to stabilize the disposal schedules, control costs, and provide more discipline and higher levels of program oversight; (12) in addition, Army officials have identified cost-reduction initiatives, which are in various stages of assessment, that could reduce program costs by $673 million; (13) recognizing the difficulty of satisfactorily resolving the public concerns associated with each individual disposal location, suggestions have been made by members of the Congress, DOD officials, and others to change the programs' basic approach to destruction; and (14) however, the suggestions create trade-offs for decisionmakers and would require changes in the existing legal requirements.
Army maintenance depots and arsenals were established to support Army fighting units by providing repair and manufacturing capability, in concert with the private sector, to meet peacetime and contingency operational requirements. In recent years, the Army has taken steps to operate these facilities in a more business-like manner, including generating revenues from their output to support their operations. The number of these facilities has been reduced, and their workloads and staffs have declined significantly. This reflects the downsizing that began in the late 1980s following the end of the Cold War and the trend toward greater reliance on the private sector to meet many of the Army’s needs. The Army relies on both the public and the private sectors to meet its maintenance, overhaul, repair, and ordnance manufacturing needs. Army depots and arsenals have a long history of service, and they are subject to various legislative provisions that affect the work they do as well as how it is allocated between the public and private sectors. Army maintenance depots were established between 1941 and 1961 to support overhauls, repairs, and upgrades to nearly all of the Army’s ground and air combat systems. Before the depots were established, some maintenance and repair work was performed at the Army’s supply depots and arsenals and some was performed by the private sector. However, before 1941 much of the equipment in use was either repaired in the field or discarded. Depot workload can be classified into two major categories: end items and reparable secondary items. End items are the Army’s ground combat systems, communications systems, and helicopters. Secondary items include various assemblies and subassemblies of major end items, including helicopter rotor blades, circuit cards, pumps, and transmissions. 
Several depots, particularly Tobyhanna, also do some manufacturing, but generally for small quantities of individual items needed in support of depot overhaul and repair programs. In 1976, 10 depots performed depot maintenance in the continental United States. By 1988 that number had been reduced to eight as a result of downsizing following the Vietnam War. Between 1989 and 1995, Base Realignment and Closure (BRAC) Commission decisions resulted in the closure of three more depots and the ongoing realignment of two others. At the end of fiscal year 1998, the 5 Army depots employed about 11,200 civilians, a 48-percent reduction from the 21,500 in fiscal year 1989. In fiscal year 1998, the depots received revenues of about $1.4 billion. Since the mid-1980s, depots have generally not been able to hire new government civilian employees because of personnel ceilings and, therefore, have used contractor personnel to supplement their workforce as necessary to meet workload requirements. Like those of the other services, the operations of the Army depots are guided by legislative requirements. Section 2464 of title 10 provides for a Department of Defense (DOD)-maintained core logistics capability that is to be government-owned and -operated and that is sufficient to ensure the technical competence and resources necessary for an effective and timely response to a mobilization or other national emergency. Section 2466 prohibits the use of more than 50 percent of the funds made available in a fiscal year for depot-level maintenance and repair to contract for the performance of the work by nonfederal personnel. Section 2460 defines depot-level maintenance and repair. Section 2469 provides that DOD-performed depot-level maintenance and repair workloads valued at $3 million or more cannot be changed to contractor performance without the use of competitive procedures for competitions among public and private sector entities. 
A related provision in section 2470 provides that depot-level activities are eligible to compete for depot-level maintenance and repair workloads. The Army’s two remaining manufacturing arsenals were established in the 1800s to provide a primary manufacturing source for the military’s guns and other war-fighting equipment. Subsequently, in 1920, the Congress enacted the Arsenal Act, codified in its current form at 10 U.S.C. 4532. It requires that the Army have its supplies made in U.S. factories or arsenals provided they can produce the supplies on an economic basis. It also provides that the Secretary of the Army may abolish an arsenal considered unnecessary. It appears that the act was intended to keep government-owned arsenals from becoming idle and to preserve their existing capabilities to the extent the capabilities are considered necessary for the national defense. The Army implements the act by determining, prior to issuing a solicitation to industry, whether it is more economical to make a particular item using the manufacturing capacity of a U.S. factory or arsenal or to buy the item from a private sector source. Only if the Army decides to acquire the item from the private sector is a solicitation issued. As the domestic arms industry has developed, the Army has acquired from industry a greater portion of the supplies that in earlier years had been furnished by arsenals. Following World War II, the Army operated six major manufacturing arsenals. Since 1977, only two have remained in operation. Table 1.1 provides information on the six post-World War II arsenals, including operating periods and major product lines. Today the two arsenals manufacture or remanufacture a variety of weapons and weapon component parts, including towed howitzers, gun mounts, and gun tubes.
At the end of fiscal year 1998, the Rock Island and Watervliet facilities employed a total of about 2,430 civilians, a 46-percent reduction from a total of about 4,500 employees at the end of fiscal year 1989. In fiscal year 1998, the two arsenals received about $199 million in revenues. Funding for day-to-day operations of Army depots and arsenals is provided primarily through the Army Working Capital Fund. The services reimburse the working capital fund with revenues earned by the depots and arsenals for completed work based on hourly labor rates that are intended to recover operating costs, including material, labor, and overhead expenses. While Army depots and arsenals are primarily focused on providing the fighting forces with required equipment to support readiness objectives, the industrial fund was intended to optimize productivity and operational efficiencies. Army industrial activities are supposed to operate in a business-like manner, but they are expected to break even, generating neither profits nor losses. Nonetheless, these military facilities may sometimes find it difficult to follow business-like practices. For example, Army requirements may make it necessary to maintain the capability to perform certain industrial operations even though it would not seem economical—from a business perspective—to do so. Systems with older technology must be maintained even though acquiring repair parts becomes more difficult and expensive. If military customers need products that are inefficient to produce, the depots and arsenals must produce them anyway. To compensate the depots and arsenals for the cost of maintaining underutilized capacity that might be needed in the future, these activities receive supplemental funding in the operations and maintenance appropriation under an account entitled “underutilized plant capacity.” As shown in table 1.2, funding of this account has been reduced in recent years.
Army officials stated that the reduction was made to fund other higher-priority programs; however, they stated that in future years, this trend would likely be reversed. The Army Materiel Command (AMC) and its subordinate commands hold semiannual workload conferences to review, analyze, document, and assign work to the five depots. In contrast, the arsenals actively market their capabilities to DOD program management offices to identify potential customers. Despite differences in how they obtain their work, depots and arsenals are alike in how they set rates for their work. The process they use begins about 18 months prior to the start of the fiscal year in which maintenance and manufacturing will be performed. Depot and arsenal managers propose hourly rates to recover operating costs based on the anticipated level of future workload requirements, but rates are ultimately determined at the Department of Army and DOD levels. Rate setting is an iterative process that begins with the industrial activities and the Industrial Operations Command (IOC), a subordinate command under AMC. After they reach agreement, the proposed rates, which are included in consolidated depot and arsenal budgets, are forwarded for review up the chain of command. The reviewing commands frequently revise the rates initially requested by IOC based on past performance and other evolving workload and staffing information. When rates are reduced, the industrial activities must find ways to cut costs or increase workload to end the year with the desired financial outcome, which is usually a cumulative zero net operating result. However, even if the proposed rates are approved without modification, the performing industrial activity can end the year in better or worse financial shape than originally anticipated, depending on whether actual costs and workload are as anticipated.
This can necessitate a rate increase in a subsequent year to offset the losses of a prior year, or a rate reduction to offset profits. Depots and arsenals employ direct labor workers who charge time to finite job taskings, earning revenue for the business. In addition, they employ a number of indirect workers, such as shop supervisors and parts expediters, whose time cannot be related to a finite job order but nevertheless supports the depot maintenance and arsenal manufacturing process. The industrial facilities also employ a variety of general and administrative overhead personnel, such as production managers, technical specialists, financial managers, personnel officers, logisticians, contracting officers, computer programmers, and computer operators. While the time spent by these two categories of overhead personnel is difficult to relate to a finite job order, their costs are nevertheless reflected in the overall rates charged by the industrial activities. AMC is responsible for management control and oversight of the Army’s industrial facilities. The Army’s IOC—a subordinate command under AMC—had management responsibility for both arsenals and depots. That began to change in November 1997, when, under a pilot program, management responsibility for workloading and overseeing work at the Tobyhanna Army Depot was transferred to the Communications-Electronics Command, the depot’s major customer. The Army completed the transfer of operational command and control for the Tobyhanna depot in October 1998 and plans to complete transfer of management responsibilities for the other depots in October 1999. Each depot will be aligned with its major customer, which is also the coordinating inventory control point for the depot’s products. Table 1.3 summarizes the upcoming management relationship for each Army depot and lists its principal workloads.
Upon completion of the transfers of management responsibilities for depots, the IOC workforce will be reduced by about 280 positions out of a current staff level of about 1,400 personnel at the end of fiscal year 1998. The gaining commands will not get additional manpower positions. AMC is assuming that the gaining commands will be able to take on these added responsibilities with no increase in staff. These reductions are in addition to the 1,720 personnel reductions that IOC previously planned to make within the individual depots during fiscal years 1998 and 1999—many of which were put on hold because of section 364 of the National Defense Authorization Act for Fiscal Year 1998. Section 364 prohibits the Army from initiating a reduction in force at five Army depots participating in the demonstration and testing of the Army Workload and Performance System (AWPS) until after the Secretary of the Army certifies to the Congress that AWPS is fully operational. It exempts reductions undertaken to implement 1995 BRAC decisions. Current plans call for the arsenals to remain under the management and control of IOC. Also, the arsenals are not currently precluded by section 364 from reducing their workforce. Accordingly, to adjust the workforce to more accurately reflect the current workload, the two arsenals are in the process of reducing their workforce by a total of over 300 positions out of 2,700. Figure 1.1 shows the locations of the Army’s industrial facilities and each major command to be responsible for management control and oversight. In recent years, several audit reports have highlighted the Army’s inability to support its personnel requirements on the basis of analytically based workload forecasts. For example, the Army Audit Agency reported in 1992 and 1994 that the Army did not know its workload and thus could not justify personnel needs or budgets. 
In several more recent audits, the Army Audit Agency recommended declaration of a material weakness in relating personnel requirements to workload and budget. In DOD’s fiscal year 1997 Annual Statement of Assurance on Management Controls, DOD noted a material weakness in its manpower requirements determination system. It noted that the current system for manpower requirements determination lacked the ability to link workload, manpower requirements, and dollars. Thus, the Army was not capable of rationally predicting future civilian manpower requirements based on workload. As a result, managers at all levels did not have the information they needed to improve work performance, improve organizational efficiency, and determine and support staffing needs, manpower budgets, and personnel reductions. In response to concerns about its workforce planning, the Army has sought to implement a two-pronged approach to evaluating its workforce requirements. This approach includes implementing a 12-step methodology and developing an automated system for depots, arsenals, and ammunition plants, referred to as AWPS. In February 1998, we reported that the Army had developed this corrective action plan to resolve its material weakness but that it might have difficulty achieving the expected completion date. The 12-step methodology, adopted by the Army in April 1996, is a largely manual process that provides a snapshot of personnel requirements and is designed to link those requirements to workload at various headquarters commands and organizations. The methodology includes analyses of missions and functions, opportunities to improve processes, workload drivers, workforce options (including using civilian versus military personnel and contracting out versus using in-house personnel), and organizational structure. It also looks for ways to consolidate indirect and overhead personnel assigned to Army industrial activities and to use them more effectively.
Figure 1.2 shows the components of the 12-step method. The development of AWPS resulted from an Army effort initiated in July 1995 to have a contractor survey leading-edge commercial and public sector entities to identify their “best practices” for determining personnel requirements based on a detailed analysis of work to be performed. The contractor concluded that a computer-based system developed by the Naval Sea Logistics Center, Pacific, for use in naval shipyards provided the greatest potential for documenting personnel requirements at Army industrial activities. Consequently, in March 1996, the Army provided funding to a support contractor and the Navy to develop and implement a modified version of the Navy’s computer-based process at Army maintenance depots to support the maintenance function. AWPS is designed to facilitate evaluation of what-if questions, including workload and personnel requirements analyses. The evolving system currently consists of three modules—performance measurement control, workload forecasting, and workforce forecasting—to integrate workload and workforce information to determine personnel requirements for various levels of work. The system provides two primary management information products—information concerning the production status of specific project orders and information concerning workload forecasts and related workforce requirements. The Chairman of the Subcommittee on Readiness, House Committee on National Security, asked us to examine selected workforce issues pertaining to the Army’s depots, focusing particularly on the Corpus Christi depot, where significant difficulties were encountered in implementing a planned personnel reduction during 1997. Subsequently, Congressman Lane Evans requested that we examine workforce issues at the Army’s manufacturing arsenals.
Accordingly, this report focuses on (1) whether the Army had a sound basis for personnel reductions planned at its depots during a 2-year period ending in fiscal year 1999; (2) progress the Army has made in developing an automated system for making depot staffing decisions based on workload estimates; (3) other factors that may adversely affect the Army’s ability to improve the cost-effectiveness of its depot maintenance programs and operations; and (4) workload trends, staffing, and productivity issues at the Army’s manufacturing arsenals. This is one of a series of reports (see related GAO products at the end of this report) addressing DOD’s industrial policies, outsourcing plans, activity closures, and the allocation of industrial work between the public and private sectors. To determine whether the Army had a sound basis for personnel reductions, we reviewed the rationale, support, status, and resulting impact of the Army’s proposal to reduce staffing at its depots. We interviewed resource management personnel at Army headquarters, Army Materiel Command, the Army Industrial Operations Command, the Army Aviation and Missile Command, and the Corpus Christi Army Depot, where we obtained information on the Army’s reasons for proposed staffing reductions, and reviewed documentation supporting the Army’s proposed staff reduction plan. We discussed staff reduction and related issues with Army Audit Agency officials. To ascertain the Army’s progress in developing workload-based staffing estimates, we met with officials from the Naval Sea Logistics Center, Pacific, which is modifying previously existing Navy programs to fit the Army depot and arsenal scenarios. We also interviewed key Norfolk Naval Shipyard and Navy headquarters personnel who have used the Navy’s automated workforce planning system. We visited the Corpus Christi, Letterkenny, and Tobyhanna depots to obtain information on the implementation of AWPS and to observe depot employees’ use of AWPS-generated data.
We also reviewed the results of the Army Audit Agency’s audit work regarding the implementation of personnel downsizing and the development and testing of the AWPS system. To identify factors that may adversely affect the Army’s ability to improve the cost-effectiveness of its depot maintenance operations, we analyzed financial and productivity data for each of the depots and discussed emerging issues with Headquarters IOC, depot, and commodity command officials. We also visited the Corpus Christi, Letterkenny, and Tobyhanna depots to obtain information on various aspects of their operation and management. We visited the Naval Air Systems Command, Patuxent River, Maryland, to follow up on Corpus Christi Army Depot problems associated with performing Navy workload. During subsequent depot and arsenal visits, we asked questions about the scheduling of work, parts availability, overtime, movement of personnel, and related topics. We also visited selected Army repair facilities that perform depot-level tasks but are not recognized as traditional depot-level maintenance providers. We also conducted literature and Internet searches on appropriate topics. To review workload, staffing, and productivity issues at Army arsenals, we interviewed personnel at the Army Industrial Operations Command, which provides management control and oversight for the manufacturing arsenals. We reviewed back-up documentation supporting proposed staffing reductions and assessed the reasonableness of, and support for, the assumptions on which the staff reduction proposals were based. We visited the two arsenals and met with a variety of key management personnel to discuss and obtain their views on various workload and staffing issues. We performed work at the following activities:

Department of Army Headquarters, Washington, D.C.
Army Materiel Command, Alexandria, Va.
Army Industrial Operations Command, Rock Island, Ill.
Army Aviation and Missile Command, Huntsville, Ala.
Corpus Christi Army Depot, Corpus Christi, Tex.
Letterkenny Army Depot, Chambersburg, Pa.
Tobyhanna Army Depot, Tobyhanna, Pa.
Rock Island Arsenal, Rock Island, Ill.
Watervliet Arsenal, Watervliet, N.Y.
Aviation Classification Repair Activities Depot (Army National Guard), Groton, Conn.
Fort Campbell, Fort Campbell, Ky.
Fort Hood, Killeen, Tex.
Management Engineering Activity, Chambersburg, Pa.
Naval Air Systems Command, Patuxent River, Md.
Naval Sea Systems Command, Arlington, Va.
Commander in Chief, U.S. Atlantic Fleet, Norfolk, Va.
Norfolk Naval Shipyard, Portsmouth, Va.
Army Audit Agency, Alexandria, Va.

We conducted our work between September 1997 and August 1998 in accordance with generally accepted government auditing standards and generally relied upon Army-provided data. While reviewing AWPS-generated data, we noted significant errors, particularly early in the audit, and did not use that information other than to note its occurrence. IOC’s analysis supporting its plan to eliminate about 1,720 depot jobs over a 2-year period ending in fiscal year 1999 contained a variety of weaknesses. Those weaknesses accentuated previously existing concerns about the adequacy of the Army’s workforce planning. The lack of an effective manpower requirements determination process has been an Army-declared internal control weakness, for which several corrective actions are in process, including the development and implementation of an automated workload and workforce planning system. An initial attempt to implement the planned reductions at the Corpus Christi Army Depot proved chaotic and resulted in unintended consequences from the termination of direct labor employees who were needed to support depot maintenance production requirements. While the Army was proceeding with efforts to strengthen its workforce planning capabilities during this time, those capabilities were not sufficiently developed to be used to support IOC’s analysis.
The Army has made progress in establishing AWPS—its means for analyzing and documenting personnel requirements for the maintenance function—and is approaching the point of certifying its operational status to the Congress. However, while the current version of the system addresses direct labor requirements, it does not address requirements for overhead personnel—an important issue in the ill-planned 1997 reduction of personnel at the depot in Corpus Christi, Texas. The Army’s plan for reducing the workforce at its depots had a number of weaknesses and did not appear to be consistent with its own policy guidance. Army Regulation 570-4 (Manpower Management: Manpower and Equipment Control) states that staffing levels are to be based on workloads to be performed. However, our work indicates that the Army’s plan for reducing staff levels at its depots was developed primarily in response to affordability concerns and was intended to lower the hourly rates depots charge their customers. The plan was not supported by a detailed comparison of planned workload and related personnel requirements. Army officials stated that incorporation of the 12-step process into AWPS will help the Army address affordability while directly linking manpower to funded workload, assuming that the Army ensures accuracy and reliability of AWPS data input, both by the planners and via the shop floor. In July 1996, as part of its review of proposed rates, AMC headquarters determined that the hourly rates proposed by the Army depots for maintenance work in fiscal year 1998 were generally unaffordable. It concluded that depot customers could not afford to purchase the work they needed. The Army’s depot composite rate for fiscal year 1998 was over 11 percent higher than the composite rate for fiscal year 1996. 
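Because depots price their work to recover costs, a mandated rate cut with a fixed workload translates directly into a required cost reduction. The sketch below illustrates that break-even arithmetic; the dollar figures are hypothetical for illustration, not actual Army budget data, and the functions are ours rather than any Army costing tool.

```python
def breakeven_rate(operating_costs, direct_labor_hours):
    """Hourly rate needed to recover all operating costs (break even)."""
    return operating_costs / direct_labor_hours

def required_cost_cut(operating_costs, direct_labor_hours, mandated_rate):
    """Cost reduction needed if a lower rate is imposed and workload is fixed."""
    recoverable = mandated_rate * direct_labor_hours
    return max(operating_costs - recoverable, 0.0)

# Hypothetical depot: $120 million in annual costs, 1 million direct labor hours.
rate = breakeven_rate(120_000_000, 1_000_000)
print(rate)  # 120.0 -- dollars per hour needed to break even

# If headquarters approves only a $110 rate, the depot must cut costs
# (or add workload) to avoid a net operating loss.
print(required_cost_cut(120_000_000, 1_000_000, 110.0))  # 10000000.0
```

This is why, as discussed above, a lowered rate forced IOC toward staff reductions: with customer workload fixed, costs were the only remaining lever.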
Table 2.1 provides a comparison of the initial rates requested by each Army depot for fiscal year 1998, the final rates approved for that year by Headquarters AMC and the Army staff, and the percentage difference. AMC Headquarters officials stated that in recent years the depot rates had increased to the point that, in some cases, they were not affordable. IOC officials stated that since they had to reduce the rates quickly, they had little choice but to require staff reductions. In fiscal year 1997, reported personnel costs made up about 46 percent of depots’ operating costs, material and supplies about 29 percent, and other miscellaneous costs about 25 percent. As shown in table 2.1, the rate reduction varied by depot. Unlike at the other depots, where IOC set a lower rate than requested, at the Red River depot IOC set the rate higher than depot officials requested. However, the rate set was still not high enough to cover estimated costs at that depot. An IOC official stated that if the Red River depot had charged its customers based on the estimated costs of operations at that facility, including recovery of previous operating losses, the composite rate would have been over $174 per hour in fiscal year 1998. Having decided to reduce the rates through staffing cuts, IOC then had to develop a depot staff reduction plan. The initial plan developed by IOC headquarters personnel eliminated about 1,720 depot jobs. The proposal would have affected personnel at three of the five maintenance depots—Corpus Christi, Letterkenny, and Red River. To determine the staff reduction plan, IOC headquarters used a methodology that considered direct labor requirements, overhead requirements, and employee overtime estimates.
We analyzed these factors and determined that (1) the direct labor requirements were based on unproven productivity assumptions, (2) the overhead personnel requirements were based on an imprecise ratio analysis, and (3) unrealistic quantities of overtime were factored into the analysis. Table 2.2 shows the number of positions originally scheduled for elimination at each depot for fiscal years 1998 and 1999. To determine and justify the number of required direct labor employees, IOC divided the total anticipated workload (measured in direct labor hours) by a productive workyear factor. This factor represents the amount of work a direct labor employee is estimated to be able to accomplish in 1 fiscal year. IOC used a variety of assumptions to support its position that the number of depot personnel could be reduced. IOC’s analysis used productivity factors substantially higher than either the DOD productive workyear standard or the historical averages Army depots had achieved in the recent past. For example, IOC’s analysis assumed that each Corpus Christi depot direct labor employee would accomplish 1,694 hours of billable time, not including paid overtime hours, in a workyear. However, while DOD’s productive workyear standard for direct labor depot maintenance employees is 1,615 hours per person, Corpus Christi depot direct labor employees averaged a reported 1,460 hours of billable time in fiscal year 1997 and 1,528 hours in fiscal year 1996. By using the higher productivity level, the IOC analysis showed that the Corpus Christi depot would need 14 percent fewer employees. Table 2.3 provides a comparison of IOC’s worker productivity assumptions for each depot and the actual reported productivity levels for fiscal year 1997. While the DOD productive workyear standard assumes that each direct labor worker will achieve 1,615 hours of billable time each year, the depots have been unable to achieve this goal.
Several factors affect this productivity level. First, due to workforce seniority, Corpus Christi depot workers have recently reported using an average of 196 hours of paid annual leave per year. This is higher than the reported 175 hours of annual leave used on average at all Army activities as well as the reported 167-hour average at other government agencies. In addition, Corpus Christi depot employees used a reported average of about 112 hours of sick leave per year—more sick leave than they earn in a given year and about 50 percent more than at other Army, DOD, and government activities. The reported Army-wide average sick leave use was 73 hours; the DOD average, 78 hours; and the governmentwide average, 74 hours. Several depot management officials commented that while they monitor sick leave usage, it has increased partly as a result of the older workforce and partly as a result of the Federal Employees Family Friendly Leave Act, Public Law 103-338, October 22, 1994, which allows the use of sick leave to care for family members and for bereavement purposes. Second, because most depot employees at the Corpus Christi and Red River depots work a compressed schedule of four 10-hour workdays, they receive 100 hours of paid holiday leave per year. In contrast, a government employee who works a 5-day, 8-hour workweek receives 80 hours of paid holiday leave per year. Third, the depots’ direct labor workers charge varying amounts of overhead (nonbillable) time for training, shop cleanup, job administration, temporary supervision, certain union activities, and other indirect activities. In fiscal year 1997, direct labor workers’ charges to overhead job orders ranged from a reported average of 125 hours at the Letterkenny depot to 205 hours at the Corpus Christi depot.
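The staffing arithmetic underlying IOC’s direct labor computation can be sketched as follows. The leave and overhead-charge figures are the reported Corpus Christi averages cited above; deriving billable time by subtracting them from a 2,080-hour paid workyear is our illustrative simplification, not IOC’s actual method, and the workload figure is hypothetical.

```python
def billable_hours(paid_year=2080, annual_leave=0, sick_leave=0,
                   holiday_leave=0, indirect_charges=0):
    """Rough estimate of billable (direct) hours left in a paid workyear."""
    return paid_year - annual_leave - sick_leave - holiday_leave - indirect_charges

def direct_staff_needed(workload_hours, productive_workyear):
    """IOC's basic formula: anticipated direct labor hours per worker-year."""
    return workload_hours / productive_workyear

# Reported Corpus Christi averages: 196 hours annual leave, 112 sick leave,
# 100 holiday leave, and 205 hours charged to overhead job orders.
est = billable_hours(annual_leave=196, sick_leave=112,
                     holiday_leave=100, indirect_charges=205)
print(est)  # 1467 -- close to the 1,460 billable hours reported for fiscal 1997

# Raising the factor from the 1,460-hour fiscal 1997 actual to IOC's assumed
# 1,694 hours cuts the computed staffing requirement by about 14 percent.
workload = 1_000_000  # hypothetical annual direct labor hours
cut = 1 - direct_staff_needed(workload, 1694) / direct_staff_needed(workload, 1460)
print(round(cut * 100, 1))  # 13.8
```

The second calculation shows why the choice of productive workyear factor dominates the result: the staffing requirement falls in direct proportion to any increase in the assumed hours per worker.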
To determine and justify the number of required overhead employees, IOC used a ratio analysis that essentially allowed a specified percentage of overhead employees for each direct labor worker. IOC officials told us that they believed the depots had too many overhead personnel and they had developed a methodology to base overhead personnel requirements on predetermined ratios of direct to overhead employees. IOC developed its methodology and the ratios based on actual direct and overhead employee ratios for a private-sector firm tasked with operating a government-owned, contractor-operated Army ammunition plant. Different ratios were assigned based on the number of functions each depot organization performs—such as maintenance, ammunition storage, or base operation support. The IOC ratio analysis assumed that for every 100 direct labor employees, a single-function depot organization could have no more than 40 overhead personnel, a dual-function depot organization no more than 50 overhead personnel, and a three-function depot organization no more than 60 overhead personnel. Table 2.4 provides a summary of ratios IOC used to determine the number of overhead employees. A number of concerns have been raised about the use of these ratios. For example, in 1997 the then Deputy Under Secretary of Defense (Logistics) stated that the use of such ratios may provide only marginal utility in identifying potentially excess employees and inefficient depot operations. He noted that ratio analysis may not consider the value of productivity enhancements that result from the acquisition of increasingly sophisticated technology to accomplish depot missions, which in turn causes direct labor requirements to decrease, while the overhead labor requirements increase. 
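Expressed as a computation, the ratio ceilings described above amount to a simple lookup. The 40, 50, and 60 overhead positions per 100 direct workers come from the text’s summary of table 2.4; the function itself is our paraphrase of the method, not IOC’s actual tool.

```python
# Maximum overhead employees allowed per direct labor employee, keyed by the
# number of functions (e.g., maintenance, ammunition storage, base support)
# that the depot organization performs.
OVERHEAD_CEILING = {1: 0.40, 2: 0.50, 3: 0.60}

def max_overhead(direct_employees, functions):
    """Ceiling on overhead staff under IOC's ratio analysis (whole people)."""
    return round(direct_employees * OVERHEAD_CEILING[functions])

print(max_overhead(1000, 1))  # 400
print(max_overhead(1000, 2))  # 500
print(max_overhead(1000, 3))  # 600
```

The simplicity of the lookup is also its weakness: as the concerns cited in this chapter note, a fixed ratio ignores how technology shifts work from direct to overhead labor.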
Depot officials similarly noted that technology enhancements over the past few years have significantly reduced direct labor requirements, while sometimes increasing overhead in the depots, particularly when training and maintenance costs increase. They noted that IOC’s methodology did not consider the impact of various efficiency enhancements that eliminated substantial numbers of direct labor positions and added a smaller number of overhead positions. These enhancements include the replacement of conventional labor-intensive lathes with state-of-the-art numerically controlled devices, of hundreds of conventional draftsmen with a few technicians having computer-aided design skills, and of numerous circuit card repair technicians with multimillion-dollar devices that make and repair circuit cards. Our discussions with depot officials and a support contractor raised similar concerns, including the failure to consider and analyze (1) differences in the complexity of work being performed in different depots, (2) requirements for government organizations to maintain certain overhead activities that are not required in the private sector, (3) differing policies in the way depots classify direct and overhead labor, (4) allowances for private sector contractors that perform supplemental labor, (5) the extent to which direct personnel work overtime, and (6) the extent to which contractors perform overhead functions. Army officials stated that the ratios were not developed using a sound analytical basis, but said that determining overhead requirements is not, by its very nature, a precise science. While we recognize the challenge that this presents, we have stated in the past that until a costing system, computer-based methodology, and 12-step methodology are fully developed and integrated, the Army cannot be sure that it has the most efficient and cost-effective workforce.
Although the 12-step process also calls for the use of ratios in some cases, those ratios are based on methodologies that produce finer degrees of precision. The process also calls for the use of more appropriate mixes of fixed and variable overhead personnel. Nonetheless, we share IOC officials’ concerns that the Army depots have too much overhead. We have reported that this is in part a consequence of having underutilized depot facilities. Thus, personnel reductions alone, without addressing excess infrastructure issues, cannot resolve the Army’s problem of increasing maintenance costs reflected in its depot rate structure. In commenting on a draft of this report, DOD acknowledged that the methodology the Army used to project workload requirements lacked the precision that would have been available if AWPS had been fully implemented and workload projections were more realistic. DOD stated that the personnel reduction process received intense scrutiny and that implementation of its plan achieved its main objective: a reduction in the indirect personnel costs that it believed would otherwise lead to unaffordable rates. IOC’s staff reduction plan was developed using the assumption that, when the suggested personnel restructuring was completed, the remaining direct labor employees would be expected to work varying amounts of overtime to accomplish their planned maintenance workloads. In fiscal years 1998 and 1999, Corpus Christi Army Depot direct employees would be expected to work overtime averaging about 16 and 12 percent, respectively, of their regular-time hours. IOC personnel stated that it is less expensive to pay overtime rates than to have more employees charging an equivalent number of straight-time hours, particularly given the uncertainties regarding the amount of forecasted workload that might not materialize. Historically, Army depot employees have performed varying amounts of overtime.
For example, in fiscal year 1996, the Army maintenance depots reportedly averaged 13-percent overtime, with individual depot overtime rates ranging from a low of about 4 percent at the Tobyhanna depot to a high of about 19 percent at the Corpus Christi depot. Although Corpus Christi originally planned for about 6-percent overtime for direct personnel during fiscal year 1998, the plan was revised to its current 15.8 percent. Unplanned requirements caused average reported overtime by direct employees to approach 30 percent in some months, with individual rates ranging from 0 to over 50 percent. Using overtime can provide a cushion against workload shortages and is a short-term alternative to hiring people to cover unanticipated increases in workloads; however, planning for average overtime rates of up to 15.8 percent appears to be beyond the norm for such types of activities, particularly when unplanned requirements could drive overtime usage substantially above the planned levels. For example, we compared the 1997 Bureau of Labor Statistics durable goods manufacturing workweek, including overtime, which averaged about 42.8 hours, with comparable data for Corpus Christi and noted that a 15.8-percent overtime figure corresponds to a 46.3-hour workweek, while 30- and 50-percent overtime figures correspond to workweeks of 52 and 60 hours, respectively. AMC efforts to implement its planned reductions at its Corpus Christi depot proved to be extremely chaotic and resulted in unintended consequences. The enactment of section 364 of the 1998 Defense Authorization Act restricted further personnel reductions, except those that were BRAC-related. Army officials stated that when it became apparent that the incentives being offered to indirect personnel in exchange for voluntary employment terminations would not achieve the desired reduction of 336 employees, similar offers were extended to include direct personnel. 
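The workweek figures cited above follow directly from applying each overtime percentage to a standard workweek. The sketch below assumes a 40-hour base week (an assumption for illustration; the report does not state the base explicitly):

```python
# Illustrative check of the workweek figures cited above, assuming a
# standard 40-hour base workweek (an assumption, not stated in the report).
BASE_WEEK_HOURS = 40.0

def workweek_hours(overtime_pct: float) -> float:
    """Average weekly hours when overtime averages the given
    percentage of regular (straight-time) hours."""
    return BASE_WEEK_HOURS * (1 + overtime_pct / 100)

for pct in (15.8, 30, 50):
    print(f"{pct}% overtime -> {workweek_hours(pct):.1f}-hour workweek")
```

Under that assumption, 15.8-percent overtime yields the 46.3-hour workweek cited, and 30 and 50 percent yield 52- and 60-hour workweeks, respectively.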
These officials stated that incentive offers were made to direct labor employees only when the position held by the terminated direct laborer could be filled by an indirect labor person who otherwise would face involuntary separation. Notwithstanding that requirement, any depot employee—indirect or direct—was allowed to separate until the desired goal of eliminating 336 employees was reached. Consequently, some direct employees separated, which further exacerbated an existing productivity problem. The congressional action that followed postponed completion of the staff reduction plan until AWPS was certified as operational. According to headquarters AMC officials, command industrial activities had too many overhead personnel and the depots could eliminate some of these positions without adversely affecting productivity. To avoid an involuntary reduction in force targeting overhead positions, they developed a plan to encourage voluntary separations. AMC authorized the use of financial incentives, including cash payments and early retirement benefits, and authorized the extension of this offer to direct personnel. At the Corpus Christi depot, 336 personnel voluntarily terminated their employment in 1997 under the Army’s staff restructuring plan—55 personnel left through normal attrition and 281 personnel were offered financial incentives to encourage their terminations. In June and July 1997, this latter group was tentatively approved for various financial incentives in return for voluntary termination of employment. By the end of June 1997, paperwork authorizing voluntary retirements with cash incentives was approved for some employees while still pending for others. Some left the Corpus Christi area thinking they had been granted authorization to leave and receive cash incentive payments. 
However, at this same time, headquarters AMC was addressing numerous questions regarding the appropriateness of the staff reduction effort, given the size of the depot’s scheduled workload. As a result of these questions, Headquarters, AMC, asked the Army Audit Agency to review and comment on the documentation supporting the recommended staff cuts. Army Audit Agency personnel compared the IOC’s assessment of personnel requirements against computer-generated forecasts from the AWPS, which was still under development. The auditors, using AWPS-generated products as their primary support, concluded on June 27, 1997, that personnel cuts were not necessary. Furthermore, the auditors concluded that, based on AWPS calculations, rather than lose personnel, Corpus Christi depot would need to hire 44 additional personnel. On July 1, 1997, in response to the Army Audit Agency findings, the Army directed its personnel offices to stop processing paperwork for voluntary separations and financial incentives. On July 2, 1997, Corpus Christi personnel officers were directed to recall the more than 190 employees whose applications had not been fully approved. This event caused a great deal of concern, both among the affected personnel and the workforce in general. According to cognizant Corpus Christi depot personnel officials, some of the employees had taken separation leave, others had sold their residences, and still others had moved out of state and bought new homes. Subsequently, the Army organized a task force including representatives from AMC, IOC, the Army Audit Agency, and depot management to review and validate information contained in the AWPS computational database. The team found that one major Corpus Christi customer had incorrectly coded unfunded workload requirements totaling $70 million as if they were funded, having the effect of overstating personnel requirements. 
This process left unclear the precise number of employees that were needed to support the approved depot workload. Nevertheless, after 3 to 4 weeks of what depot officials described as zero productivity, the Army declared that documentation supporting IOC’s recommended reductions was accurate, and employees were given permission to depart. In offering financial separation incentives at the Corpus Christi depot during fiscal year 1997, AMC did not limit the separation opportunities to overhead personnel. AMC officials did not think the desired number of workers would volunteer if the incentives were restricted to overhead personnel only. Further, headquarters personnel did not want to require involuntary separations. Of the 281 personnel separating with incentives from the Corpus Christi depot, 147 were classified as direct labor and 134 as overhead personnel. Including those separating without incentives, 187 direct labor employees were separated from Corpus Christi. Given the potential imbalances in the workforce caused by the planned personnel separations, Corpus Christi management and union personnel jointly developed a plan to transfer indirect employees to fill vacated direct labor jobs. These procedures were adopted before any incentive offers were made and were designed to avoid the involuntary separation of indirect personnel by retraining them to assume direct labor jobs vacated by senior personnel accepting incentive offers. The plan required that 49 overhead employees complete various training programs before they could assume the targeted direct labor positions. However, progress toward achieving these objectives has been slower than expected. 
The depot initially expected to backfill vacant direct labor jobs by January 1998, but in May 1998 when we visited the depot, only one-third of the 49 overhead personnel scheduled to be retrained had moved to their newly assigned jobs and begun their conversion training; by mid-July, depot officials advised that 80 percent had moved to new positions. In commenting on a draft of this report, Army officials stated that these conversions were scheduled to be completed in November 1998. However, depot officials also told us that it takes between 3 and 4 years to retrain a typical indirect employee as a direct employee. According to depot personnel, the loss of 187 experienced direct labor employees exacerbated the existing productivity problem at the Corpus Christi depot. To fill the need for direct labor, employees worked a reported average of 19 percent overtime, and the depot had to use 113 contractor field team personnel in addition to the 70 contractor personnel already working in the depot. Nonetheless, the depot has had major problems meeting its production schedule and, as discussed further in the next chapter, may lose repair work from the Navy, except for crash damage work. Subsequently, the Congress enacted the section 364 legislation, which was effective November 18, 1997, postponing involuntary reductions until the Army had certified it had an operational automated system for determining workload and personnel staffing. As a result, the balance of IOC’s proposed staff reductions planned for fiscal year 1999 was deferred. Army efforts to develop AWPS have proceeded to the point that the required certification to the Congress of its operational capability is expected soon. Even so, efforts will be required to ensure that accurate and consistent workload forecasting information is input to the system as it is used over time. 
The Army recently completed development and prototype testing of a system enhancement to provide automated support for determining indirect and overhead personnel requirements. Based on our draft report recommendations, the Army plans to postpone AWPS certification until this system improvement is operational at all five maintenance depots. In May 1996, the Army completed installation and prototype testing of the AWPS at the Corpus Christi Army Depot. In June 1997, it announced plans to extend the AWPS process to other Army industrial facilities, including manufacturing arsenals and ammunition storage sites. At the same time, the Army expected that implementation of AWPS at the five maintenance depots would be completed in August 1997. Congressional certification as required by section 364 of the 1998 Defense Authorization Act has not yet occurred. In March and April 1998, a team of representatives from various AMC activities, in consultation with the Army Audit Agency, developed AWPS acceptance criteria that were later approved by the Assistant Secretary of the Army for Manpower and Reserve Affairs. Army auditors compared acceptance criteria to actual demonstrated experience and reported that the system is operational at all five depots, system programming logic is reasonably sound, and AWPS performance experience satisfies the Army’s acceptance criteria. In August 1998, Army officials stated that the Secretary of the Army could make the mandated certification of successful implementation of computer-based workload and personnel forecasting procedures at Army maintenance depots within the next few months. Army officials stated that several planned system enhancements have not yet been implemented, but they do not believe these items would preclude the Secretary from certifying successful completion of AWPS implementation. 
However, in its written comments to our draft report, DOD stated that the Army now plans to postpone AWPS certification until an automated support module for determining indirect and overhead personnel requirements is fully operational at each of the five maintenance depots. Assuming successful system implementation, future reliability of the system will depend upon the availability and entry of accurate and consistent data imported to AWPS and used to generate system products. The AWPS system provides three primary management information products: information concerning production performance on specific project orders, workload forecasts, and related workforce requirements. The AWPS system receives and processes data from several computerized Army support systems, including the Standard Depot System, Automated Time Attendance and Production System, Headquarters Application System, and Maintenance Data Management System. The Standard Depot System and the Automated Time Attendance and Production System input project status and expense information from the depot perspective. The Headquarters Application System provides status and planned workload data from the IOC perspective, and the Maintenance Data Management System provides workload data from the Army commodity command (major customer) perspective. In 1997, Army leadership asked the Army Audit Agency to review and validate the proposed depot personnel reductions. Although the system was still being developed, this early experience demonstrated the vulnerability of personnel requirement statements if the computational database contains errors and inconsistencies. The Army Audit Agency identified problems that resulted because AWPS-generated staffing estimates were based on inaccurate workload forecasts imported to the AWPS computational database. 
During the implementation period, the Army periodically compared AWPS data with similar information contained in the other computerized support systems and found numerous inconsistencies. Other data inaccuracies stemmed from employees’ not correctly charging time to job codes on which they were working and the reporting of job codes that were not recognized by the AWPS system. In July 1998, the Army Audit Agency reported that comparisons of data contained in AWPS and several support systems have improved to the point that system managers believe the system logic and AWPS-processed data are reasonably sound. As of August 10, 1998, the Army had not updated and entered several critical items into the automated workforce forecasting subsystems. These items included (1) updating personnel requirements for overhead personnel based on the approved 12-step process and (2) developing a database of employee skills and a breakdown of depot workload tasks by required job skills. However, as noted in its comments to a draft of this report, DOD stated that the Army planned to postpone certifying this system as operational until it incorporates automated procedures for determining indirect personnel requirements. This should enhance the effectiveness of the AWPS system. AWPS was initially envisioned only as a tool for documenting requirements for direct labor. However, in May 1998 the Army determined that it would integrate an automated version of the 12-step process into the AWPS system. For each maintenance shop and support function, the model estimates the fixed and variable overhead personnel needed to support the direct workload. Because the model is customized to meet individual depot needs, a 50-person sheet metal shop may have overhead requirements different from those of a similarly sized electronics shop. 
In October 1998, Army officials stated that the Army had installed an automated 12-step process for predicting overhead personnel requirements at each of the five maintenance depots and that the depots were developing input data required by the system’s computational database. The Army also plans to enhance the current AWPS system by adding an automated database reflecting the specific skills of each depot’s employees. Work on this system enhancement is expected to be completed in January 1999. The Army anticipates that the automated database will enable the depots to estimate personnel requirements for each specific job specialty and facilitate identification and movement of skilled workers between shops to offset short-term labor imbalances. The Army did not have a sound methodology for projecting workforce requirements; this led to a highly undesirable set of events that resulted in the voluntary separation of direct labor employees, which adversely affected employees and depot productivity. Also, given the need to use contract labor and the plan to have depot employees consistently work substantial amounts of overtime, it is questionable whether all of the reductions of direct labor personnel were appropriate. This situation also illustrates the challenge of targeting reductions at the depots in areas where there are excess personnel and providing the required training to workers when skill imbalances occur as a result of transfers. We believe the Army’s inability to deal with the perceived need for reducing overhead requirements prompted the chaotic staff reduction effort at the Corpus Christi depot. Further, incorporation of the capability to address overhead requirements is an essential element of an effective AWPS system. The Army’s current plan to postpone certifying the AWPS system as operational until it incorporates procedures for determining indirect personnel requirements should enhance the overall effectiveness of the system. 
We recommend that the Secretary of Defense require the Secretary of the Army, in making future personnel reductions in Army depots, to more clearly target specific functional areas, activities, or skill areas where reductions are needed, based on the workload required to be performed. We also recommend that the Secretary of the Army complete the incorporation of an analysis of overhead requirements into AWPS before certifying the system pursuant to section 364. DOD concurred with the recommendations. It stated that the development and testing of an automated process for predicting indirect and overhead personnel requirements would be completed before the system is certified as operational at maintenance depots. We modified our conclusions and recommendations to reflect the actions being taken by the Army in response to our draft report. Specifically, we now recommend that the Army complete ongoing actions that it initiated in response to our draft report recommendations. We also incorporated technical comments that were provided by DOD where appropriate. While the Army has made progress in establishing an automated process for analyzing and documenting personnel requirements, it is still faced with larger issues and factors that overshadow efforts to improve workload forecasting and efficient depot operations. First, workload estimates have been subject to frequent fluctuation and uncertainty to such an extent that it is difficult to use these projections as a basis for analyzing workforce requirements. Second, DOD and Army policies have resulted in the transfer of Army depot workloads to other government-owned repair facilities and private sector contractors without corresponding reductions in depot facilities and capacity. It is uncertain to what extent workloads will be assigned to Army depots in the future. 
Third, depot efficiency has been affected by other factors: lower-than-anticipated worker productivity, inefficient use of personnel resources, and the lack of timely availability of certain necessary repair parts. Workload estimates for Army maintenance depots vary substantially over time due to the reprogramming of operations and maintenance appropriation funding and unanticipated changes in customer requirements. The Army’s personnel budgets and staffing authorizations are generally based on workload estimates established 18 to 24 months before new personnel are hired or excess employees are terminated. Therefore, if actual workload is less than previously estimated, the depot is left with excess staff. Conversely, if actual workload is greater than previously estimated, the depot would have fewer staff than it needs to accomplish assigned work. Our work shows that workload estimates are subject to such extensive changes that they hamper Army depot planners’ ability to accurately forecast the number of required depot maintenance personnel. In discussing similar issues with Navy shipyard personnel, we noted that in April 1996, the Navy issued guidance to encourage shipyard customers to adhere to the workload plans established during the budget process. Navy leadership found that past weaknesses in workload forecasting contributed to inefficient use of depot resources, which led to higher future operating rates to compensate for previously underutilized shipyard personnel and facilities. After implementing a guaranteed workload program to stabilize work being assigned to naval shipyards, these activities report having 3 years of positive net operating results, after operating at a loss for over 5 years. 
Appropriated operations and maintenance funding for the depot-level maintenance business area—a key source of depot maintenance funding—is reprogrammed by the Army to a much greater extent than funds for other operations and maintenance appropriation business areas, creating challenging fluctuations in workload execution. Table 3.1 shows the amount of depot maintenance funding the Congress appropriated for fiscal years 1996, 1997, and 1998 and the amounts later reprogrammed to cover funding shortfalls in other programs. For comparison purposes, table 3.1 provides the same information for the balance of the Army’s operations and maintenance funding. As indicated, funds for depot maintenance were reprogrammed at a much higher rate than funds for the other operations and maintenance business areas. The non-depot maintenance business areas provide funding for civilian salaries and private sector contractor support—funds that the Army has generally considered must-pay expenses. In contrast, depot maintenance programs for in-house overhaul and repair can be terminated easily without cost to the government. Army officials explained that when depot orders are terminated, financial losses are recovered by charging higher rates to future customers. However, if contracted work is terminated for the convenience of the government, the government often has to pay for expenses incurred by the contractor. While Army officials stated that previous practices resulted in an inequitable distribution of funding transfers, they stated that they planned to conduct future reprogramming actions on a more equitable basis. Unanticipated funding transfers as a result of reprogramming actions have affected depot staffing and contributed to inefficient depot operations. For example, we estimate Army reprogramming actions moved funding that might have supported about 1,400 direct labor positions and 750 overhead positions in fiscal year 1996. 
Similarly, reprogramming actions in fiscal year 1997 moved funding that might have supported about 1,125 direct labor positions and 650 overhead positions. These reprogramming actions contributed to net operating losses in the years cited and higher rates in subsequent years. AMC holds semiannual workload conferences to review, analyze, and document depot workload estimates. Our work shows that the command’s estimates can differ significantly from reported spending, limiting their value in documenting personnel budgets and requirements. For example, in September 1994 the predecessor organization to the current Aviation and Missile Command estimated that in fiscal year 1997 it would generate workload requirements and provide funding to the Corpus Christi depot valued at about $161 million for the repair of aviation components. At the beginning of fiscal year 1997, the projected workload value for that year had decreased to $141 million—a 12-percent reduction. Moreover, the funded workload for that year was less than $94 million—a decline of 42 percent from the amount projected almost 3 years earlier. It is important to note that the rates for fiscal year 1997 were developed using the workload estimates projected in 1994. Partially as a result of the decreased workload, Corpus Christi did not receive the revenues it needed to break even. Losses for that year contributed to the need for increased rates in subsequent years. Army officials attributed the decline in forecasted workload to reduced workload requirements resulting from slower-than-expected customer revenues from the sales of repaired items and cash shortages in the Army’s working capital fund. Reductions in work typically result in underutilized personnel and can leave the depot with orders placed for long lead-time parts that are no longer needed as expected. 
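The percentage declines cited above for the Corpus Christi aviation-component workload can be reproduced directly from the dollar figures; a minimal check:

```python
# Reproduces the workload-decline percentages cited above for Corpus
# Christi aviation-component repair (dollar figures in millions).
ORIGINAL_ESTIMATE = 161.0  # September 1994 estimate for fiscal year 1997
REVISED_ESTIMATE = 141.0   # estimate at the beginning of fiscal year 1997
FUNDED_WORKLOAD = 94.0     # workload actually funded in fiscal year 1997

def pct_decline(original: float, later: float) -> float:
    """Percentage decline from the original estimate."""
    return (original - later) / original * 100

print(f"Revised estimate: {pct_decline(ORIGINAL_ESTIMATE, REVISED_ESTIMATE):.0f}% below original")
print(f"Funded workload:  {pct_decline(ORIGINAL_ESTIMATE, FUNDED_WORKLOAD):.0f}% below original")
```

Rounded to whole percentages, these calculations yield the 12-percent and 42-percent declines cited in the text.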
The workload expected from the Aviation and Missile Command, but not received, might have provided work for about 250 direct labor employees and 150 overhead employees for a year. Workload estimates for overhaul and repair requirements generated by the other military services have also been inconsistent. For example, in September 1995, the Navy estimated that it would provide fiscal year 1998 funding for the overhaul of 38 helicopters at the Corpus Christi depot. In May 1997, the Navy estimated that in fiscal year 1998 it would fund the overhaul of 22, but in October 1997, it estimated that the funded workload likely to materialize during fiscal year 1998 would support the overhaul of only 12. Navy officials told us the estimated helicopter overhaul requirements were reduced, in part, because the Army was unable to complete prior year funded repair programs within agreed time frames. Additionally, the Navy is exploring ways to have future overhaul and repair work done incrementally by either contractor or government employee field teams working at Navy bases. The Navy believes the incremental overhaul and repair process can be done more expeditiously. At this point, it is unclear what role the Corpus Christi depot will play in providing future overhaul and maintenance support for Navy helicopters. Figures 3.1 and 3.2 depict the fiscal years 1997 and 1998 funding estimates for the Corpus Christi Army Depot at various points in time. For example, at the start of fiscal year 1995, the Army anticipated that the Corpus Christi depot would receive fiscal year 1997 funding for workloads valued at $349 million. Two years later, at the start of fiscal year 1997, the estimate had increased to $355 million, compared to actual funding of $326 million. On the other hand, at the beginning of fiscal year 1996, the anticipated fiscal year 1998 workload for the depot was valued at about $302.5 million. 
At the beginning of fiscal year 1998, the anticipated total had risen to about $333.5 million, and in June 1998, estimates of revenues for the year were about $360.5 million. Depot officials pointed out that with these variances in workload, it is almost impossible to set accurate rates or to project with precision the number of employees needed to perform the required work. This experience at Corpus Christi illustrates the challenge depot planners face in projecting personnel requirements when workload estimates change considerably over the 30 months between the time rate-setting is initiated and the end of the fiscal year for which rates have been set. Similarly, under these conditions it is also difficult for budget personnel to set labor-hour rates that will generate the desired net operating result. As part of its overall depot maintenance strategy, the Army has established policies and procedures for assigning potential depot workloads to other government-owned repair facilities and the private sector. These practices have significant cost-effectiveness and efficiency implications for the depots, given the amount of excess industrial capacity that exists. First, AMC has authorized performance of depot-level workloads at government-owned repair sites located on and near active Army installations and at National Guard facilities. Second, Army policies and strategic plans emphasize the use of the private sector for depot-level maintenance workloads, within existing legislative requirements. In recent years the Army’s Forces Command and its Training and Doctrine Command have operated an increasing number of regional repair activities at active Army installations. Additionally, the Army National Guard operates regional repair activities at state-owned National Guard sites. Collectively, these repair activities are categorized as integrated sustainment maintenance (ISM) facilities. 
Sustainment maintenance includes repair work on Army equipment above the direct support level, including general support and depot-level support tasks. Accordingly, Army headquarters has allowed some ISM sites to perform depot-level workloads under special repair authorities. ISM repair sites are staffed by a mixture of military and civilian federal employees, state employees, and contractors. AMC officials stated that ISM repair sites can perform depot-level work to save transportation costs, expand employee skills and capabilities, and shorten repair cycle times. We noted that many of the items requiring depot repair are being shipped to other bases’ ISM repair sites under a center-of-excellence program that is designed to assign work to the most cost-effective repair source. We did work at Army ISM facilities located at Fort Campbell, Kentucky, and Fort Hood, Texas, and at an Aviation Classification Repair Activity Depot operated by the Connecticut National Guard. We noted that each facility was performing depot-level work that was similar, and sometimes identical, to work currently being conducted at the Corpus Christi Army Depot. For example, each repair site operated environmentally approved painting facilities large enough to strip and repaint an entire helicopter—a task also being conducted at the Corpus Christi depot. Further, the National Guard facility was refurbishing Blackhawk helicopters—a task identical to work currently assigned to the Corpus Christi depot. Additionally, each facility has recently undergone or will undergo expansion and modernization. For example, the Fort Hood repair facility, which was constructed in 1994 at a reported cost of about $60 million, is scheduled for further expansion, and the National Guard facility was recently doubled in size at an estimated cost of $20 million. ISM repair sites are not working capital fund activities. 
Repair work at these sites is financed through direct appropriations to the operational units, which obligate a level of funding at the beginning of the year. Field-level personnel believe they get better value for repair work that is performed at the unit level than at the depots and prefer to use field-level repair whenever they can. Continuing reliance on and expanded use of regional repair facilities for depot-level workloads could have a substantial impact on the future viability and efficiency of operations at the Army’s public sector depots. While the overall impact on the depots’ workloads has not been estimated, an AMC report shows that in fiscal year 1996, ISM and similar repair facilities received at least $51 million for depot-level tasks. AMC personnel told us they believe the actual amount of depot-level work is much higher because not all depot-level tasks and related work are reported. Further, DOD’s 1998 logistics strategic plan envisions the eventual elimination of the public depot infrastructure by expanding the use of regionalized repair activities across all levels of maintenance and contracting more workloads. Lastly, an AMC reorganization proposal suggests that the current Corpus Christi Army Depot functions could be transferred to the four National Guard Aviation Classification Repair Activity Depots. In commenting on a draft of this report, DOD stated that the Army approves Special Repair Authorities to enable regional repair facilities to conduct specific depot-level maintenance tasks for a specified number of items, after it evaluates the impact on depot workloads and core capabilities. However, our work shows that some Special Repair Authorities were granted for varying numbers of items to be repaired over prolonged time frames, creating some uncertainty over how well the long-term impact on depot workloads and core competencies may have been assessed. 
Some Army officials told us that Army reviewers have historically had little incentive to recommend disapproval of proposed Special Repair Authorities since they would likely be overruled by higher headquarters. More recently, Army headquarters officials told us they began to reject a number of proposed Special Repair Authorities and that they are undertaking a study to reevaluate the Special Repair Authorities process. DOD strategic plans and policies express a preference for assigning depot-level workloads to the private sector rather than public sector depots. Recent DOD policies and plans show that DOD expects to increasingly outsource depot maintenance activities, within the existing legislative framework. For example, the DOD logistics strategic plan for fiscal years 1996 and 1997 envisioned that DOD would develop plans to transition to a depot-level maintenance and repair system relying substantially on private sector support to the extent permitted under the current legislative framework. The 1998 plan states that DOD will pursue opportunities for eliminating public sector depot maintenance infrastructure through the increased use of competitive outsourcing. Further, in March 1998 we reported that, overall, DOD is moving to a greater reliance on the private sector for depot support of new weapon systems and major upgrades, reflecting a shift from past policies and practices, which generally preferred the public sector. In that regard, the Secretary of the Army has announced plans to pursue several pilot programs that would make the private sector responsible for total life-cycle logistics support, including depot-level maintenance and repairs. DOD policy also emphasizes the use of private sector contractors for modifications and conversions of weapon systems. For example, in August 1996 the Army awarded a multiyear contract for the upgrade of Apache Longbow helicopters. 
While it is difficult to predict the number of depot maintenance jobs affected by this policy, the Army Audit Agency reported in June 1998 that the Apache Longbow modification, conversion, and depot maintenance workload will likely involve from 2,063 to 2,998 personnel. In June 1998, the Secretary of the Army identified two weapon systems—the Apache helicopter and the M109 combat vehicle—as candidates to pilot test prime vendor support concepts. Under this concept, private sector firms would provide total life-cycle supply and maintenance support. It is uncertain if or when these prime vendor contracts will be awarded, or what impact they would have on the future workload and staffing of Army depots. We identified several factors contributing to depot inefficiency: (1) less-than-expected productivity, (2) excess depot capacity, (3) a lack of flexibility to shift workers among different functions, and (4) the nonavailability of parts. Additionally, we have previously reported that the Army’s current repair pipeline is slow and inefficient and could be improved by implementing various private sector best practices, several of which are being considered at the Corpus Christi depot. Although DOD’s productive workyear standard for depots was 1,615 hours, each of the Army depots reported productive levels below the standard for fiscal year 1997 (see table 2.3). Additionally, at the Corpus Christi depot, we noted that the hours required to complete depot maintenance projects exceeded the standard, which serves as the basis for payment, resulting in significant losses for that fiscal year. The most significant productivity problem at the Anniston depot appeared to be that the expected levels of work that had been programmed did not materialize, including work that was expected to transition from the Red River and Letterkenny depots as a result of BRAC decisions.
Anniston officials said they were reluctant to eliminate positions since the additional work was expected to show up during 1998. Thus, in the short term, the workforce did not have enough work to keep it fully employed. At Corpus Christi, the inability to complete work within scheduled time frames was a problem. As previously discussed, the use of large amounts of sick and annual leave, and more holiday leave than at other depots, contributed to this problem. At the same time, we noted that this depot used premium pay in the form of overtime to a much greater extent than other Army depots. We also noted that specific projects at Corpus Christi had consumed significantly more hours than projected, resulting in financial losses and schedule delays. For example, on average, depot employees charged 22,422 direct labor hours for each Seahawk helicopter repaired, compared to the projected goal of 12,975 hours per aircraft. In commenting on a draft of this report, DOD officials stated that this situation was caused by a variety of factors, including lack of access to Navy-managed parts, lack of experience with some Navy-unique systems, and the fact that Navy helicopters were in worse physical condition than most comparable Army helicopters being inducted for overhaul work. Cumulative financial losses on the completed overhaul and repair of 29 Navy Seahawk helicopters are estimated at about $40.1 million, and total reported losses on completed Navy helicopters exceed $80 million. Recognizing these problems, the Army has implemented a process reengineering plan to reduce the average repair cycle from the current 515 days to 300 days. As previously noted, the Navy is considering shifting repair work to field teams at Navy units. Since Navy work is about 30 percent of Corpus Christi’s workload, the depot could lose 400 to 500 direct labor positions, and its estimated future operating rates could increase by about $20 per hour.
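Because customers pay on the basis of the projected standard hours, hours consumed beyond that standard translate directly into depot losses. A rough sketch of the relationship, using the Seahawk figures above (the composite hourly cost rate is an assumed illustrative figure, not an actual depot rate):

```python
def overrun_loss(actual_hours, standard_hours, units, cost_rate):
    """Loss absorbed by a depot when work consumes more hours than customers pay for."""
    excess_per_unit = actual_hours - standard_hours
    return excess_per_unit * units * cost_rate

# Seahawk figures from the report: 22,422 actual vs. 12,975 projected direct
# labor hours per aircraft, across 29 completed overhauls. The $145/hour
# composite rate is hypothetical, chosen only for illustration.
loss = overrun_loss(22_422, 12_975, 29, 145)
print(f"Estimated loss: ${loss:,.0f}")
```

At an assumed composite rate near $145 per hour, the labor overrun alone approaches the roughly $40.1 million cumulative loss the report cites, which suggests hour overruns were the dominant driver of those losses.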
Similarly, time charged against the overhaul of the T-53 engines used on the Huey helicopter was about 52,000 direct labor hours for 60 engines, compared to the projected goal of about 23,000 hours. The Army is considering plans to contract with the private sector for the performance of this work. At this time, it is uncertain what role, if any, the depot will have in future T-53 engine repair programs. While the Army has not clearly articulated its long-range plans for its five depots, in the past it has stated that only three are needed, and more recent actions suggest that number may be even smaller. As discussed in a 1996 report, each of the five remaining depots has large amounts of underutilized production capacity that require substantial financial resources to support. For example, the Army recently reported that its depots have the capability to produce about 16 million hours of direct labor output, given the current plant layout and available personnel. The report also states that in fiscal year 1998 the depots will produce an estimated 11 million hours of direct labor output, meaning that about 68 percent of the available plant equipment and personnel are utilized on a single-shift, 40-hour week. Further, the depots are capable of producing even greater amounts of work. Until recently, no attempt had been made to look at maintenance capability from a total Army perspective, including capability at the field level and in the National Guard. In commenting on a draft of this report, DOD cited several examples of efforts under way to analyze maintenance requirements from a total Army perspective. For example, the ISM concept is designed to integrate and coordinate maintenance provided by active Army units, Army Reserve activities, and Army National Guard installations. In addition, DOD stated that the Army will establish a Board of Directors to manage and coordinate depot-level maintenance from a total Army perspective.
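The utilization figure cited above follows from a simple ratio of produced to available direct labor hours. A quick sketch using the reported fiscal year 1998 figures:

```python
def utilization(produced_hours: float, available_hours: float) -> float:
    """Share of available plant and personnel capacity actually used."""
    return produced_hours / available_hours

# Reported figures: about 16 million hours of single-shift, 40-hour-week
# capacity, against an estimated 11 million hours of output in FY 1998.
available = 16_000_000
produced = 11_000_000

rate = utilization(produced, available)
print(f"Capacity utilization: {rate:.0%}")  # consistent with the roughly 68 percent reported
```

The same ratio, applied to the arsenal figures later in this report, is what underlies the Watervliet (17 percent) and Rock Island (24 percent) utilization estimates.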
Improved systems and procedures for shifting workers between organizational units and skill areas would offer better opportunities to effectively use limited numbers of maintenance personnel. Depot officials noted that prior practices made it difficult to transfer workers between organizational units and skill areas to adjust for unanticipated work stoppages caused by changes in work priorities, parts shortages, technical problems, or temporary labor imbalances. For example, in late 1997 work was suspended on repair of the T-53 engine at Corpus Christi due to a safety-of-flight issue, but personnel in that shop were not reassigned to other areas whose work was behind schedule. Depot workers are trained in specific technical areas and perform work within their specific specialty codes and organizational units. Agreements between the unions and the depots generally require that workers be assigned work only in their specialty areas; therefore, depot managers have limited ability to move workers to other areas. Depot managers noted that, in some cases, a worker could work in another area under the direction of a qualified specialist in the second skill area. Union officials at one depot stated that members understand the benefits of more flexible work agreements but in the past have been reluctant to adopt them. Depot managers cited a number of ongoing efforts that should, in the future, lead to more effective use of skilled depot workers. For example, depot managers said they were encouraging their workers to take courses during their off-duty time to develop multiple skills. Further, depot officials said completion of an ongoing AWPS enhancement project will provide an automated database reflecting the specific skills of each depot employee, facilitating identification of workers with the skills needed to meet short-term labor imbalances.
Lastly, depot managers are considering changes to organizational structures to better facilitate movement of skilled workers between shops. In discussing this issue with Navy officials, we were told that when the Navy transferred civilians from the Pearl Harbor Shipyard to an intermediate activity at the same location, it implemented a program known as multi-crafting, or multi-skilling, through which workers trained in a second, complementary skill area so that they were qualified to do more tasks. Workers in seven different workload combination areas were involved in the program and received training in multiple skill areas. In the rubber and plastics forming skill area, cross-trained workers received a pay raise in addition to the satisfaction of knowing they were multi-skilled and more valuable employees. Maintenance facility managers said that the added flexibility of multi-skilling allowed them to use a limited number of workers more cost-effectively and to be more responsive to emerging requirements. While we have not evaluated the extent to which the use of multi-crafting and multi-skilling has improved the efficiency of the Navy’s combined operations, in concept it is in line with best practices employed by the private sector and appears to have merit. In commenting on a draft of this report, DOD stated that the Army’s direct labor personnel can become multi-skilled with the support of the labor unions. It noted that while depot managers have the right to assign employees to specific work areas, they need to work with labor organizations to adopt more flexible work arrangements through collective bargaining or other partnering arrangements. Parts shortages have also contributed to inefficient depot operations. For example, we previously reported on the length of time it took to repair and ship parts, and an Army consultant recently reported that repair technicians spend as much as 40 percent of their time looking for required parts.
Army depots obtain parts from a variety of sources, including the Defense Logistics Agency, inventory control points operated by the military services, the private sector through local purchases, and limited depot manufacturing. Since Army procedures give parts orders from operational units and field-level repair activities higher processing priority, parts shortages are more likely to occur at the depot level. Further, parts shortage problems could increase as a result of a recent AMC headquarters decision to eliminate parts inventories that have been procured for future depot use. For example, Corpus Christi maintains an inventory with a reported value of about $37 million for emergent work. AMC plans to have the depots turn in the material without receiving a financial credit, a process that could cause the depots to report a financial loss equaling the inventory’s value. Officials at the Corpus Christi depot expressed concern that, without this inventory, their access to aviation parts, especially those with long order lead times, will deteriorate further, as will their ability to complete their work in a timely manner. According to a Corpus Christi official, depot workers waited an average of 144 days from the time they placed requisitions with the Defense Logistics Agency until orders were received. Additionally, a large number of requisitions placed by the Corpus Christi Army Depot for parts managed by a Navy-operated inventory control point were initially rejected because the automated requisition processing system had not been modified to recognize the Army depot as a valid customer. Although depot supply support depends largely on external sources, the Corpus Christi Army Depot has taken actions to address the inefficiencies in the portions of the process it controls.
For example, a recent study by an Army consultant concluded that the material management process costs the depot an estimated $19 million per year and that a large percentage of these costs represents nonvalue-added time spent handling, sorting, retrieving, inspecting, testing, and transporting parts between various local storage locations. A depot official estimated that the process reengineering plan, initiated in May 1997, will reduce these administrative costs by $10 million. Some of the initiatives include reducing (1) the average time required to obtain parts from the local automated storage and retrieval system from 12 to 4 days, (2) the time required to complete local purchase actions from 121 to 35 days, and (3) the number of days to complete local credit card purchases from 49 to 10 days. Even though the Army has made progress in building an automated and more rigorous process for analyzing and documenting personnel requirements, important enhancements remain to be completed. Moreover, other severe problems—including significant fluctuations in funding, rising costs, and continued losses in the Army’s military depots—create much instability and uncertainty about the effectiveness and efficiency of future depot operations. Some reductions in the amount of work assigned to the military depots have occurred while such work performed by private sector contractors has increased. Further, by adding to its maintenance infrastructure at Army operational units in the active and guard forces and performing depot-level and associated maintenance at those locations, the Army has been adding to the excess capacity, underutilization, and inefficiency of its depots. The extent and financial impact of this situation are unknown. However, the Army is clearly suboptimizing the use of its limited support dollars, and efforts are needed to minimize duplication and reduce excess infrastructure.
The Army needs to adopt reengineering and productivity improvement initiatives to help address critical problems in existing depot maintenance programs, processes, and facilities. We recommend that the Secretary of Defense require the Secretary of the Army to (1) establish policy guidance encouraging AMC customers to adhere to workloading plans, to the extent practicable, once they are established and used as a basis for the development of depot maintenance rates; (2) require reevaluation of special repair authority approvals to accomplish depot maintenance at field activities to determine the appropriateness of prior approvals, taking into consideration the total cost to the Army of underutilized capacity in Army depots; (3) encourage depot managers to pursue worker agreements to facilitate multi-skilling or multi-crafting in industrial facilities; and (4) direct the depot commanders to develop specific milestones and goals for improving worker productivity and reducing employee overtime rates. DOD concurred with our recommendations and described several steps being taken to address them. For example, AMC recently reemphasized the importance of realistic and stabilized workload estimates to optimize depot capacity utilization, stabilize operating rates, and support future personnel requirements determinations. DOD stated that it recently initiated “A Study of the Proliferation of Depot Maintenance Capabilities,” which will include an examination of the current approval process for Special Repair Authority requests. DOD also stated its intention to work in concert with the Army and other services to pursue efforts to eliminate excess industrial capacity through future BRAC rounds and facilities consolidation. DOD concurred with our recommendation to pursue multi-skilling or multi-crafting but stated that such arrangements require implementation by individual depot managers. We have revised our recommendation accordingly.
While DOD agreed with our recommendation to develop milestones and goals for improving the efficiency of its depot operations, including reductions in employee overtime rates, it did not specify what actions were planned. We also incorporated technical comments where appropriate. The Army plans to begin installing the new AWPS in its manufacturing arsenals in December 1998. However, it is not clear how effective the system will be in identifying the arsenals’ personnel requirements—given the uncertainty surrounding their future workload requirements. The arsenals are also confronted with larger problems and uncertainties that could diminish the effectiveness of the Army’s efforts to automate the process of determining workforce requirements, stabilize its workforce, and increase productivity. These facilities have experienced significant workload reductions as a result of defense downsizing and increased reliance on the private sector. However, commensurate reductions have not been made to arsenal facilities. The arsenals have sought to diversify to improve the usage of available capacity and reduce their overhead costs, but limitations exist on their ability to do so. The Army is considering converting its two arsenals to government-owned, contractor-operated facilities. However, key questions, such as the cost-effectiveness and efficiency of this option, remain unanswered. The Army plans to install AWPS in its two weapons manufacturing arsenals beginning in December 1998 and to complete that installation by September 1999. In June 1998, the Army began installing a prototype AWPS at one of its eight ammunition storage and surveillance facilities. Upon completion of the prototype testing, the Army plans to extend the system to the two weapons manufacturing arsenals.
Since the end of the Cold War, workloads and employment at the two remaining arsenals have declined substantially; however, operating costs have continued to escalate as fixed costs have been spread among increasingly smaller amounts of workload. Additionally, personnel reductions have not kept up with workload reductions. At Rock Island, the workload dropped a reported 36.9 percent between 1988 and 1997 while the staffing dropped 30.8 percent. At Watervliet the reported workload dropped 64 percent during the same period while staffing dropped 51.8 percent. As workloads continue to decline, the arsenals have been left with relatively fixed overhead costs, including the salary expenses for an increasing percentage of overhead employees. For example, as of fiscal year 1998, the Watervliet Arsenal reported employing 409 direct labor “revenue producers” and 473 overhead employees compared with 1,089 direct labor workers and 924 overhead employees reported 10 years ago. Table 4.1 compares the arsenals’ workloads in direct labor hours and employment levels at the end of fiscal years 1988 through 1997 and projections for fiscal year 1998. Currently, the arsenals are using only a small portion of their available manufacturing capacity in the more than 3.3 million square feet of reported industrial manufacturing space. An arsenal official estimated that as of April 1998 the Watervliet facility was utilizing about 17 percent of its total manufacturing capacity—based on a single 8-hour shift, 5-day workweek—compared with about 46 percent 5 years ago and about 100 percent 10 years ago. Similarly, as of July 1998, officials at the Rock Island Arsenal estimated the facility was utilizing about 24 percent of its total manufacturing capacity compared with about 70 percent 5 years ago and about 81 percent 10 years ago. Underutilized industrial capacity contributes to higher hourly operating rates. 
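The link between underutilization and rising hourly rates is simple fixed-cost absorption: overhead that does not shrink with workload must be recovered over fewer direct labor hours. A stylized sketch of that mechanic (all dollar figures are hypothetical, not the arsenals’ actual cost data):

```python
def hourly_rate(fixed_overhead, variable_cost_per_hour, direct_hours):
    """Composite hourly rate needed to recover costs over the hours actually sold."""
    return variable_cost_per_hour + fixed_overhead / direct_hours

# Hypothetical arsenal: $40 million of fixed overhead and $60/hour of
# variable cost. Halving the direct labor base raises the rate that
# must be charged to customers, even though nothing else changed.
rate_before = hourly_rate(40_000_000, 60, 1_000_000)  # $100 per hour
rate_after = hourly_rate(40_000_000, 60, 500_000)     # $140 per hour
print(rate_before, rate_after)
```

This dynamic is self-reinforcing: as noted elsewhere in this chapter, higher rates can drive customers away, which shrinks the direct labor base further and pushes rates still higher.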
Over the last 10 years, the hourly rates charged to customers increased by about 88 percent at Watervliet and about 41 percent at Rock Island. The Arsenal Act (10 U.S.C. 4532), enacted in 1920, provides that the Army is to have its supplies made in U.S. factories or arsenals, provided those facilities can make the supplies on an economical basis. The act further provides that the Secretary of the Army may abolish any arsenal considered unnecessary. The importance of the arsenals as a manufacturing source has declined over time. The declining workload noted in table 4.1 reflects both defense downsizing in recent years and increased reliance on the private sector to meet the government’s needs. In recent years, the Army has pursued a policy of contracting out as much manufacturing work as possible to the private sector. When work was plentiful for both the arsenals and the private sector during the Cold War years, the allocation of work in accordance with the Arsenal Act was not an issue. However, the overall decline in defense requirements since the end of the Cold War has substantially reduced the amount of work needed. When making decisions based on the Arsenal Act, the Army compares public and private sector manufacturing costs to determine whether supplies can be economically obtained from government-owned facilities—a process referred to as “make or buy.” The comparison is based on the arsenals’ marginal, or additional out-of-pocket, costs associated with assuming additional work. However, the arsenals report little use of the “make or buy” process. For example, Watervliet reported that it has not participated in a “make or buy” decision since 1989 and has not received any new work through the Arsenal Act since at least then. Rock Island officials could identify only one item for which the arsenal received new work through the Arsenal Act in recent years. Officials at both arsenals said they do not expect to receive any future work as a result of “make or buy” analyses.
As their workloads have declined, the arsenals have become less efficient, because each remaining direct labor job must absorb a greater portion of the arsenals’ fixed costs. As noted earlier, rates charged to customers have increased significantly in recent years at both arsenals. Some efforts have been made to diversify into other manufacturing areas to better use excess capacity and reduce costs, but limitations exist. AMC headquarters has proposed converting the two arsenals to government-owned, contractor-operated (GOCO) facilities. However, key questions—such as how much of this type of capacity is needed and the cost-effectiveness of the various alternatives—remain unanswered. Unlike maintenance depots, where workload is largely centrally allocated by Army headquarters, arsenal managers market their capabilities to identify potential military customers and workloads. Like private sector businesses, the arsenals recover operating expenses through revenue-producing sales of products. However, as their volume of work declines, the arsenals must either reduce costs or increase prices to customers. If prices are increased, customers may go elsewhere to satisfy their needs, further exacerbating the declining workload problem. Recent proposals by the Watervliet Arsenal to balance workload and staffing were disapproved by Army headquarters in anticipation of new workloads. However, Watervliet officials stated that, as of October 1998, no new work had materialized and none was expected. This lack of new work could result in greater losses than planned at that facility. Each year arsenal personnel estimate the amount of work they expect to receive and then use this information as a basis for projecting personnel requirements. The expected workload is divided into various categories based on the estimated probability of the workload actually materializing. Work that is already funded is categorized as 100 percent certain.
Unfunded work is categorized based on its considered probability of becoming firm. Watervliet, for example, uses three probability categories for unfunded workloads: 90, 60, and 30 percent. Staffing is then matched to the workload probability. Staffing needs for fully funded work and work with a 90-percent probability are allocated at 100 percent of the direct labor hour requirements. Staffing requirements for the remaining work are allocated in accordance with the workload probabilities. In October 1997, AMC headquarters gave Watervliet approval to eliminate 98 positions by the end of fiscal year 1998. Also, on the basis of an expected decline in workload in fiscal year 1998, AMC headquarters gave the Rock Island Arsenal approval in May 1998 to eliminate 237 positions, for a total arsenal workforce reduction of 335 positions. Employees who voluntarily retire or resign will receive incentive payments, based on a varying scale with a maximum payment of $25,000. These incentives were intended to reduce the number of employees facing involuntary separations. By the end of September 1998, 54 Watervliet and 146 Rock Island employees had accepted incentive offers. As an additional incentive to encourage voluntary separations, the arsenals received authority in August 1998 to offer early retirements to eligible employees. Both arsenals have tried to develop new areas of work because their traditional weapon-making roles no longer provide enough work to allow them to operate efficiently. For a number of years, Rock Island has been fabricating and assembling tool kits, maintenance trucks, and portable maintenance sheds for the Army, other military services, and civilian agencies. Rock Island personnel involved in this work made up about 22 percent of the arsenal’s total employment in fiscal year 1998.
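The probability-based staffing method described above amounts to a weighted sum over workload categories: funded work and 90-percent work are staffed at the full direct labor hour requirement, and lower-probability work in proportion to its probability. A minimal sketch (the hour figures are hypothetical; Watervliet’s actual planning numbers were not available to us):

```python
def staffing_hours(funded, prob_buckets):
    """Direct labor hours to staff for, per the arsenal planning method.

    funded       -- hours of already-funded (100 percent certain) work
    prob_buckets -- {probability: unfunded hours}; 90-percent work is
                    staffed fully, lower buckets in proportion to probability.
    """
    total = funded
    for prob, hours in prob_buckets.items():
        weight = 1.0 if prob >= 0.90 else prob
        total += hours * weight
    return total

# Hypothetical planning year: 200,000 funded hours plus unfunded work in
# Watervliet's 90/60/30 percent probability categories.
hours = staffing_hours(200_000, {0.90: 50_000, 0.60: 40_000, 0.30: 20_000})
print(f"{hours:,.0f} direct labor hours")  # about 280,000 hours
```

The method deliberately over-staffs relative to a pure expected-value calculation (90-percent work is staffed at 100 percent), which helps explain the losses described above when anticipated work fails to materialize.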
Watervliet has tried to branch out into making propulsion shafts for Navy ships and has done contract work for private industry, making such things as ventilator housings and other metal fabrication items. The Rock Island facility is still selling exclusively to government customers. Under 10 U.S.C. 4543, the arsenals cannot sell items to commercial firms unless a determination is made that the requirement cannot be satisfied from a commercial source located in the United States. However, section 141 of the 1998 Defense Authorization Act provides for a pilot program enabling industrial facilities, including arsenals, to sell to private sector firms, during fiscal years 1998 and 1999, articles that are ultimately incorporated into a weapon system being procured by DOD, without first determining that the manufactured items are not available from commercial U.S. sources. As part of the Army’s plan to reduce personnel positions under the Quadrennial Defense Review, the Army plans to study the costs and benefits of converting the arsenals to GOCO facilities. AMC plans to initiate commercial activity studies for converting arsenal operations in fiscal year 1999. These studies will be conducted under the guidelines specified by OMB Circular A-76. According to an AMC official, the Army has determined that the government should retain ownership of the arsenals; however, operational responsibility could be assigned to a private sector contractor. As a first step in the process, the arsenals are to develop proposed staff structures documenting the government’s most efficient operating strategy, and commercial offerors will be asked to submit proposals for operating the government-owned facilities. A source-selection panel will compare the government’s proposal with offers from private sector contractors.
According to an AMC official, if the source-selection panel determines that a private sector offeror would provide the most cost-effective solution, nearly all remaining government employees at the arsenals would be terminated by 2002. If recent workload declines and the consequent workforce reductions at the Rock Island and Watervliet arsenals continue, the long-term viability of these facilities is uncertain. Arsenal workloads have declined to the point that, even with significant personnel losses, their capabilities are significantly underutilized and greatly inefficient. An important part of the future decision-making process will be analyzing the cost efficiency of government-owned and -operated facilities compared to that of GOCO facilities. If retention of a government-owned and -operated facility is found to be the most cost-effective option, then decisions will be needed to adjust capacity to better match projected future workload requirements. We recommend that the Secretary of Defense require the Secretary of the Army to (1) assess the potential for improving capacity utilization and reducing excess arsenal capacity and (2) evaluate options for reducing costs and improving the productivity of the remaining arsenal capacity. DOD concurred with each of our recommendations. It agreed that the Watervliet and Rock Island Arsenals currently support considerable amounts of excess manufacturing capability and stated that both facilities are included in current AMC plans to conduct a complete installation A-76 review to identify the most cost-effective option for future operations, including an evaluation of options for reducing costs and improving productivity. Taken together, the issues discussed in this report convey a broader and more complex message about the effect of unresolved problems on the future of industrial operations currently performed within the Army.
It also affects the cost-effectiveness of support programs for current and future weapon systems. These problems include the need to (1) clearly identify the workload requirements if capabilities are to be maintained in-house, (2) consolidate and reengineer functions and activities to enhance productivity and operating efficiencies, and (3) reduce excess capacity. Resolution of these problems requires that they be considered within the legislative framework pertaining to industrial operations. We have previously cited the need for improved strategic planning to deal with logistics operations and infrastructure issues, such as those affecting the Army’s industrial facilities. The Army faces difficult challenges in deciding what, if any, depot-level maintenance and weapons manufacturing workloads need to be retained in-house to support national security requirements. The 1998 DOD Logistics Strategic Plan states that, in the future, DOD will advocate the repeal of legislative restrictions on outsourcing depot maintenance functions by developing a new policy to obtain the best value for DOD’s depot maintenance funds while still satisfying core capability requirements. Until DOD and the Congress agree on a future course of action, it will be difficult to plan effectively for dealing with other issues and problems facing DOD and the Army’s maintenance programs and systems. If the decision is made to retain certain amounts of in-house depot and arsenal capabilities, it will be important to look at the overall maintenance infrastructure, including below-depot as well as depot-level maintenance requirements in the active as well as reserve forces, to ensure that the minimum level that meets overall military requirements is retained. Consolidation of existing activities, to the extent practicable within the constraints of operational requirements, will be essential for developing a more efficient and cost-effective support operation.
Further, improvement initiatives to address long-standing productivity issues are key to providing the required maintenance capability at the least cost. Finally, the elimination of excess capacity—in both the public and the private sector—is another critical area that, if not addressed, will continue to adversely affect the cost of Army programs and systems. A number of statutes govern the operations of Army depots and arsenals. For example, 10 U.S.C. 2464 provides for a DOD-maintained core logistics capability that is to be Government-owned and Government-operated and that is sufficient to ensure the technical competence and resources necessary for an effective and timely response to a mobilization or other national emergency. 10 U.S.C. 2466 prohibits the use of more than 50 percent of the funds made available in a fiscal year for depot-level maintenance and repair work to contract for the performance of that work by nonfederal personnel; the definition of depot-level maintenance and repair is set forth in 10 U.S.C. 2460. 10 U.S.C. 2469 provides that DOD-performed depot-level maintenance and repair workloads valued at $3 million or more cannot be changed to contractor performance without the use of competitive procedures for competitions among public and private sector sources. 10 U.S.C. 2470 provides that depot-level activities are eligible to compete for depot-level maintenance and repair workloads. Lastly, 10 U.S.C. 4532 requires that the Army have its supplies made in factories and arsenals of the United States, provided that they can produce the supplies on an economical basis. DOD has stated that its depot maintenance initiatives would continue to operate within the framework of existing legislation. On the other hand, it has, in the past, sought repeal of these and other statutes and has stated in the DOD Logistics Strategic Plan that it will continue to pursue this option.
For several years, we have stated that DOD should develop a detailed industrial facilities plan and present it to the Congress in much the same way that it presented its force structure reductions in the Base Force Plan and the Bottom-Up Review. Our observations regarding the need for a long-term plan for Army industrial facilities parallel observations we made in our February 1997 high-risk report on infrastructure. In that report, we credited DOD for having programs to identify potential infrastructure reductions in many areas. However, we noted that the Secretary of Defense and the service secretaries needed to give greater structure to these efforts by developing a more definitive facility infrastructure plan. We said the plan needed to establish milestones and time frames and identify the organizations and personnel responsible for accomplishing fiscal and operational goals. Presenting the plan to the Congress would provide a basis for the Congress to oversee DOD’s plan for infrastructure reductions and allow the affected parties to see what is going to happen and when. The need for such a plan is even more important given that the issue of eliminating excess capacity in the industrial facility area is likely to raise questions about DOD’s ability to accomplish this objective absent authority from the Congress for additional BRAC rounds. While the Congress has not approved additional BRAC rounds, mainly due to concerns about costs and savings, the timing of new rounds, and other issues, it has asked DOD to provide information concerning the amount of excess capacity on its military installations and the types of military installations that would be recommended for closure or realignment in the event of one or more additional BRAC rounds. DOD’s report to the Congress on this subject provided most, but not all, of the information requested.
While this report indicates that significant excess capacity remains in the Army’s industrial facilities, more needs to be done to fully identify the extent of excess facilities before any future BRAC round. In particular, the services must identify opportunities to share assets, consolidate workloads, and reduce excess capacity in common support functions so that up-front decisions can be made about which service(s) will be responsible for which functions. We noted that resolution of these issues would require strong, decisive leadership by the Secretary of Defense. In another 1997 report, we recommended that the Secretary of Defense require the development of a detailed implementation plan for improving the efficiency and effectiveness of DOD logistics infrastructure, including reengineering, consolidating, and outsourcing logistics activities where appropriate and reducing excess infrastructure. In response, the Secretary of Defense stated that DOD was preparing a detailed plan that addressed these issues. In November 1997, the Secretary issued the Defense Reform Initiative Report, which contained the results of the task force on defense reform established as a result of the Quadrennial Defense Review. While this report was a step in the right direction and set forth certain strategic goals and direction, it did not provide comprehensive guidance. Further, the report did not resolve long-standing questions concerning what work in the depots and arsenals is of such importance that it should be performed in-house. Sorting out this issue becomes even more complicated when one introduces the prospect of moving toward government-owned, contractor-operated (GOCO) facilities, which seem to fall somewhere between a pure in-house operation and a totally contracted-out one.
Also, for the depots, existing policies do not address the proliferation of depot-like facilities at regional repair sites, within both the active and reserve components, and the impact that this proliferation has on excess capacity and on increased costs to the government for its total maintenance activities and infrastructure. Uncertainties exist about the future economy and efficiency of depot and arsenal operations and the extent to which the functions they perform need to be performed by the government. In this context, recent experiences at the Army’s maintenance depots and arsenals indicate that the Army faces multiple, difficult challenges and uncertainties in determining staffing requirements and in improving the efficiency and effectiveness of its industrial activities. Further, the Army’s industrial facilities currently have significant amounts of excess capacity, a problem aggravated by the proliferation of maintenance activities below the depot level that overlap with work being done in the depots. Increased use of contractor capabilities without reducing excess capacity also affects this situation. Productivity limitations suggest the need to reengineer operations retained in-house to enable Army industrial activities to operate more economically and efficiently. The Army lacks adequate long-range plans to deal with issues such as those currently affecting its industrial facilities. Such a plan would need to be developed in consultation with the Congress and within the applicable legislative framework in an effort to reach consensus on a strategy and implementation plan. We continue to believe such an effort is needed if significant progress is to be made in addressing the complex, systemic problems discussed in this report.
We recommend that the Secretaries of Defense and the Army determine (1) the extent to which the Army’s logistics and manufacturing capabilities are of such importance that they need to be retained in-house and (2) the extent to which depot maintenance work is to be done at regular depots rather than at lower-level maintenance facilities. We recommend that the Secretary of the Army develop and issue a clear and concise statement describing a long-range plan for maximizing the efficient use of the remaining depots and arsenals. At a minimum, the plan should include requirements and milestones for effectively downsizing the remaining depot infrastructure, as needed, and an assessment of the overall impact of competing plans and initiatives that advocate increased use of private sector firms and regional repair facilities for depot-level workloads. If a decision is made to retain in-house capabilities, we also recommend that the Secretary of the Army develop a long-term strategy, with shorter-term milestones, for improving the efficiency and effectiveness of Army industrial facilities that would, at a minimum, include the recommendations stated in chapters 2 through 4 of this report. DOD concurred with each of our recommendations and discussed actions it has completed, underway, or planned, as appropriate, for each recommendation. Among the key actions that DOD identified are:

- a study to assess the Army’s overall maintenance support infrastructure, including its five depot-level repair activities and the recently expanded regional repair facilities, to determine what functions need to be retained in-house;
- establishment of a board of directors to oversee and manage the Army’s total maintenance requirements process, including the allocation of work to in-house and contractor repair facilities; and
- development of a 5-year strategic plan for maximizing the efficient use of remaining maintenance depots and manufacturing arsenals.
Fully implemented, these actions should lead to substantial improvements in the economy and efficiency of Army depot and arsenal operations.
Pursuant to a congressional request, GAO reviewed: (1) the Army's basis for personnel reductions planned at its depots during fiscal years 1998-1999; (2) the Army's progress in developing an automated system for making maintenance depot staffing decisions based on workload estimates; (3) factors that may impact the Army's ability to improve the cost-effectiveness of its maintenance depots' programs and operations; and (4) workload trends, staffing, and productivity issues at the Army's manufacturing arsenals. GAO noted that: (1) the Army did not have a sound basis for identifying the number of positions to be eliminated from the Corpus Christi Depot; (2) this was particularly the case in determining the number of direct labor personnel needed to support depot workload requirements; (3) Army efforts to develop an automated workload and performance system for use in its depots have proceeded to the point that required certification to Congress of the system's operational capability is expected soon; (4) however, system improvements that are under way would enhance the system's capabilities for determining indirect and overhead personnel requirements in Army depots; (5) other issues and factors affecting the Army's basis for workload forecasting or the cost-effectiveness of its depot maintenance programs and activities are: (a) an increased reliance on the use of regional repair activities and contractors for work that otherwise might be done in maintenance depots; (b) declining productivity; (c) difficulties in effectively using depot personnel; and (d) nonavailability of repair parts; (6) use of the arsenals has declined significantly over the years as the private sector has assumed an increasingly larger share of their work; (7) according to Army officials, as of mid-1998, the Army's two weapons manufacturing arsenals used less than 24 percent of their industrial capacity, compared to more than 80 percent 10 years ago; and (8) the Army's depots and arsenals face
multiple challenges and uncertainties, and the Army has inadequate long-range plans to guide its actions regarding its industrial infrastructure.
Headed by a presidentially appointed and Senate-confirmed director, NIH comprises 27 ICs and an Office of the Director. NIH’s ICs both conduct and support biomedical research specific to their unique missions, which generally focus on a specific disease, a particular organ, or a stage in life (e.g., childhood). Each of the ICs has its own director and staff, as well as its own advisory council or board, which helps to support and oversee the IC’s work. Within the Office of the Director are offices responsible for issues, programs, and activities that span NIH components, particularly research initiatives and issues involving multiple ICs. NIH’s biomedical research that focuses on humans—its clinical research studies—includes clinical trials of biomedical or behavioral interventions such as new drugs, medical treatments, and surgical procedures and devices. Clinical trials are divided into four phases. In phase I clinical trials, which typically include 20 to 80 people, researchers test a new biomedical or behavioral intervention on human subjects for the first time to evaluate safety. In phase II clinical trials, the intervention is given to a larger group of people, 100 to 300 participants, to further evaluate efficacy and safety. In phase III clinical trials, the intervention is given to even larger groups—from several hundred to several thousand participants—to compare the intervention to commonly used or experimental interventions. Finally, phase IV studies are conducted after the intervention has been marketed, in order to gather information on long-term use. NIH’s ICs support clinical trials predominantly through “extramural research”—awarding funds to researchers at universities or other research entities (awardees) through grants, contracts, and cooperative agreements. Of NIH’s 27 ICs, almost all fund extramural research projects. These ICs use a standard peer review process to inform the final decisions on which extramural research projects to fund. 
The size and composition of the ICs’ clinical trial portfolios vary substantially, depending on such factors as the IC’s budget, mission, and the scientific goals of any given study. For example, some ICs support few if any phase III clinical trials. In fiscal year 2014, NIH’s ICs reported funding of nearly $30 billion for all biomedical research. Of that amount, NIH estimates that—based on reporting categories HHS developed for use by all of its agencies—$23.9 billion (80.3 percent) funded research related to the health of both women and men, $4 billion (13.2 percent) funded research related to women’s health, and an estimated $1.9 billion (6.4 percent) funded research related to men’s health. (See Appendix II for more details on estimated fiscal year 2014 funding for selected diseases and conditions of particular relevance for women.) To determine these amounts, NIH annually assigns its research funding to certain women’s health disease and condition categories—such as breast cancer or heart disease. Additionally, NIH classifies this funding as either related to the health of both sexes or as supporting research on women’s health only or men’s health only by using HHS’s calculation guidelines. NIH reports these funding estimates in HHS’s annual congressional budget justification and in ORWH’s biennial Report of the Advisory Committee on Research on Women’s Health. The Revitalization Act required NIH to ensure the appropriate inclusion of women in NIH-funded clinical research, including clinical trials. The Revitalization Act contained provisions that required NIH to, among other things, ensure that women are included in all NIH-funded clinical research, and report on compliance with the inclusion provisions of the Act.
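The fiscal year 2014 funding split reported above can be sanity-checked with a short calculation. The dollar figures are the rounded amounts quoted in the text, so the computed shares differ slightly from NIH's published percentages (80.2 versus 80.3, 13.4 versus 13.2); the discrepancies come from rounding the dollar amounts to the nearest $0.1 billion.

```python
# Fiscal year 2014 NIH funding categories, in billions of dollars,
# as rounded in the report text above.
funding = {
    "both sexes": 23.9,
    "women's health": 4.0,
    "men's health": 1.9,
}

total = round(sum(funding.values()), 1)   # 29.8, i.e., "nearly $30 billion"
shares = {k: round(100 * v / total, 1) for k, v in funding.items()}
print(total, shares)
```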
Additionally, the Revitalization Act requires NIH to ensure that clinical trials are designed and carried out in a manner sufficient to allow for valid analysis of the extent to which the outcomes measured in the trial affect women differently than men. NIH subsequently determined that this particular requirement only applied to phase III clinical trials, as they are the largest clinical trials involving human subjects and the closest to effecting broad changes in public health policy and standards of care. The Revitalization Act also directed NIH to develop guidelines for including women in clinical research and report biennially to Congress on NIH’s compliance with this policy. The resulting Inclusion Policy, the current version of which has been in place since October 2001, requires NIH applicants conducting clinical research to

1. design research plans that detail the breakdown of their studies’ participants by sex and provide a rationale for their planned enrollment for all clinical research studies; and
2. include plans for analyzing outcomes for potential sex differences for NIH-defined phase III clinical trials, when appropriate, as determined by consideration of prior scientific evidence.

In addition to NIH ICs, several NIH offices play a role in implementing the Inclusion Policy, particularly ORWH, which was established in 1990—and codified in the Revitalization Act—to promote women’s health research, and OER, which administers and manages NIH grants policies, operations, and data systems. Since November 2011, the implementation of NIH’s Inclusion Policy has been overseen by the Subcommittee on Inclusion Governance, which comprises senior NIH officials from ORWH, OER, and several ICs. This committee, co-chaired by the ORWH director and staffed by the NIH Inclusion Policy Officer from OER, is charged with examining and considering current NIH policies related to the inclusion of women in NIH-funded clinical research.
In addition, the implementation of the Inclusion Policy is monitored by the Advisory Committee on Research on Women’s Health (Women’s Health Advisory Committee), whose creation was mandated by the Revitalization Act, as well as by the individual ICs’ directors and advisory councils or boards. During the peer review process, applicants’ plans for including women, as appropriate, are reviewed and assessed along with the applicants’ plans to meet other requirements or considerations that are outlined in the funding opportunity announcement or research solicitation. The outcomes of these assessments—scores from the peer reviewers—inform the funding decisions made by the ICs. Prior to awards being made, program officers or other IC staff may advise awardees on additional information required before an award can be released and on the resolution of any concerns raised during the peer review stage—including concerns related to adherence to the Inclusion Policy. After awards are made, NIH’s awardees are responsible for managing their day-to-day activities in accordance with NIH requirements, and the IC making the award is responsible for the awarded funds and for monitoring progress and compliance with NIH policies, including the Inclusion Policy. IC program officers monitor awardees through a variety of methods—including reviews of reports and correspondence from the awardee, and site visits—to identify potential problems with scientific progress, compliance, and areas where technical assistance might be necessary. One such report that program officers review is the annual progress report, which includes data on study enrollment, among other things. Awardees submit information, including enrollment data, through the Electronic Research Administration (eRA) Commons, which is part of NIH’s electronic data collection and grants administration system that is used by awardees and program officers to access and share administrative information related to research awards.
NIH data show that over the last decade more women than men have been enrolled across all NIH-funded clinical research, including phase III clinical trials. NIH publicly reports aggregate enrollment numbers on a biennial basis; however, it does not routinely make detailed IC enrollment data readily available or examine more detailed enrollment data by disease and condition, in order to identify potential challenges to enrolling women in certain research and disease or condition areas. NIH requires each awardee to report enrollment, including enrollment by sex, for each of their NIH-funded research awards, including phase III clinical trials. As of fiscal year 2015, these data are reported to NIH through its Inclusion Monitoring System—one part of NIH’s awardee data system. Program officers review these data at least annually to determine whether actual enrollment is consistent with the enrollment planned in the research design. The program officers use a designated checklist, among other tools, to document their monitoring as part of the annual progress report review process. According to NIH officials, awardee enrollment data are aggregated by each IC and presented in and discussed during meetings held by each IC’s advisory board or council, which are open to the public. The IC-level enrollment data are certified as being compliant with the Inclusion Policy by the IC’s advisory board or council and by the IC Director and included in an IC-level enrollment report. The certified IC enrollment reports are submitted to ORWH and OER, where, according to NIH officials, the data are checked for consistency and errors as part of a quality control process. NIH aggregates the enrollment data across the agency and reports this aggregate data to the Women’s Health Advisory Committee, Congress, and the public in NIH’s biennial inclusion report. 
According to the data collected by NIH, in each fiscal year from 2005 through 2014, more women than men were enrolled in all NIH-funded clinical research studies, including phase III trials. (See fig. 1.) Specifically, in fiscal year 2014, among all NIH-funded clinical research studies, 57 percent of enrollees (16.4 million) were women, and 39 percent (11 million) were men. For all NIH-funded phase III clinical trials, in fiscal year 2014, 60 percent of enrollees were women (about 480,000) and 39 percent were men (about 314,000). At the individual IC level, data show that for each IC, women’s enrollment for all clinical research studies, including phase III clinical trials, was generally higher than men’s enrollment in most years from fiscal year 2011 through fiscal year 2014. (See app. III for enrollment data at the NIH and IC level over this period.) Of the 25 ICs reporting enrollment in fiscal years 2011 through 2014, 13 ICs enrolled more women than men in the ICs’ clinical research studies during all four years. An additional 4 ICs enrolled more women than men in 3 of the fiscal years between 2011 and 2014. According to NIH, 10 of the 25 ICs regularly support phase III clinical trials, and of these, 3 (about one third) enrolled more women than men in each year in fiscal years 2011 through 2014, and another 2 enrolled more women than men in 3 of the 4 fiscal years. NIH collects and reviews aggregated enrollment data from the ICs; however, NIH officials do not make these IC-level enrollment data readily available to the public or other interested parties. Specifically, the IC-level enrollment data are not published as part of the overall NIH biennial report on enrollment, are not shared with the Women’s Health Advisory Committee, and are not available for download from the ORWH website. Individual official IC enrollment reports were also not available through the websites of the three ICs in our review. 
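The phase III percentages above imply a total of roughly 800,000 enrollees; that total is an inference from the reported 60/39 percent split, not a figure stated in the text, and the residual share covers participants whose sex was not reported. A quick check under that assumption:

```python
# Fiscal year 2014 phase III enrollment counts from the report text.
women, men = 480_000, 314_000
# Assumed total, inferred from the reported 60/39 percent split;
# the remainder covers enrollees whose sex was not reported.
total = 800_000

women_pct = round(100 * women / total)   # 60
men_pct = round(100 * men / total)       # 39
print(women_pct, men_pct, total - women - men)
```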
Beginning in the fiscal year 2013-2014 reporting period, NIH required all ICs to submit their IC enrollment report numbers in a standard format, which may allow for easier public sharing of these data going forward. Additionally, NIH officials do not routinely examine detailed enrollment data by sex beyond the IC level—such as by a specific research area or disease or condition being studied—to identify potential challenges to enrolling women in these areas, because enrollment data are currently not available in this format. The guidance NIH provided to the ICs for submitting their fiscal year 2013-2014 enrollment data to ORWH and OER explains that ICs may choose to further break out their reported enrollment data by disease area, portfolio area, or in some other manner. According to NIH officials, certain ICs analyze enrollment data by various disease categories when necessary. However, the IC officials we spoke with told us that their data systems are not capable of systematically aggregating enrollment data in this manner for routine reporting. When asked if it would be possible to aggregate enrollment data by disease and condition, IC officials said that they would be able to do so, but it would be a time-intensive, manual process given current data system limitations. Further, NIH officials told us they expect NIH’s new enrollment data system, deployed in October 2014, to increase functionality in examining enrollment data in different ways, but as of July 2015 the officials did not have specific plans or details available. In addition, NIH officials stated that because an individual IC’s research generally focuses on a specific disease, a particular organ, or a stage in life, the enrollment data that are aggregated at the IC level would roughly correspond with major disease and condition categories and could be used, to some extent, as a proxy for disease and condition enrollment data.
However, this proxy method does not take into account the fact that many ICs are responsible for research that includes multiple diseases, organs, or stages in life; for example, NHLBI’s research portfolio includes studies of heart, lung, and blood related diseases and conditions. In addition, research on certain diseases and conditions—such as obesity—falls under the purview of multiple ICs. NIH’s practices of not sharing the IC-level enrollment data and not examining detailed data on enrollment by sex—by specific research area or the disease or condition being studied—are inconsistent with several federal standards for internal control. Specifically, the internal control standards for information and communications state that for an entity to run and control its operations, it must have relevant, reliable, and timely communications relating to internal as well as external events. Information is needed throughout the agency to achieve all of its objectives, and effective communication should occur in a broad sense, with information flowing down, across, and up the organization. In addition to internal communications, management should ensure there are adequate means of communicating with, and obtaining information from, external stakeholders who may have a significant impact on the agency achieving its goals (e.g., the Women’s Health Advisory Committee). Federal internal control standards for monitoring call for management to assess the quality of agency performance over time and ensure that the findings of audits and other reviews are promptly resolved. 
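Mechanically, the disease-and-condition roll-up that the IC-level proxy cannot provide is a straightforward group-by across ICs. The sketch below uses entirely hypothetical study records and invented counts to show the kind of cross-IC aggregation the report says current NIH systems cannot do routinely, including a condition (obesity) funded by more than one IC.

```python
from collections import defaultdict

# Hypothetical per-study enrollment records. An IC such as NHLBI spans
# several disease areas, and a condition such as obesity spans several
# ICs; all names and counts here are invented for illustration.
records = [
    {"ic": "NHLBI", "condition": "heart disease", "women": 900, "men": 1100},
    {"ic": "NHLBI", "condition": "lung disease",  "women": 400, "men": 350},
    {"ic": "NIDDK", "condition": "obesity",       "women": 700, "men": 500},
    {"ic": "NHLBI", "condition": "obesity",       "women": 300, "men": 450},
]

# Group enrollment by condition rather than by funding IC.
by_condition = defaultdict(lambda: {"women": 0, "men": 0})
for r in records:
    by_condition[r["condition"]]["women"] += r["women"]
    by_condition[r["condition"]]["men"] += r["men"]

# Obesity now aggregates across NIDDK and NHLBI.
print(dict(by_condition))
```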
Because NIH does not readily share IC-level enrollment data with the public and other interested parties, such as the Women’s Health Advisory Committee—through means such as the ORWH website or the biennial NIH enrollment report—those interested in reviewing IC-level enrollment information would have to attend (or watch online, if webcast) each individual IC’s advisory board or council meeting, or specifically seek out or request any public record resulting from these meetings, in order to access these data. In addition, by not routinely examining more detailed enrollment data that are aggregated by sex—such as data at the disease and condition level—NIH is limited in its ability to identify whether women are sufficiently represented in studies in specific areas that cross ICs, such as obesity. Further, NIH does not have information of sufficient detail to determine whether the aggregate enrollment data from across NIH inadvertently mask low enrollment in particular research areas or for particular diseases or conditions. At an April 2015 Women’s Health Advisory Committee meeting, some committee members raised such concerns, noting that published studies on clinical trials of specific diseases and conditions, such as cardiovascular disease, appeared to show that women’s enrollment was lower than the enrollment that NIH had reported in the aggregate. The committee members acknowledged that there could be many reasons for such discrepancies, but noted that they would like to see more detailed enrollment data to improve their understanding of the data and ensure that women are being appropriately included in NIH-funded clinical trials. NIH’s Inclusion Policy requires that individual awardees conducting phase III clinical trials consider whether analysis of potential differences in study outcomes between women and men is needed in their studies, consistent with the Revitalization Act’s provisions regarding the design of certain clinical trials.
However, the agency does not maintain, analyze, or report summary data to oversee whether analyses of outcomes by sex are planned or conducted, when applicable, across all NIH-funded clinical trials. Under the Inclusion Policy, applicants seeking funds for phase III clinical trials must consider prior scientific evidence and assess whether an analysis of potential sex differences is merited, and if so, develop a plan to analyze study results accordingly. Both this consideration and the plan for analysis, if appropriate, are to be included in the awardee’s application for funding. To ensure awardees’ compliance with this requirement, NIH officials told us they rely on the agency’s peer review process for reviewing applications, and after awards are made, on IC program officers’ monitoring of individual awardees. NIH has guidelines for peer reviewers to use when assessing applications and rating applicants’ inclusion plans during peer review. This review typically includes an evaluation of the proposed study design and assessment of whether any plans for conducting an analysis of potential sex differences are “acceptable” or “unacceptable.” The assessment and rating are based on consideration of prior scientific evidence that either supports or negates the existence of differences in outcomes by sex. Peer reviewers document this assessment in a summary statement provided to the ICs for final award determinations. According to NIH officials, if reviewers determine that an applicant’s plans are not acceptable, the applicant is barred from funding until the plans are addressed and deemed acceptable by IC officials. After awards are made, program officers from the ICs that fund the studies are responsible for monitoring awardees’ overall progress on an ongoing basis. Specifically, program officers are to review awardees’ annual progress reports, which the Inclusion Policy states should address analysis of potential sex differences, as appropriate.
Officials from one IC told us that, through their regular interactions with awardees, program officers are very familiar with the design of their awardees’ trials and whether any such analysis is planned or underway. NIH program officers monitor individual awardees’ compliance with the analysis requirement of the Inclusion Policy; however, the agency lacks summary data on awardees’ analysis plans, including the percentage of awardees in a given year with trials designed to identify potential sex differences, when applicable. Currently, program officers review awardees’ progress reports—including any information reported regarding the analysis of potential sex differences—but do not have means, such as a written checklist or a required field in an electronic reporting system, for recording the information obtained through this monitoring, as they do for monitoring enrollment. NIH’s awardee data system includes information on whether individual awards include phase III trials. However, the data system does not have a data element that denotes whether an awardee’s study should include or has plans for an analysis of potential sex differences. Although such information is included in the narrative of awardees’ funding applications, this information cannot be easily aggregated for summary reporting, NIH officials explained, and therefore they do not estimate the proportion of trials being conducted at any one time that are designed and explicitly intended to identify differences in study outcomes by sex. NIH officials also told us that they plan to add a question for this type of monitoring to the existing electronic checklist used by program officers in the fall of this year, to be implemented for awards funded in fiscal year 2016.
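The missing summary statistic, the share of phase III awards with a planned sex-difference analysis, would be simple to compute once a dedicated data element exists. The sketch below uses hypothetical award records with an invented structured field (`sex_analysis_planned`) of the kind the report notes NIH's system lacks; it is not NIH's actual schema.

```python
# Hypothetical award records with an invented structured field
# (sex_analysis_planned) of the kind described in the text above.
awards = [
    {"id": "A1", "phase3": True,  "sex_analysis_planned": True},
    {"id": "A2", "phase3": True,  "sex_analysis_planned": False},
    {"id": "A3", "phase3": False, "sex_analysis_planned": False},
    {"id": "A4", "phase3": True,  "sex_analysis_planned": True},
]

# Restrict to phase III awards, then count those with a planned analysis.
phase3 = [a for a in awards if a["phase3"]]
planned = sum(a["sex_analysis_planned"] for a in phase3)
share = round(100 * planned / len(phase3), 1)
print(f"{planned} of {len(phase3)} phase III awards ({share}%) plan a sex-difference analysis")
```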
Because NIH does not have summary data regarding the analysis requirement of the Inclusion Policy, it has not reported summary information on this aspect of the Inclusion Policy to key stakeholders—including the Women’s Health Advisory Committee and the Congress. Notably, NIH’s agency-wide biennial reports on the status of the Inclusion Policy do not include information on the extent to which NIH-funded phase III trials included plans to conduct analyses of potential sex differences, or on the overall status of this aspect of the Inclusion Policy. Instead, the report focuses primarily on NIH-wide aggregate enrollment. NIH officials told us they rely on program officers’ monitoring of individual awardees to ensure that the analysis requirement of the Inclusion Policy is being implemented appropriately after awards are made, because part of the program officer’s role is to ensure satisfactory scientific progress as well as compliance with NIH policies. The officials added that they were not sure of the utility of summary reporting in this case. NIH’s lack of summary data and reporting regarding the analysis requirement of the Inclusion Policy conflicts with federal internal control standards. First, federal internal control standards require that federal agencies have control activities in place to ensure that management’s directives are carried out and that these controls are monitored. These standards also state that information should be recorded and communicated to management and other responsible officials in a form and within a time frame that enables them to carry out their internal control and other responsibilities.
Without summary data on the funded phase III clinical trials that are intended to provide information on potential sex differences—including the number of such trials funded in a given year—senior NIH officials are limited in their ability to effectively oversee the implementation of the Inclusion Policy and to assess whether changes are needed to their procedures. Further, NIH cannot provide this information to stakeholders such as the Women’s Health Advisory Committee and Congress in its regular reporting on other aspects of the Inclusion Policy. As a result, these stakeholders lack assurance that the agency is implementing the Inclusion Policy as intended and in a manner consistent with the Revitalization Act’s provisions regarding the design of certain clinical trials. In its fiscal year 2011-2012 Report of the Advisory Committee on Research on Women’s Health, NIH previously acknowledged that inclusion is not just a matter of having women and men included in clinical studies; rather, the scientific value of research studies is greatly enhanced by providing knowledge about differences and/or similarities between different populations affected by the diseases under study. However, without assurance that its clinical trials are being designed and conducted as directed under the law and its implementing policy, NIH’s insight regarding the interpretation, validation, and generalizability of findings resulting from the research it supports—as these findings apply to both women and men—is diminished, potentially limiting the value of NIH-funded research. NIH has developed and proposed a policy intended to increase public reporting of clinical trial results by requiring all NIH-funded clinical trials to be registered and have results submitted to its registry and results database, ClinicalTrials.gov. 
ClinicalTrials.gov contains accessible and searchable information on publicly and privately supported clinical trials and observational studies that is provided and updated by NIH awardees and other researchers. Currently, all NIH awardees are encouraged to register their trials with ClinicalTrials.gov at the beginning of the study, but only certain trials—those of certain drugs and devices regulated by the Food and Drug Administration—must register and provide summary results as required by law. According to NIH officials, as of July 2015, there were approximately 195,000 studies registered in ClinicalTrials.gov, and almost 18,000 of these have summary results information posted. Under the proposed NIH policy, all NIH-funded clinical trials, regardless of trial phase or type of intervention being studied, would have to be registered and submit summary results—including participant flow, baseline demographics such as the sex and age of participants, primary and secondary outcomes, and adverse events—to ClinicalTrials.gov. According to NIH, if the proposed policy goes into effect, compliance may be enforced through possible suspension or termination of funding and noncompliance could impact future funding decisions. NIH sought public comments on the proposed policy from November 2014 through March 2015 and, as of August 2015, was analyzing the comments it received. NIH anticipates that the final policy will be issued in the first quarter of fiscal year 2016. NIH officials told us that the proposed policy for clinical trial registration and results submission was not intended to increase reporting of sex-specific results to ClinicalTrials.gov; however, the officials also said that there is the potential for more reporting of sex-specific results, given the overall increase expected in the number of reported studies. 
Since the proposed policy would require awardees to report results for their prespecified primary and secondary outcome measures as part of the summary results submission, the reporting of sex-specific results would depend on the design of the trial, according to NIH officials. Specifically, if sex differences were among the prespecified primary and secondary outcomes studied in a specific trial, officials said, then that would be reflected in the results submitted to ClinicalTrials.gov. In issuing the proposed policy, NIH stated that its awardees are expected to make their trial results available to the research community and to the public at large in order to contribute to scientific knowledge and, ultimately, public health. NIH proposed the policy partly in response to a recent study, which found that within 30 months of trial completion, the results of fewer than half of NIH-funded clinical trials had been published in a peer-reviewed biomedical journal, the traditional method for sharing results. NIH stated that because journal publication of clinical trials results is not always possible, it is important to provide other ways for clinical trial results to be disseminated and publicly available to researchers, health care providers, and others. NIH has made efforts to encourage sex-specific reporting of clinical trial results. In NIH’s fiscal years 2011-2012 Report of the Advisory Committee on Research on Women’s Health, ORWH stated that it is only through sex-specific reporting that full information becomes available to the public and to scientists who can then use such data to inform future studies, thereby building the knowledge base in a manner that takes into consideration the influences of sex on health and disease. Specifically, NIH has worked with journal editors and others to encourage reporting of results by sex. 
Specifically, in 2011, NIH asked IOM to convene a workshop of researchers, journal editors, and others on the topic to discuss the importance of reporting results by sex and the implications of this issue for journals’ reporting policies. According to Stanford University’s Gendered Innovations project, 32 peer-reviewed journals worldwide have editorial policies requiring that clinical trial researchers include information on results by sex when they submit articles for publication. However, editors from one medical journal that we spoke with stated that when evaluating whether results should be reported by sex, it is important to consider whether examining sex differences is a primary outcome of the study, and whether the trial was big enough for a valid subgroup analysis—i.e., analysis of the effect of the intervention on two or more different groups of participants, such as women and men. They emphasized that if a study was not designed for a subgroup analysis by sex and one was performed, the results could be erroneous. NIH has also made efforts to facilitate the sharing of clinical trials results, including sex-specific information, through venues other than journals and ClinicalTrials.gov. NIH has a number of policies that promote the dissemination of research results—and the underlying data—and guide awardees in disseminating their results, including the NIH Data Sharing Policy, among others. Additionally, related to the sharing of sex-specific information, NIH hosts a Women’s Health Resources portal and a Women’s Health topic page on the Medline Plus webpage, which includes links to other information about women’s health from journal articles and ClinicalTrials.gov. The agency also reports summaries of research related to women’s health in the biennial Report of the Advisory Committee on Research on Women’s Health. 
Although NIH has made progress in the 2 decades since the 1993 Revitalization Act regarding the inclusion of women in NIH-funded clinical research, opportunities remain for NIH to further extend the value of its investment in medical research. NIH is responsible for ensuring that the nation receives the greatest benefit of the large federal investment in clinical research by fully implementing its Inclusion Policy, such that women are adequately included in NIH-funded clinical trials when appropriate, and that potential sex differences may be identified. By not readily sharing IC-level enrollment data, NIH limits the public’s and other interested parties’ ability to gain insight into enrollment issues at each of the ICs, putting the onus on interested parties themselves to obtain these data by attending or viewing online up to 25 individual IC board or council meetings or requesting any public record of these meetings. By not examining more detailed enrollment data—such as data aggregated by research area or specific to various diseases and conditions—NIH cannot know whether it is adequately including women across all of the research it supports. Without this greater insight into enrollment for specific diseases and conditions, NIH is limited in its ability to assess whether its programs that support cross-cutting research spanning multiple ICs are successfully including women in clinical research or facing challenges that the agency should address. Further, the lack of summary data and reporting about the extent to which awardees plan to conduct or perform analyses of potential sex differences in phase III clinical trials compromises NIH’s oversight and jeopardizes the agency’s ability to provide assurance that it is meeting the Act’s provisions regarding the design of certain clinical trials and the purposes of its Inclusion Policy. 
Without summary data, such as the proportion of trials being conducted that intend to analyze differences in outcomes for men and women, and reporting on that data, NIH and Congress cannot know whether or to what extent current efforts are helping to ensure that differences in clinical outcomes by sex are identified and that NIH is supporting research that can be used to shape improved medical practices for both women and men. The overall increase in the enrollment of women in NIH-funded clinical research studies, such that women have been a significant proportion of research subjects for nearly 2 decades, is a noteworthy achievement for NIH. To continue to build on this achievement—and consistent with federal internal control standards—NIH should turn its focus to assessing whether the agency is meeting the purposes of the Inclusion Policy, and if it is not, take the needed corrective actions. To ensure effective implementation of the Inclusion Policy in a manner consistent with the Revitalization Act’s provisions regarding the design of certain clinical trials, the NIH Director should take the following five actions: make IC-level enrollment data readily available through public means, such as NIH’s regular biennial report to Congress on the inclusion of women in research, or through NIH’s website; examine approaches for aggregating more detailed enrollment data at the disease and condition level, and report on the status of this examination to key stakeholders and through its regular biennial report to Congress on the inclusion of women in research; ensure that program officers have a means for recording information obtained from monitoring awardees’ plans for and progress in conducting analyses of potential differences in outcomes by sex; on a regular basis, systematically collect and analyze summary data regarding awardees’ plans to conduct analyses of potential sex differences, such as the proportion of trials being conducted that intend to analyze 
differences in outcomes for men and women; and report on this summary data and the results of this analysis in NIH’s regular biennial report to Congress on the inclusion of women in research. We provided a draft of this product to HHS for comment, and HHS responded with comments provided by NIH. In its written comments, reproduced in appendix IV, NIH generally concurred with our findings and recommendations. NIH also provided technical comments that were incorporated into the final report, as appropriate. In commenting on our first recommendation to make IC-level enrollment data readily available to the public, NIH agreed and indicated that there are opportunities for the agency to increase the accessibility of IC-level enrollment data. NIH also stated that the agency has begun to standardize IC enrollment reporting and will continue this effort by standardizing data tables and graphics for ICs to provide for the NIH-wide biennial reports. NIH did not provide a timeline for making this information readily available to the public. NIH agreed with our second recommendation to examine approaches for aggregating more detailed enrollment information at the disease and condition level, and to report on the status of this examination to key stakeholders. In its comments, NIH also reiterated what we describe in our report: some ICs conduct analysis of enrollment by disease or condition on an as-needed basis. NIH noted that the agency is working on ways to analyze enrollment at the disease and condition level across the ICs. NIH did not provide information on when the agency would be able to analyze these enrollment data, but it did state that when the agency is able to perform the analysis, NIH would make the results readily available through NIH’s biennial inclusion reports or other means. 
NIH agreed with our third recommendation to ensure that program officers have a means for recording their monitoring of awardees’ plans for and progress in conducting analysis of potential sex differences, and confirmed that the agency plans to add questions that would facilitate this type of monitoring into the existing checklist program officers use to document other types of monitoring beginning in fiscal year 2016. In commenting on our fourth and fifth recommendations regarding collecting and reporting summary data on awardees’ plans for sex-differences analysis, NIH agreed that it is critical to obtain more information on which clinical trials involve analyses of sex differences, and described some alternative data collection approaches for improving oversight of this issue. We maintain that thoughtful, useful analysis and summary reporting would improve NIH’s oversight of this aspect of the Inclusion Policy. Our recommendation was not intended to prescribe or limit the type of analysis performed or the data collected by NIH; instead we provided an example that NIH could adapt as needed, and we encourage the agency to explore the best alternatives for its analyses. In other general comments, NIH also noted other opportunities that support oversight, such as the importance of peer reviewers in examining applicants’ plans for including women prior to funding decisions, and expanded reporting in ClinicalTrials.gov. In addition, the agency noted that ClinicalTrials.gov could engender greater transparency of clinical trial results and help assure that the analyses required under the Inclusion Policy are being completed. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies of this report to the Secretary of the Department of Health and Human Services, the Director of the National Institutes of Health, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. We reviewed 34 published journal articles that specifically identified barriers to or reasons for women’s participation in clinical research. We conducted an initial literature search that identified 168 studies published from January 2004 through October 2014 on the topic of women in clinical research. After reviewing abstracts and the full text of some articles, we narrowed this group down to the 34 articles included in this report that focus on the factors specifically affecting women’s participation in clinical research. We also spoke with or received written responses from officials and program officers from 3 institutes and centers (ICs) of the National Institutes of Health (NIH) about the factors affecting women’s participation in clinical research reported by NIH awardees. In our review of the literature, the three most commonly cited barriers women face when considering whether to participate in clinical research were fear of experimentation/trust issues, health-related concerns, and transportation/convenience issues. (See table 1.) Fear of experimentation/trust issues included distrust of physicians and the medical community and fear that the experimental treatment may be inferior to the conventional treatment(s). 
Health-related concerns cited in the articles included concerns about the potential side effects or other adverse events associated with the experimental treatment. Some of the transportation/convenience concerns cited in the articles included difficulty traveling to a clinic (no transportation or a long distance to the facility) and general inconvenience associated with participating in the research. The three most common reasons for women’s participation in clinical research identified in our literature review were personal/health benefits, altruism, and a general category of “other” reasons. (See table 2.) Personal/health benefits included access to new treatment and drugs, while “other” reasons included the fact that the research was being done at a clinic where the participant had already received care or was being conducted by clinical staff with whom the participants were familiar. Participation due to altruism included the desire to help science. The NIH IC officials and program officers we spoke with generally agreed that the barriers and reasons for participation that we identified through our literature review were consistent with those they have encountered when working with their awardees. In addition, officials from one IC we spoke with identified other reasons women participate in clinical research, as cited by awardees. These reasons included the opportunity to learn about the disease being studied and how to manage it, and a sense of pride associated with participation. In contrast, these officials stated that for some diseases, the effect or perceived effect on future fertility could be a barrier to women’s participation. To determine the amount of NIH funding for research on women’s health, we collected and reviewed NIH women’s health budget information for fiscal years 2009-2014. 
We also interviewed officials from NIH and the Department of Health and Human Services (HHS) regarding the budget categories that NIH uses in its women’s health budget, which were originally developed for use in reporting by all HHS agencies. NIH, like all HHS agencies, annually compiles a summary table that estimates funding for women’s health research. Per HHS’s guidance, NIH classifies its funding into 16 categories and about 120 sub-categories representing specific diseases and conditions; and three categories by sex—funding for women’s health research, funding for men’s health research, and funding for research related to both women’s and men’s health. Since the sex categorization of funding is based on enrollment, a portion of funding for research studies for certain diseases that primarily affect women—such as cervical cancer—but are reported as including male participants may be categorized as related to men’s health or both sexes. NIH reports these funding estimates in HHS’s annual congressional budget justification and in the biennial Report of the Advisory Committee on Research on Women’s Health. To more closely examine the amount NIH funded for research on selected diseases and conditions with a particular relevance to women, we developed a list of diseases and conditions using the top 10 diseases and conditions in each of the Centers for Disease Control and Prevention’s 2012 lists of leading causes of death for women and leading chronic diseases for women. We supplemented this information with the diseases and conditions with a particular relevance to women that were identified in a 2010 Institute of Medicine report, Women’s Health Research: Progress, Pitfalls, and Promise. We worked with NIH and several of its institutes and centers (IC) to assign each of the diseases and conditions in our list to the NIH budget categories that were included in the women’s health budget. 
These matches allowed us to provide an estimate of funding for these selected diseases and conditions, which are shown in Table 3. NIH officials told us that determining the amount NIH funded for women’s health research overall—as well as for specific diseases and conditions—is difficult, and the resulting figures are estimates, rather than actual amounts, due to challenges in compiling the funding data. These challenges include: (1) methodological issues in assigning research funding to a sex category, especially for basic research, which does not include human subjects, (2) difficulties assigning NIH research funding to broad HHS-determined disease categories, (3) research projects that overlap disease categories, but must be assigned to a single disease category, and (4) variation in data collection processes at the IC level. Tables 4 and 5 below present NIH and IC-level enrollment data, by sex, for fiscal years 2011 through 2014—for all clinical research studies and for phase III clinical trials, respectively. In addition to the contact named above, Karen Doran, Assistant Director; Amanda Cherrin; Emily Loriso; and Julie T. Stewart made key contributions to this report. Jennie F. Apter, Leia Dickerson, Krister Friday, and Jacquelyn Hamilton also contributed to the development of this report.
Women make up over half the U.S. population, but historically have been underrepresented in clinical research supported by NIH and others. As a result, differences in the manifestation of certain diseases and reactions to treatment in women compared with men were not identified. For example, there have been instances of women having adverse effects that differed from those of men related to medications and other treatments. NIH's Inclusion Policy established requirements governing women's inclusion in its clinical research. GAO was asked to provide information on women's participation in NIH research. Among other reporting objectives, GAO examined (1) women's enrollment and NIH's efforts to monitor this enrollment in NIH-funded clinical research; and (2) NIH's efforts to ensure that NIH-funded clinical trials are designed and conducted to analyze potential sex differences, when applicable. To do this, GAO reviewed relevant laws and policies, including the Inclusion Policy, and federal standards for internal control; reviewed and analyzed NIH enrollment data from fiscal years 2005-2014; and interviewed NIH and IC officials and other experts. Data from the National Institutes of Health (NIH) show that more women than men were enrolled in NIH-funded clinical research for fiscal years 2005-2014, but NIH does not make certain enrollment data readily available to interested parties or examine other detailed data to identify potential challenges to enrolling women in specific research and disease or condition areas. In fiscal year 2014, for example, NIH reported that across all of the clinical research studies it funded—including phase III clinical trials, the largest studies involving human subjects—57 percent of enrollees (16.4 million) were women. 
NIH collects enrollment data from individual awardees through its Institutes and Centers (IC)—which generally fund studies in different research areas—and publicly reports data on aggregate enrollment as part of its implementation of the Inclusion Policy developed to implement provisions of the NIH Revitalization Act of 1993. However, NIH does not make the IC-level enrollment data from each of the 25 ICs that report data readily available to interested parties, meaning that interested parties must seek out these data themselves. In addition, NIH does not routinely examine more detailed enrollment data, such as enrollment data organized by the disease and condition being studied. As a result, NIH is limited in its ability to identify whether women are sufficiently represented in studies in specific areas—such as cardiovascular disease—or if the agency-wide data inadvertently mask enrollment challenges. By not examining more detailed data on enrollment below the aggregate level, NIH cannot know whether it is adequately including women in all of the research it supports, in a manner consistent with its Inclusion Policy. Further, NIH's reporting and monitoring in this area is inconsistent with federal internal control standards, which call for agencies to have controls to help ensure effective information flow and effective monitoring of agency activities. NIH requires that phase III clinical trial awardees consider whether analysis of potential differences in outcomes between women and men is needed in their studies—one of the key requirements of its Inclusion Policy; however, the agency does not maintain, analyze, or report summary data to oversee whether analyses of outcomes by sex are planned or conducted. NIH officials told GAO that they rely on peer review and program officer monitoring to ensure awardee compliance with the analysis requirement. 
However, NIH program officers do not have a required field in a reporting system or other means to record the information they collect to monitor awardees' analysis plans and compliance with the Inclusion Policy requirement. In addition, there is no data element in NIH's data system to indicate whether an awardee's study should or does include plans for an analysis of potential differences in research outcomes by sex. As a result, NIH lacks summary data, such as the percentage of awardees in a given year with trials designed to identify potential differences in clinical outcomes by sex. Without this summary information, NIH cannot report this information in the agency's biennial reports to Congress and other stakeholders. The lack of summary data and reporting compromises NIH's monitoring of its implementation of the Inclusion Policy and conflicts with federal internal control standards, which call for agencies to ensure the flow of information about agency activities, provide for internal and external communication, and conduct periodic monitoring. Further, it limits NIH's assurance that it is supporting research that can be used to shape improved medical practice for both women and men. GAO recommends that NIH examine and report more detailed data on women's enrollment in NIH-funded studies, and collect, examine, and report data on the extent to which these studies include analyses of potential differences between women and men. NIH agreed with GAO's recommendations and plans to take action to implement them.
According to DOJ officials, the JAG program provides states and localities with federal funds to support all components of the criminal justice system while providing a great deal of flexibility in how they do so. Recovery Act JAG-funded projects may provide services directly to communities or improve the effectiveness and efficiency of criminal justice systems, processes, or procedures. Like non-Recovery Act JAG funds, Recovery Act JAG awards are to be used within the context of seven statutorily established areas. The seven statutorily established areas and examples of how JAG funds may be used within these areas are outlined in table 1 below. 

Financial Requirements and Internal Controls 

DOJ requires that all Recovery Act JAG award recipients establish and maintain adequate accounting systems, financial records, and internal controls to accurately account for funds awarded to them and their subrecipients. Award recipients must also ensure that Recovery Act JAG funds are accounted for separately and not commingled with funds from other sources or federal agencies. If a recipient or subrecipient’s accounting system cannot comply with the requirement to account for the funds separately, then the recipient/subrecipient is to establish a system to provide adequate fund accountability for each project that has been awarded. 

Recipient Reporting and Performance Measurement Requirements 

All state and local Recovery Act JAG recipients are required to meet both Recovery Act and BJA quarterly reporting requirements. The Recovery Act requires that nonfederal recipients of Recovery Act funds (including recipients of grants, contracts, and loans) submit quarterly reports, which include a description of each project or activity for which Recovery Act funds were expended or obligated, and an estimate of the number of jobs created and the number of jobs retained by these projects and activities. 
In particular, the Recovery Act requires recipients to report on quarterly activities within 10 days of the end of each quarter. For Recovery Act JAG grants, BJA has added language in the grant awards that requires that grantees meet the federal reporting requirements and provides sanctions if they do not. Because the Recovery Act JAG program includes a pass-through element, SAAs must gather the required data elements for all pass-through recipients during the same 10-day time frame in order to meet their own reporting requirements. Separately, BJA requires that states and those localities receiving their funds directly through DOJ report on their progress in meeting established performance measures related to funded activities. BJA also requires all Recovery Act JAG recipients to submit an annual programmatic report with narrative information on accomplishments, barriers, and planned activities, as well as a quarterly financial status report as required by the Office of Management and Budget (OMB). In early 2010, after a year-long development and initial refinement period, BJA officially launched a new, online Performance Measurement Tool (PMT) to improve upon its previous grants management system and allow online performance measurement data submission. BJA plans to use the PMT to help evaluate performance outcomes in at least 13 grant programs, including Recovery Act JAG. According to the Standards for Internal Control in the Federal Government, activities need to be established to monitor performance measures and indicators. Such controls should be aimed at validating the integrity of performance measures and indicators—in other words, ensuring they are reliably designed to collect consistent information from respondents. BJA is also planning to use the PMT to assess performance measurement data and direct improvement efforts in 5 additional programs by the end of 2010. 
However, given that grantees were not required to submit their PMT reports until the second quarter of fiscal year 2010, some grantees did not begin submitting their first completed PMT reports until March 2010. BJA requires Recovery Act JAG recipients to use the PMT for quarterly reporting on their status in meeting the Recovery Act JAG program’s 86 individual performance measures, such as percent of staff who reported an increase in skills and percent of Recovery Act JAG-funded programs that have implemented recommendations based on program evaluation. Recipients of Recovery Act JAG funding receive their money in one of two ways—either as a direct payment from BJA or as a pass-through from an SAA—and they reported using their funds primarily for law enforcement and corrections. According to state officials from our sample states, more than half of the funding that localities received as pass-through awards from their SAAs was obligated specifically for law enforcement and corrections support, while about a quarter of the funds that recipients of direct awards received was dedicated exclusively to law enforcement. Regardless of the source, officials in states and localities reported using Recovery Act JAG funds to preserve jobs and activities that without Recovery Act JAG funds would have been cut or eliminated; however, expenditure rates across states in our sample showed considerable variation. BJA allocates Recovery Act JAG funds the same way it allocates non-Recovery Act JAG funds: by combining a statutory formula determined by states’ populations and violent crime statistics with a statutory minimum allocation to ensure that each state and eligible territory receives some funding. Under this statutory JAG formula, the total award allocated to a state is derived from two sources, each given equal value: half of the allocation is based on a state’s respective share of the U.S. 
population, and the other half is based on the state’s respective share of violent crimes, as reported in the Federal Bureau of Investigation’s (FBI) Uniform Crime Report (UCR) Part I for the 3 most recent years for which data are available. Of such amounts awarded to states, 60 percent of a state’s allocation is awarded directly to an SAA in each of the states, and each SAA must in turn allocate a formula-based share of these funds to local entities, which is known as the “pass-through portion.” BJA awards the remaining 40 percent of the state’s allocation directly to eligible units of local government within the state. The eligible units of local government that receive direct awards from DOJ either get them individually or as part of awards to “disparate” jurisdictions that jointly use correctional facilities or prosecutorial services. In the case of disparate jurisdiction awards, to qualify for funds, the units of local government involved must submit a joint application to DOJ and sign a memorandum of understanding (MOU) outlining how they will share funds. They also are to determine among themselves which local government will serve as the fiscal agent, and thereby be responsible for reporting to DOJ on behalf of the others and ensuring that all members of the disparate jurisdiction follow applicable federal financial guidance and meet reporting requirements. The following figure illustrates the participation of localities in a disparate jurisdiction award. In the example, High Point city is the fiscal agent and Greensboro city and Guilford County are both subrecipients. The total awards that DOJ allocates directly to units of local government—the 40 percent share—are to be based solely on the local jurisdiction’s proportion of the state’s total violent crime 3-year average based on reports from the FBI’s UCR Part I. 
Units of local government that could receive $10,000 or more after the Bureau of Justice Statistics (BJS) analyzes the UCR data are eligible for a direct award from DOJ. Funds that could have been distributed to localities through awards of less than $10,000 are grouped together and then provided to the SAA. Under the JAG program, SAAs and direct grant recipient agencies may draw down funds from the Treasury immediately rather than requiring up-front expenditure and documentation for reimbursement. Such funds are required to be deposited into an interest-bearing trust fund and, in general, any interest income that states and localities earn from the funds drawn down is to be accounted for and used for program purposes. Table 2 shows the total allocation of Recovery Act JAG funding across our sample states, including the grant amounts BJA made directly to the SAAs (the 60 percent share); the number of pass-through grants the SAAs made in turn; and the grant amounts and number of grants BJA made directly to localities (the 40 percent share). The 14 states in our sample received $1,033,271,865 in JAG Recovery Act funds, which was more than half of the funds awarded nationwide for the program. Of the total of 1,338 direct awards that DOJ made to localities in the 14 states in our sample, approximately one-third of these direct awards, or 436, went to disparate jurisdictions and are split by agreement among the designated jurisdictions. Under these arrangements, one jurisdiction functions as the prime recipient and fiscal agent who is supposed to be responsible for submitting all programmatic or financial reports on behalf of the disparate group as well as monitoring other neighboring localities’ use of funds on activities covered by the grants. In our sample states, while one-third of the total number of direct grant awards were made to disparate jurisdictions, these arrangements accounted for 72 percent of the funds DOJ awarded directly to local recipients. 
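The allocation steps described above (the half population, half violent crime formula; the 60/40 split between the SAA and localities; and the $10,000 threshold for direct local awards) can be sketched in a few lines of code. This is an illustrative sketch only: the function names and figures are assumptions, not DOJ's actual computation, which also involves statutory minimums and BJS's analysis of UCR data.

```python
def state_allocation(total_funds, pop_share, violent_crime_share, statutory_minimum=0.0):
    """Half of a state's award tracks its share of the U.S. population;
    the other half tracks its share of UCR Part I violent crime
    (3-year average), subject to a statutory minimum allocation."""
    formula_award = total_funds * (0.5 * pop_share + 0.5 * violent_crime_share)
    return max(formula_award, statutory_minimum)


def split_award(state_award):
    """60 percent of the state's allocation goes to the SAA; BJA awards
    the remaining 40 percent directly to eligible localities."""
    return {"saa_share": state_award * 0.60, "local_share": state_award * 0.40}


def direct_local_awards(local_share, local_crime_shares, threshold=10_000):
    """Localities whose crime-based amount works out to $10,000 or more
    receive a direct DOJ award; amounts under the threshold are pooled
    and provided to the SAA instead."""
    direct, pooled = {}, 0.0
    for locality, crime_share in local_crime_shares.items():
        amount = local_share * crime_share
        if amount >= threshold:
            direct[locality] = amount
        else:
            pooled += amount
    return direct, pooled
```

For example, a hypothetical state holding 2 percent of the U.S. population and 3 percent of reported violent crime would receive 2.5 percent of the formula pool, split 60/40 between its SAA and its localities.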
For example, in Illinois, 100 percent of direct awards were provided to disparate jurisdictions, and in 8 of the other 13 states DOJ awarded more than 70 percent of funds in this manner. Officials we met with in localities that received funds under this type of arrangement reported that they provided varying amounts of oversight in their role as fiscal agent. The DOJ Inspector General has raised the oversight of subgrantee awards as an issue for DOJ’s attention and has recommended that DOJ develop further training for recipients; DOJ concurred with the recommendation. Table 3 summarizes the distribution of direct award funds to disparate jurisdictions in our sample states. The 14 SAAs in our sample received more than $630 million collectively as their share of the Recovery Act JAG funds. JAG statutory provisions require that each state pass through no less than a specific designated minimum percentage of the funds that it receives as subgrants to localities, municipal governments, and nonprofit organizations. Among our sample states, this mandatory pass-through percentage varied from a high of 67.3 percent in California to a low of 35.5 percent in Massachusetts. SAAs are also allowed to retain up to 10 percent of the funds that they receive for administrative purposes. The completion of these pass-through award processes occurred at different rates across the 14 states that we sampled and resulted in some states expending their Recovery Act JAG funds faster than others. As of June 30, 2010, the SAAs we reviewed had made nearly all of their pass-through awards, with the exception of Mississippi and Pennsylvania. In addition, many local pass-through recipients reported that there was a time lag in being reimbursed by their SAAs for funds that they had spent. Additional information on amounts drawn down and expended is included in appendix IV. 
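The statutory limits just described reduce to simple percentages of an SAA's award. A minimal sketch follows, using California's 67.3 percent mandatory pass-through rate from the report and a hypothetical $10 million award; the function and field names are illustrative assumptions.

```python
def saa_fund_limits(saa_award, mandatory_pass_through_rate, admin_cap_rate=0.10):
    """Compute the statutory minimum an SAA must subgrant to localities
    and the maximum (up to 10 percent) it may retain for administration."""
    return {
        "minimum_pass_through": saa_award * mandatory_pass_through_rate,
        "maximum_admin_retention": saa_award * admin_cap_rate,
    }

# Hypothetical $10 million SAA award at California's 67.3 percent rate:
# at least $6.73 million must be passed through, and at most $1 million
# may be retained for administrative purposes.
limits = saa_fund_limits(10_000_000, 0.673)
```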
According to Recovery.gov, the SAAs and localities that received grant funds directly from DOJ in our sample of 14 states were awarded approximately $1.028 billion in Recovery Act JAG funds. This amount represents about 52 percent of the nearly $2 billion awarded to SAAs and directly funded localities across the nation. As of June 30, 2010, the SAAs and the directly funded localities in our sample had expended over $270.7 million, or about 26.4 percent of the total amount awarded. Recovery Act JAG fund recipients may spend their respective awards over a 4-year period. As depicted in figure 2 below, in the 14 states in our sample, the expenditure of Recovery Act JAG funds generally lags behind the amount of funds awarded by the SAAs and drawn down. For example, as of June 30, 2010, California—whose SAA received the largest direct award in our sample—had expended only about $6.6 million, or nearly 5 percent, of the $135 million in JAG grant funds the state received. Texas reported expending the most—more than $37 million—after combining expenditures the SAA made independently with the expenditures made by the more than 400 pass-through recipients. California SAA officials stated that they delayed awarding JAG funds because of the design of two new programs focused on probation and drug offender treatment services, which accounted for $90 million of the $135 million in grant funds the SAA received. As of June 30, 2010, all of California’s subrecipient grant award agreements had been finalized, but many projects had only recently become fully operational, resulting in slow expenditure of funds, which are handled on a reimbursement basis. In Pennsylvania, SAA officials said the state faced two challenges in expending Recovery Act JAG funds quickly: (1) a state budget impasse, which delayed the allocation of Recovery Act JAG awards; and (2) Recovery Act JAG funding for state projects focused on technology costs, which require lengthy procurement times. 
Further, they noted that state pass-through funding to localities is recorded on a quarterly basis after expenses are incurred, so the pace of expenditure could be somewhat misleading. Other SAA officials we contacted cited additional reasons for more slowly expending Recovery Act JAG funds. For example, all of the SAAs we contacted have procedures in place that require subrecipients to make their purchases up front with local funds and request reimbursement from the SAA after documentation is received. Two states we contacted have policies that restricted Recovery Act JAG funding to shorter time limits with an option for renewal, rather than providing localities authority to use grants during the 4-year grant period applicable to the initial recipient of the grant. In addition, 1 of the 14 SAAs preferred to retain Recovery Act JAG funds and expend them gradually on longer-term projects, such as technology improvements, as allowed during the 4-year grant period. Using funds received through direct and pass-through awards, all states reported using Recovery Act JAG funds to prevent staff, programs, or essential services from being cut. In addition, local officials reported that without Recovery Act JAG funding, law enforcement personnel, equipment purchases, and key local law enforcement programs would have been eliminated or cut. SAAs reported that they passed through about 50 percent of their funds, and collectively they planned to use the largest share—about 30 percent, or almost $168 million—for law enforcement purposes. Direct recipients reported that funds would most often be used for multiple purposes. Officials from all states in our sample reported using Recovery Act JAG funds to prevent staff, programs, or essential services from being cut. 
Also, 19 percent of localities in GAO’s sample, or officials in 12 of 62 localities, provided specific examples of ongoing local law enforcement programs or activities, such as juvenile recidivism reduction programs, prisoner re-entry initiatives, and local foot or bicycle patrols in high-crime neighborhoods that would not have continued without the addition of these funds. Table 4 provides some examples that state and local recipients reported regarding how they used Recovery Act JAG funds to help them preserve jobs and essential services. SAAs reported that they awarded the largest share—about 30 percent, or almost $168 million—for law enforcement purposes, such as hiring or retaining staff who might otherwise have been laid off, or purchasing equipment in direct support of law enforcement activities, as shown in figure 3. In addition, SAAs reported awarding approximately 24 percent, or more than $137 million, to support corrections programs or activities. SAAs reported allocating the smallest share for crime victim and witness programs, 2.1 percent or approximately $11.8 million. Within the category of law enforcement, equipment expenditures spanned a wide range of law enforcement gear, but vehicles and weapons purchases were often reported. Frequent types of purchases included: police cruisers; weapons, such as TASERs, and ammunition; communications devices, such as hand-held two-way radios, and mobile laptops in police cruisers; and safety equipment, such as protective vests and shields. See appendix V for examples of selected equipment purchased with JAG funds. Overall, localities in 13 out of the 14 states we contacted reported using Recovery Act JAG funds to maintain positions or pay officer overtime for activities related to law enforcement. Individual SAAs, however, reported obligating their Recovery Act JAG funds in a variety of ways as shown in table 5. 
The percentages do not include the funds that the SAAs retained for administrative purposes or funds not yet awarded. Nearly all SAAs in our sample states, except for Iowa, which reported using most of its funds to support drug enforcement activities, reported using Recovery Act JAG funds to support law enforcement activities. With the exception of Iowa, at the state level the share of Recovery Act JAG funds used to support direct equipment purchases and personnel expenses ranged from a high of 65.8 percent in Texas to a low of 1.7 percent in New York. Localities in more than a third of the states in our sample (5 of 14) reported that uncertainties about the availability of future JAG funding steered them toward one-time equipment purchases, such as the procurement of license plate readers and in-car laptop computers, rather than investments, such as hiring new personnel, that would require an ongoing commitment of funds and whose sustainability could be threatened when Recovery Act JAG funds expire. In addition, officials in about a quarter of the localities in our sample (15) discussed how they coordinate the use of their Recovery Act JAG funds with resources that they received from other federal funding streams. For example, the cities of Austin, Texas, and Greensboro, North Carolina, were each waiting to receive a separate federal grant specifically for the purpose of hiring police officers so that they could determine whether to spend Recovery Act JAG funds to equip the officers once hired. See figure 4 for an interactive map with additional information on Recovery Act JAG fund purchases and activities in our sample states. As shown in figure 5, data reported by direct recipient localities in the 14 states that we sampled indicate that they obligated the largest share—more than 63 percent, or over $256 million—for multiple purposes and 21.5 percent, or about $86.8 million, to directly support law enforcement programs or activities. 
Program planning, evaluation, and technology improvement funds, which accounted for approximately 8 percent of spending, were primarily used to enhance communications equipment or purchase computer hardware and software for all types of criminal justice agencies and programs. Based on the information grantees reported to Recovery.gov, the number of projects reported has dropped slightly over the last three reporting periods because completed projects discontinue reporting. This was the case most often when funds were used for discrete equipment purchases, such as law enforcement vehicles, laptop computers in police cars, or weapons. A majority of the SAA officials we interviewed said that workload demand and personnel shortages made meeting Recovery Act-mandated deadlines within the prescribed reporting period difficult. Section 1512(c) of the Recovery Act requires that each Recovery Act award recipient submit a report no later than 10 days after the end of each quarter to the federal awarding agency. In the case of Recovery Act JAG, the federal awarding agency is DOJ. The Section 1512(c) report that Recovery Act recipients, such as Recovery Act JAG recipients, are required to submit must contain the following data: (1) the total amount of recovery funds received from the federal awarding agency; (2) the amount of recovery funds received that were expended or obligated to projects or activities; and (3) a detailed list of all projects or activities for which recovery funds were expended or obligated. All 14 SAAs we contacted said that they had the necessary systems in place to account for Recovery Act JAG funds received and that subrecipients were generally in compliance with their financial reporting requirements. Officials in 10 out of 14 SAAs in our sample specifically cited the Recovery Act’s window of reporting no later than 10 days after the end of each quarter as challenging. 
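The three required data elements and the 10-day window lend themselves to a mechanical completeness check. A minimal sketch follows; the Recovery Act specifies the elements, but the field names here are illustrative assumptions, not an actual federal reporting schema.

```python
from datetime import date, timedelta

# The three data elements a Section 1512(c) quarterly report must contain;
# the field names are illustrative assumptions.
REQUIRED_ELEMENTS = (
    "funds_received",               # total recovery funds received
    "funds_expended_or_obligated",  # amount expended or obligated
    "projects",                     # detailed list of funded projects/activities
)

def check_1512c_report(report, quarter_end, submission_date):
    """Return a list of problems: any missing required elements, plus a
    flag if the report was submitted more than 10 days after quarter end."""
    problems = [f"missing element: {name}"
                for name in REQUIRED_ELEMENTS if name not in report]
    if submission_date > quarter_end + timedelta(days=10):
        problems.append("submitted after the 10-day reporting window")
    return problems
```

For instance, a complete report submitted on July 9 for the quarter ending June 30 falls within the window, while an empty report submitted on July 15 would be flagged for all three missing elements and for lateness.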
Officials in 8 out of 14 SAAs in our sample said that meeting federal Recovery Act reporting requirements increased staff workload, and about one-third of the SAAs told us that personnel shortages created challenges in their ability to meet Recovery Act reporting deadlines. For example, officials for one county in Colorado noted that increased reporting responsibilities associated with Recovery Act JAG grants resulted in one full-time staff member spending nearly 2 full work weeks on federal oversight and reporting requirements over a 5 ½-month time frame. Officials noted that the same individual spent 16 hours on reporting requirements for a non-Recovery Act JAG award and a state pass-through award during the same time period. Furthermore, officials in Texas, New York, and Mississippi said they required additional personnel to manage Recovery Act awards and meet reporting requirements. In addition, an official in one SAA told us that because of short data collection time frames they initially submitted incomplete quarterly data and likely underreported the impact of the Recovery Act JAG program in the first two quarterly 1512(c) reports. While state and local officials we interviewed said that meeting the 1512(c) report’s 10-day time frame remains challenging, none of the states in our sample said that they were unable to meet the 1512(c) reporting deadline. In addition, the number of direct award recipients that completed the report has generally remained constant (around 800) over the three reporting quarters from October 1, 2009, to June 30, 2010. DOJ awarded over 70 percent, or more than $289 million of direct award funds, to 436 disparate jurisdictions. 
DOJ guidance states that the recipient (i.e., fiscal agent) in each disparate jurisdiction is responsible for monitoring “subawards” and for “oversight of subrecipient spending and monitoring of specific outcomes and benefits attributable to the use of Recovery Act funds by its subrecipients.” DOJ guidance provides detailed information on financial and accounting requirements for direct recipients and subrecipients of DOJ grant programs. The guidance also states that fiscal agents must implement and communicate a policy for reviewing subrecipient data. DOJ guidance, however, does not provide instruction on what a subrecipient monitoring or data policy should include; nor does it state how outcomes and benefits tied to the Recovery Act should be monitored. The DOJ Office of the Inspector General issued a report in August 2010 that included the results of grant audits it performed across 12 state and local recipients of both Recovery Act and non-Recovery Act JAG program funds. The Inspector General found that 7 of the 12 grant recipients had deficiencies in monitoring subrecipients and contractors. The Inspector General recommended that DOJ’s Office of Justice Programs provide additional training and oversight of JAG recipients to ensure that they establish policies and procedures for monitoring subrecipients’ activities to provide reasonable assurance that subrecipients administer JAG funds in accordance with program guidelines. DOJ concurred with the recommendation that it provide additional training and oversight over the monitoring of subrecipient activities, and plans to review financial training course content to ensure that proper internal control guidance on subrecipient monitoring is included. DOJ anticipates developing a training module specific to subrecipient monitoring by March 31, 2011. 
All of the SAAs we contacted (14 of 14) reported that they generally shared Recovery Act JAG information, promising practices, or lessons learned with other states and localities using a variety of techniques. Furthermore, DOJ has developed a number of programs that encourage the sharing of information and promising practices. SAA officials told us that their efforts to share information with one another or among the localities in their jurisdictions include in-person meetings, telephone calls, e-mail, Web postings, and/or hosting conferences. In addition, the SAA officials told us they find value in sharing information by attending DOJ training sessions and conferences and participating in programs and events sponsored by associations, such as the National Governors Association (NGA), the National Criminal Justice Association (NCJA), and the Council of State Governments (CSG). For example: Texas officials developed an electronic state government grant management and tracking system that they stated is helpful and efficient in managing Recovery Act JAG funds. Texas officials told us they shared the design of this online system with several states. In addition, during BJA conferences and other national training conferences, Texas officials noted that they took the opportunity to discuss with other states the promising practices and lessons learned related to grant management and the administration of JAG funds using their system. Colorado officials said that SAA staff made presentations at national and regional conferences regarding the following: (1) grant management and monitoring of state uses for effective grant administration, (2) various programs the state has funded, and (3) outcomes the state has achieved. SAA officials said that the state encourages subgrantees that have demonstrated successful programs to respond to requests for presenters at state and national conferences. 
Officials told us that staff from three Colorado Recovery Act JAG subgrantee projects made presentations at the NCJA Western Regional Conference in April 2010. For example, Colorado officials told us that one presentation involved the retraining of probation and parole officers to reduce recidivism by working with other agencies in taking an overall supportive approach to working with ex-offenders that included assistance in such areas as housing, health, and finding work. Ohio officials told us they take the initiative to contact other SAAs to discuss and share experiences, lessons learned, and promising practices regarding problems encountered in administering Recovery Act JAG grants. They also said that NCJA provides SAAs with a forum to share information and challenges associated with administering recovery funds, which Ohio has leveraged. For example, they stated that at the 2010 NCJA Mid-Western Regional Conference, which Ohio officials attended, there were sessions where SAAs shared experiences about the administration of Recovery Act funds, as well as workshops on model projects funded through the Recovery Act. According to Ohio officials, the information was helpful both in terms of planning their own initiatives and in reaffirming decisions they had made regarding Recovery Act and Recovery Act JAG programs. Illinois officials told us that they hosted a 2-day criminal justice planning summit in September 2010 for all state actors in the criminal justice system, including Recovery Act JAG practitioners, policymakers, academics, and legislators. According to SAA officials, the focus of the summit was on how to fight crime more effectively in a time of diminishing resources by using promising evidence-based practices. 
State summit planners told us that both presentations by state and national experts and workshops focused on implementing promising practices, while the emphasis in follow-up work groups was on producing a long-range criminal justice plan for the state of Illinois. In addition, SAA officials told us that they share promising practices and lessons learned by participating in regional training conferences, Web-based seminars, and/or informational conferences provided by OMB and DOJ, as well as Illinois state agencies. DOJ encourages information sharing through regional training conferences, Web sites, and Web-based clearinghouses. For example, training meetings and Webinars provide a forum that states find valuable for sharing information and promising practices, according to a majority (9 of the 14) of the states we interviewed. In addition, BJA has developed a Web site that illustrates examples of successful and/or innovative Recovery Act JAG programs. The Web site highlights JAG subgrantees and/or statewide projects that BJA believes show promise in meeting the objectives and goals of Recovery Act JAG. In particular, the site describes the planned Illinois criminal justice information strategic planning initiative and summit discussed above. Further, DOJ’s Office of Justice Programs is in the process of developing an informational Web-based clearinghouse of promising practice information for the criminal justice community through a public Web site where researchers, grant applicants, and others may find a list of model programs proven to be effective. According to DOJ officials, it will also be a site that SAAs can use to help find best practices and model programs, and thereby fund discretionary programs that show promise based on evidence. 
While the focus of the DOJ information-sharing programs is broader than Recovery Act JAG, they offer methods and mechanisms to share information related to program priorities, such as law enforcement, corrections, and technology improvement. SAA officials in a majority of the states we interviewed indicated that they were supportive of these efforts. In addition, national associations such as NGA, CSG, and NCJA encourage states to share information and promising practices. The focus of these programs is generally broader than Recovery Act JAG, but some exclusively focus on Recovery Act JAG priorities such as law enforcement, corrections, and technology improvement. For example, BJA has funded NCJA to provide on-site training and technical assistance, Webinars, and regional conferences, and to create and disseminate publications to assist SAAs in developing their statewide criminal justice plans and ensuring effective use of Recovery Act JAG funds. NCJA also serves as an information clearinghouse on innovative programming from across the nation and coordinates information sharing for the justice assistance community. DOJ developed and implemented 86 new performance measures for the Recovery Act JAG program in 2009 and continues to make efforts to improve them, but the current set of performance measures varies in the degree to which it includes key characteristics of successful performance measurement systems. According to DOJ officials, these performance measures are currently being refined in consultation with stakeholders, such as SAAs and the external contractor hired to maintain the PMT. We acknowledge that creating such measures is difficult, given that the performance measurement system is under development, but until these measures are refined, they could hinder the department’s ability to assess and communicate whether the goals of the Recovery Act JAG program are being achieved. 
In addition, states conveyed mixed perspectives about the utility of DOJ’s performance measurement tool, which enables recipients to self-identify activities associated with their grant and then self-report on the relevant set of performance measures under each activity. DOJ has not yet completed development of a mechanism to verify the accuracy of this recipient-reported information in the PMT. From the more than 80 Recovery Act JAG performance measures, we analyzed a nonprobability sample of 19 (see app. II) and found several areas where the measures could better reflect the characteristics that our prior work has shown to support successful assessment systems (see app. III). For example, the 19 Recovery Act JAG performance measures we reviewed generally lacked, to varying degrees, several key attributes of successful performance measurement systems, such as clarity, reliability, linkages with strategic or programmatic goals, objectivity, and the measurability of targets. DOJ officials acknowledge the limitations of the current system and are undertaking efforts to refine Recovery Act JAG performance measures. As we have previously reported, performance measures that evaluate program results can help decision makers make more informed policy decisions regarding program achievements and performance. By incorporating key attributes of successful performance measurement systems into its performance measure revisions, DOJ could facilitate accountability, be better positioned to monitor and assess results, and subsequently improve its grants management. Table 6 describes 5 of 9 key characteristics of successful assessment systems and the potentially adverse consequences agencies face when omitting these attributes from their measurement design. These 5 characteristics—clarity, reliability, linkage to strategic goals, objectivity, and measurable targets—are attributes that may be most effectively used when reviewing performance measures individually. 
There are 4 others—governmentwide priorities, core program activities, limited overlap, and balance—that are best used when reviewing a complete set of measures. Since we selected a nonprobability sample of 19 measures that were most closely associated with the majority of expenditures, we focused our analysis on the 5 attributes that could be applied to individual measures and did not assess the sample for the other 4 attributes that are associated with an evaluation of a full set of measures. Nevertheless, these 4 attributes also can provide useful guidance when establishing or revising a set of performance measures as a whole. In conducting our analysis, we applied the 5 characteristics most applicable to assessing individual measures to the 19 measures in our nonprobability sample. Our analysis found that 5 of the 19 measures were clearly defined but the remaining 14 were not, which is inconsistent with DOJ’s guidance to grant recipients for assessing program performance. In particular, DOJ advises that states’ grant programs should have performance measures with “clearly specified goals and objectives.” In addition, 14 of the 19 measures were not linked to DOJ’s strategic or programmatic goals. We also found that while 9 out of the 19 measures were objective, 13 out of 19 were not reliable, and 17 out of the 19 measures did not have measurable targets. In addition to our analysis, we provided a standard set of questions to officials across our sample states seeking their perspectives on how effectively the Recovery Act JAG performance measures evaluate program results. These officials provided their comments about the PMT and raised concerns that the performance measures lack clarity, reliability, and linkage to strategic goals. From our analysis we determined that 14 out of the 19 measures we analyzed lacked sufficient descriptive detail to facilitate precise measurement. 
For example, our analysis found that 1 of DOJ’s measures associated with evaluating personnel activities is the “percent of departments that report desired efficiency.” However, the definition DOJ provides in its guidance for this measure lacks key details that would make the measure clearer—namely, which departments should be included in the measure and how states and localities should interpret “desired efficiency.” In addition, officials we interviewed from 9 of the 14 SAAs in our sample stated that DOJ’s Recovery Act JAG performance measures were unclear. Some examples of states’ perspectives follow: In particular, an official from the Texas SAA told us that Texas refined its state data collection tool to clarify performance measure guidance and eliminate instances where DOJ rejected data entries because the measure was not clear. As another example, according to Texas officials, one of the DOJ performance measures related to training is “Other forms of training conducted during the reporting period.” However, Texas state officials noted that BJA did not clarify whether this measure would include non-Recovery Act training. As a result, the Texas state data collection tool revised the performance measure for better context and asked for “the number of other forms of training conducted during the reporting period and paid with ARRA JAG funds.” Other state officials from Michigan and Georgia cited challenges in understanding what is being asked by the 13 measures listed under the activity type, “state and local initiatives.” In particular, one of these states noted confusion and lack of clarity related to the measure, “number of defined groups receiving services,” since in many instances their initiatives were associated with equipment purchases, and it would be difficult to determine who and how many benefited from a new computer system or the acquisition of new ammunition, for example. 
Ohio and Pennsylvania state officials noted that DOJ uses terminology such as “efficiency” and “quality” that is not clearly defined. Officials we interviewed from another five states stated that they could not determine whether the term “personnel” should include the entire agency or department that was awarded the Recovery Act JAG grant or only the portion of staff within a department that is directly affected by the funding. When we discussed with DOJ officials our concerns that the performance measure definitions at times lacked clarity, they stated that each was defined, but that further work was being done to solicit feedback from grantees on the measures and their definitions. However, as discussed above, our analysis determined that 14 of the 19 measures do not have clear definitions. DOJ officials noted that the department hosts several training opportunities designed to give grantees opportunities for clarification, including two Webinars every quarter and ongoing field training. DOJ officials also explained that they hired an external contractor to operate the PMT Help Desk, which provides guidance to grantees from 8:30 a.m. to 5:00 p.m. eastern time. However, officials from three states we contacted noted that while the PMT Help Desk provided useful technical assistance, it provided limited guidance to clarify the definitions of performance measures. Therefore, officials from these states reported being confused about what to report. In July 2010, we reported that a measure that is not clearly stated can confuse users and cause managers or other stakeholders to think that performance was better or worse than it actually was. Our analysis showed that 13 of the 19 measures could lead to unreliable findings because respondents could interpret and report on the measures inconsistently. 
A performance measure is considered reliable when it is designed to collect data or calculate results such that each time the measure is applied—in the same situation—a similar result is likely to be reported. Respondents’ inconsistent interpretation of the measures could preclude using many of the measures as indicators of performance. For example, we found that one measure, “the percent of departments that report desired efficiency,” was measured and reported on differently by different recipients. According to SAA officials in one state, different police department units in a single large metropolitan area counted themselves as separate departments, while according to SAA officials in another state, all police department units were counted collectively as one. In another state, SAA staff stated that BJA’s guidance document for the Recovery Act JAG performance measures did not provide enough instruction to ensure that agencies reported the correct data. For example, the staff said they could not determine whether the PMT measure for “the number of personnel retained with Recovery Act JAG funds during the reporting period” was to include any personnel position paid for with Recovery Act JAG funds during the reporting period, or to represent an unduplicated count of personnel positions retained with Recovery Act JAG funds during the reporting period. Given the confusion, the officials sought and received guidance from the Help Desk on how to interpret and report the measure. Further, officials from 4 of the 14 SAAs in our sample expressed concern about possible inconsistent data entry among the subrecipients of their pass-through grants. For example, officials from Ohio noted that since subrecipients had their own interpretations of how to report on the measures, they believed that there would be a lack of consistency and reliability within the state as well as across all states once BJA attempted to aggregate the responses. 
In addition, a related issue is how DOJ validates the information states and localities submit in order to ensure that the results the department reports are accurate and reliable. We have previously reported that weaknesses in monitoring processes for verifying performance data can raise concerns about the accuracy of the self-reported data received from grantees. We also reported that if errors occur in the collection of data or the calculation of results, they may affect conclusions about the extent to which performance goals have been achieved. For example, self-reported performance information that is not reported accurately could provide data that are less reliable for decision making. DOJ officials acknowledged that they have not verified the accuracy of states’ and localities’ self-reported performance data. They told us they have been meeting with their contractor to review a draft verification and validation plan, but have not yet implemented a system to verify and validate grantees’ performance data or to conduct data reliability checks on the performance measures in the PMT. DOJ officials also attributed their challenges in ensuring data integrity to limited resources, stating that they lack adequate full-time staff to improve, develop, and implement performance measures at this time. Specifically, DOJ officials told us that they rely on a contractor because they have only one staff person overseeing states’ and localities’ completion of the measures and improving and developing the tool. Until a data verification process is in place, DOJ could experience difficulty in ensuring that performance results are reported reliably across state and local grant recipients. DOJ communicated specific Recovery Act goals, such as jobs created or retained, to recipients, but did not provide information on how its Recovery Act JAG performance measures aligned with programmatic or strategic goals. 
Our analysis showed that 5 of the 19 measures were linked to Recovery Act goals. For example, DOJ recently included a performance measure for Recovery Act jobs reporting, which is the “number of personnel retained with Recovery Act JAG funds.” The remaining 14 measures lacked a clear linkage to any of DOJ’s goals. For example, 1 of the measures related to the activity type “information systems” is the “percent of departments that completed improvements in information systems for criminal justice.” However, DOJ does not explain how the performance measure for “improvements to information systems for criminal justice” relates or links to agencywide goals. When we asked DOJ officials to describe how the Recovery Act JAG performance measures align with broader departmental goals, they explained that the JAG authorizing legislation guides the states’ use of the funds within the seven general purpose areas for JAG and that they do not link these purpose areas to current year DOJ goals. However, DOJ officials explained that Recovery Act JAG performance measures are linked to the department’s strategic goal 2, “Prevent Crime, Enforce Federal Laws, and Represent the Rights and Interests of the American People,” and strategic goal 3, “Ensure the Fair and Efficient Administration of Justice.” DOJ officials did not provide written documentation or guidance to Recovery Act JAG recipients that explained this linkage to facilitate understanding of how performance measures were being used consistently with DOJ’s strategic and programmatic goals. Further, with the exception of Recovery Act goals, officials from all 14 of the SAAs noted that they did not see a direct linkage between the Recovery Act JAG performance measures and DOJ’s overall agencywide goals. 
As we have previously reported, successful organizations try to link specific performance goals and measures to the organization’s overall strategic goals and, to the extent possible, have performance goals that will show annual progress toward achieving their long-term strategic goals. In addition, we have previously reported that, without performance measures linked to goals on the results that an organization expects the program to achieve, several consequences can occur: (1) managers may be held accountable for performance that is not mission critical or at odds with the mission, and (2) staff will not have a road map to understand how the measures support overall strategic and operating goals. In our assessment, we determined that 9 out of the 19 measures were objective. We previously reported that to be objective, performance measures should (1) be reasonably free of significant bias; and (2) indicate specifically what is to be observed, in which population or conditions, and in what time frame. An example of a BJA performance measure that we determined is objective is the measure “amount of Recovery Act JAG funds used to purchase equipment and/or supplies during the reporting period.” This measure provides a specific time frame in which expenditures for equipment and/or supplies must have occurred and clearly explains that the amount of funds used for purchasing equipment and/or supplies is what should be reported. An example of a BJA performance measure that we determined lacks objectivity is the measure “percent of staff that directly benefit from equipment or supplies purchased by Recovery Act JAG funds, who report a desired change in their job performance.” We determined that this measure lacks objectivity because it does not indicate specifically what is to be observed, in which population, and in what time frame, and is not free from opinion and judgment. 
For example, it requires those reporting to subjectively determine which staff members directly benefit from an equipment or supplies purchase and which staff members do not. It also requires a subjective determination of how the purchase of equipment or supplies affected a desired change in the performance of staff members who directly benefited from the purchase. When we discussed the issue of objectivity with DOJ, officials stated that BJA instructs grantees to report only on BJA-funded activities that occurred during the reporting period. However, they conceded that the measures were open to interpretation and that this was a weakness, but suggested that it was the best option given the need to have universal measures that apply to a broad range of uses. We do not agree that all the measures we reviewed were defined sufficiently to prevent subjective interpretation. In addition, Texas officials expressed concern that DOJ will not be able to obtain useful data from the PMT because of the subjective interpretation involved in responding to certain of the Recovery Act JAG performance measures. For example, Texas officials identified responses to questions such as the “percent of departments that report desired program quality” or “percent of staff who reported an increase in skills” as illustrative of the kinds of questions that are open to wide interpretation based on the size of the law enforcement organization and the classification of individuals within the organization. In our assessment, we determined that 17 out of the 19 measures lacked measurable targets. For these 17 measures, the absence of measurable targets meant that outside of their original application the award recipients did not have the opportunity to establish in advance what their target level of performance would be to allow for comparisons to actual performance achieved for the reporting period covered. 
For example, in the measure “Number of overtime hours paid with Recovery Act JAG funds,” BJA did not design the measure to allow award recipients to specify their target number of hours paid prior to receiving funding. DOJ did recognize that the “project objectives,” i.e., the funded activities, should be linked to meaningful and measurable outcomes associated with the Recovery Act and that the likelihood of achieving such outcomes be assessed. For example, language in the Recovery Act JAG application instructions requires that, where possible and appropriate, an estimate of the number of jobs created and retained be developed. In addition, the Recovery Act JAG application for funds also requires that the narrative include performance measures established by the organization to assess whether grant objectives are being met and a timeline or plan to identify when the goals and objectives are completed. However, measurable targets against which to benchmark results are not explicitly required in the narrative. As noted, two measures did include measurable targets, and as such will facilitate future assessments of whether overall goals and objectives are achieved because comparisons can easily be made between projected performance and actual results. For example, in these two measures—“the change in the number of individuals arrested in a targeted group by crime type” and “the change in reported crime rates in a community by crime type”—DOJ provides a list of expectations, such as “we expected number of individuals arrested to increase as a result of our efforts” or “we expected number of individuals arrested to decrease as a result of our efforts,” from which the department expects respondents to choose, to facilitate comparison between the actual and expected number of arrests and reported crimes during a particular quarter. 
State officials had mixed perspectives on the PMT and the Recovery Act performance measures, with some critiquing the tool even as they acknowledged its utility in principle. For example, five SAAs noted that DOJ’s measures were in development and acknowledged the difficulty for DOJ in developing a tool that could be used nationwide for assessing outputs and outcomes across multiple programs. They also were hopeful that the tool would increase uniform program data collection and allow for meaningful comparisons of data and outcomes across states and different jurisdictions. State officials also had positive comments about DOJ’s Help Desk and the staff who provided technical support for the use of the tool. In addition, while eight states were silent on the issue, state officials from the remaining seven states stressed that reporting on the Recovery Act JAG performance measures is time-consuming and duplicative of other existing state performance measurement reporting systems. For example, officials from Colorado, Pennsylvania, and Illinois had concerns about limited staff availability to manage the workload associated with meeting both Recovery Act and PMT reporting requirements. Specifically, officials stated that they have to monitor subrecipient activities and provide monthly and quarterly information—as well as validate jobs reporting through payroll, expenses, and timesheets—to ensure that job counts are calculated accurately and consistently. In other examples, officials from Colorado and Iowa expressed concern that the PMT duplicates their existing state performance measurement systems with similar measures and results in duplication of effort. In addition, the burden of complying with both BJA and state requirements led some states, such as Michigan, Ohio, and Texas, to eliminate some of their state performance systems even though officials told us that they believed these systems measured performance outcomes better than the PMT performance measures. 
For example, Michigan state officials explained that their preexisting state quarterly performance reports provided specific data on grant outcomes that were of interest to state legislators and policymakers, and which are not included in the PMT performance measures. In particular, Michigan’s state performance system included measures related to drug courts, such as the number of drug-free babies that are born to participants. Under the Recovery Act, the JAG program made available nearly $2 billion in additional funds for states and local governments, which states and localities reported using primarily for law enforcement activities while also maintaining some programs that would have been eliminated or cut. Although reporting challenges remain with regard to the Recovery Act itself, states and localities took steps to share information about promising practices funded through JAG, and DOJ has measures in place to facilitate such information sharing. In addition, the new performance measures that DOJ has developed capture information on the use of Recovery Act JAG funds. However, while DOJ’s performance measures include attributes of successful measures, further improvements are possible. Because the Recovery Act JAG program supports a wide array of activities, as well as the personnel to implement them, having clear performance measures that allow grant recipients to demonstrate results would provide useful information to DOJ regarding how Recovery Act JAG funds are being used. Our previous work has identified key attributes of successful performance measurement systems that would help assess progress and make performance information useful for key management decisions. According to the sample we reviewed, DOJ’s performance measures do not consistently exhibit key attributes of successful performance measurement systems, such as clarity, reliability, linkage, objectivity, and measurable targets. 
Measures that are not clearly stated can confuse users and cause managers or other stakeholders to think that performance was better or worse than it actually was. The lack of data reliability can create challenges in ensuring that accurate information is recorded for performance purposes. Further, the lack of measurable targets also limits the ability to assess program performance and provides limited information to Congress about the success of the program. Moreover, successful organizations try to link performance goals and measures to the organization’s strategic goals and should have performance goals that will show annual progress toward achieving long-term strategic goals. In addition, by establishing a mechanism to verify the accuracy of self-reported data, DOJ can better ensure the reliability of the information that is reported. By addressing attributes consistent with promising performance measurement practices as it works to revise its performance measures, DOJ could be better positioned to determine whether Recovery Act JAG recipients’ programs are used to support all seven JAG program purposes and are meeting DOJ and Recovery Act program goals. Recognizing that DOJ is already engaged in efforts to refine its Recovery Act JAG performance measures in the PMT, we recommend that the Acting Director of the Bureau of Justice Assistance take the following two actions to better monitor Recovery Act JAG program performance and demonstrate results through use of this instrument: (1) in revising the department’s Recovery Act JAG performance measures, consider, as appropriate, key attributes of successful performance measurement systems, such as clarity, reliability, linkage, objectivity, and measurable targets; and (2) develop a mechanism to validate the integrity of Recovery Act JAG recipients’ self-reported performance data. We provided a draft of this report to DOJ for review and comments. DOJ provided written comments on the draft report, which are reproduced in full in Appendix VII. 
DOJ concurred with the recommendations in the report and stated that BJA plans to take actions that will address both of our recommendations by October 1, 2011. Specifically, in response to our first recommendation that DOJ revise the Recovery Act JAG performance measures to consider, as appropriate, key attributes of successful performance measurement systems, DOJ stated that BJA is taking steps to revise the Recovery Act JAG performance measures—in conjunction with State Administering Agencies—and that it specifically will consider clarity, reliability, linkage, objectivity, and measurable targets in redesigning its performance measures. In response to our second recommendation relating to data quality, DOJ stated that BJA will develop and implement a mechanism to validate the integrity of Recovery Act JAG recipients’ self-reported performance data. DOJ also provided technical comments on a draft of this report, which we incorporated as appropriate. We are sending copies of this report to the Attorney General, selected congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact David Maurer at (202) 512-9627 if you or your staff have any questions concerning this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VIII. This report addresses the following four questions: (1) How are Recovery Act Justice Assistance Grant (JAG) funds awarded and how have recipients in selected states and localities used their awards? (2) What challenges, if any, have Recovery Act JAG recipients reported in complying with Recovery Act reporting requirements? 
(3) To what extent do states share promising practices related to the use and management of Recovery Act JAG funds, and how, if at all, does the Department of Justice (DOJ) encourage information sharing? (4) To what extent are DOJ’s Recovery Act JAG performance measures consistent with promising practices? As agreed with your office, we focused our review on Recovery Act JAG grants in a nonprobability sample of 14 states. The grants made to these states included both direct awards that DOJ made to State Administering Agencies (SAAs) and localities, as well as pass-through awards SAAs made to localities. A portion of this work was done in conjunction with our other Recovery Act reviews, which focused on 16 states and the District of Columbia that together represent the majority of Recovery Act spending. The 16 states were Arizona, California, Colorado, Florida, Georgia, Illinois, Iowa, Massachusetts, Michigan, Mississippi, New Jersey, New York, North Carolina, Ohio, Pennsylvania, and Texas. We selected these states and the District of Columbia on the basis of federal outlay projections, percentage of the U.S. population represented, unemployment rates and changes, and a mix of states’ poverty levels, geographic coverage, and representation of both urban and rural areas. Collectively, these states contain about 65 percent of the U.S. population and are estimated to receive about two-thirds of the intergovernmental assistance available through the Recovery Act. However, for the purposes of this report, we limited our scope to a subset of 14 of these states so as not to duplicate ongoing work that the DOJ Office of Inspector General was conducting in the other 3 jurisdictions (Florida, New Jersey, and the District of Columbia). The awards to these 14 states accounted for approximately 50 percent of all Recovery Act JAG funds provided. 
To identify how recipients of direct and pass-through funds received and used their Recovery Act JAG awards in selected states and localities, we conducted in-person and telephone interviews with officials from SAAs in all 14 states as well as officials from a nonprobability sample of 62 localities in these states. Where statements are attributed to state and local officials, we did not analyze state and locality data sources but relied on state and local officials and other state sources for relevant state data and materials. We selected these localities based on the amount of their grant awards, the activities that they were undertaking with grant funds, whether they reported that they had completed 50 percent or more of their grant activities according to their responses provided in Recovery Act reporting, and how they received their funds (either as funding passed through from their SAA or as awards received directly from DOJ—and in some cases as part of disparate jurisdictions). Our interviews addressed the use and perceived impact of Recovery Act JAG funds, program performance measurement and reporting challenges, and sharing of promising practices. Also, we reviewed DOJ direct award data and SAA pass-through awards in the 14 SAAs. We also reviewed Recovery Act quarterly reports from Recovery.gov (4th quarter 2009, 1st quarter 2010, and 2nd quarter 2010) to identify additional information on the use of JAG funds. Based on this information, we assigned each grant to one of the seven JAG general purpose areas; where multiple purposes were indicated, the grant was so identified, and in cases where a purpose could not be identified we placed the grant in the category of “not enough information.” We collected and used these funding data because they are the official source of Recovery Act spending. Based on our limited examination of the data thus far, we consider them to be sufficiently reliable for our purposes. 
Findings from our nonprobability samples cannot be generalized to all states and localities that were recipients of Recovery Act JAG funds; however, our samples provided us with illustrative examples of uses of funds, oversight processes, and reporting issues. To determine the extent to which Recovery Act JAG recipients faced challenges in complying with Recovery Act requirements, we interviewed representatives from the 14 SAAs and 62 localities and asked them about their experience with 1512(c) reporting requirements and Office of Management and Budget (OMB) guidance. In addition, we reviewed our previous reports that discuss Recovery Act recipient reporting issues. To identify how states share promising practice information, and the extent to which DOJ encourages information sharing, we conducted in-person and telephone interviews with representatives from all 14 of the SAAs. We also reviewed DOJ information, interviewed DOJ officials, and consulted reports from the National Criminal Justice Association, the National Governors’ Association, and others that describe their information-sharing activities. To identify the extent to which DOJ’s performance measurement approach is consistent with promising practices to assess progress, we interviewed representatives from the 14 SAAs and 62 localities and asked them about their experience with the Performance Measurement Tool (PMT). We also discussed the PMT’s design and Recovery Act JAG performance measure improvement efforts with DOJ staff. Further, we conducted a review of the performance measures that were required for use under the Recovery Act JAG activities commonly reported to have been undertaken by the grant recipients in our sample. 
From the 86 Recovery Act JAG performance measures under 10 activity types, we analyzed a nonprobability sample of the 19 performance measures required under 4 of the activity areas (Personnel; Equipment and Supplies; Information Systems for Criminal Justice; and the category Outcomes for all Activity Types). We selected these activity types and measures because they were the ones associated with the largest share of reported Recovery Act JAG expenditures and therefore most often encountered by the grant recipients. We then assessed these measures against a set of key characteristics associated with promising practices and successful performance measures that we have identified in our previous work. Some of the 9 key characteristics of successful performance measures are attributes that may be most effectively used when reviewing performance measures individually, and some are best used when reviewing a complete set of measures. Since we selected a nonprobability sample of measures that was most closely associated with the majority of expenditures, we focused our analysis most heavily on those attributes that could be applied to individual measures—clarity, reliability, linkage to strategic goals, objectivity, and measurable targets. We did not assess the subset of 19 performance measures for the attributes of governmentwide priorities, core program activities, limited overlap, or balance, which are associated with an evaluation of a full set of measures. To evaluate the sample, four analysts independently assessed each of the performance measures against the attributes of successful performance measures previously identified by GAO. Those analysts then met to discuss and resolve any differences in the results of their analysis. 
In conducting this analysis, we analyzed program performance measure information contained in DOJ’s Performance Measurement Tool for American Recovery and Reinvestment Act (Recovery Act - ARRA) and fiscal year 2009 Justice Assistance Grant Programs. We did not do a detailed assessment of DOJ’s methodology for developing the measures, but looked at the issues necessary to assess whether a particular measure met the overall characteristics of a successful performance measure. We also reviewed our previous reports that discuss the importance of performance measurement system attributes and obtained information on the extent to which such systems may impact agencies’ planning. The activity types and number of measures selected are listed in table 7. We conducted this performance audit from January 2010 through October 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The following table contains the 19 Performance Measurement Tool (PMT) performance measures that were required for use under the Recovery Act JAG activity types commonly undertaken by the grant recipients in our sample. Department of Justice (DOJ) records indicate that all 14 of the states in our sample have drawn down the vast majority of their Recovery Act Justice Assistance Grant (JAG) awards as of May 2010. Specifically, the amounts drawn down range from less than 53 percent to almost 98 percent. Table 10 shows the amount and percentage of these funds that have been drawn down and expended by State Administering Agencies (SAAs), their subrecipients, and localities. 
The following table illustrates the types of equipment purchases recipients within our 14 sample states have made using Recovery Act Justice Assistance Grant (JAG) funds. This appendix provides the full printed text of the interactive content in figure 4 on page 22 in the body of the report. Specifically, the following figures describe planned uses of Recovery Act Justice Assistance Grant (JAG) funds by each State Administering Agency (SAA) across our 14 sample states, which are listed in alphabetical order by state name.

According to state officials, without Recovery Act funds, the state faced budget cuts and would have had to severely cut or discontinue at least half of the projects previously funded with JAG money. In particular, about $20.8 million in Recovery Act JAG funds supported drug task forces, and these drug task forces helped account for seizures of 847,665 grams of cocaine; 49,586 grams of heroin; 206,713 grams of methamphetamine; and 305,082 pounds of marijuana in 2008.

Prosecution and courts: $11,074,062
Program planning, evaluation, and technology improvement
Crime victim and witness programs: $1,265,348

According to state and local officials, Recovery Act JAG supported local gang and drug reduction efforts, helped prevent human trafficking, facilitated a regional approach to reducing methamphetamine production and distribution, and helped develop communications infrastructure.

Prosecution and courts: $11,981,362
Crime prevention and education: $835,678
Drug treatment and enforcement: $44,254,215
Program planning, evaluation, and technology improvement: $131,213
Crime victim and witness programs: $1,858,242

State officials noted that Recovery Act JAG helped maintain services in corrections, such as support for problem youth and adult offenders and prison treatment programs, that faced cuts given the state’s revenue shortfalls and budget reductions. 
In addition, local officials stated that Recovery Act JAG helped support jobs and purchase equipment that otherwise would have been eliminated or gone unfunded.

Prosecution and courts: $1,972,990
Crime prevention and education: $1,557,764
Drug treatment and enforcement: $2,252,813
Program planning, evaluation, and technology improvement: $2,173,632
Crime victim and witness programs: $381,322

According to state and local officials, Recovery Act JAG funds helped support jobs, including retaining public safety personnel, and continue delivery of services, such as drug court services, drug prevention, and victims’ assistance. In addition, Savannah Police Department officials noted that Recovery Act JAG funds were used to purchase a fully “patrol-certified” Belgian Malinois breed canine to assist with recovery of stolen items, searching for suspects and missing persons, and tracking narcotics.

Prosecution and courts: $8,570,732
Crime prevention and education: $185,797
Drug treatment and enforcement: $233,962
Program planning, evaluation, and technology improvement: $1,468,394
Crime victim and witness programs: $2,138,127

According to state and local officials, Recovery Act JAG funds helped purchase law enforcement equipment, such as in-car video systems, that would have gone unfunded. Support for other programs and services includes, for example, support for overtime wages of law enforcement agents, mentoring programs and drug treatment programs, domestic violence programs, and specialty courts for nonviolent, repeat offenders.

Prosecution and courts: $8,142,570
Crime prevention and education: $5,671,274
Drug treatment and enforcement: $452,965
Program planning, evaluation, and technology improvement: $4,122,386
Crime victim and witness programs

Officials in Boone City, Iowa, have used a portion of their Recovery Act JAG award to institute cross-training of some employees in the city’s police and fire departments. 
Under the city’s public safety umbrella philosophy, some employees in the city’s police and fire departments receive training in firefighting, emergency response, and law enforcement. Those who receive this “cross-training” are known as public safety employees and can respond to any type of incident where a police officer or firefighter is needed. Officials said that this type of cross-training has allowed the city to do more with limited resources.

Crime prevention and education: $464,214
Drug treatment and enforcement: $7,540,845
Program planning, evaluation, and technology improvement: $36,296
Crime victim and witness programs

According to local officials, Recovery Act JAG funds helped supplement current state public safety programs, retain jobs, and support core services, including supporting local police departments through funding officer and crime analyst salaries in localities adversely affected by local budget conditions.

Crime prevention and education: $3,100,000
Program planning, evaluation, and technology improvement: $599,672
Crime victim and witness programs

The Ottawa County Police Department used its Recovery Act JAG funds to purchase equipment for law enforcement purposes. The department purchased a 20-foot patrol boat, a fingerprint and jail mug-shot system, and global positioning satellite (GPS) tracker devices. The patrol boat replaces a nearly 20-year-old boat in need of major maintenance. The fingerprint and jail mug-shot system improves efficiency by enabling the department to match potential suspects against the state’s criminal databases. The GPS tracker devices have helped the department retrieve numerous stolen items and have provided evidence useful in the prosecution of defendants.
Prosecution and courts: $14,270,111
Crime prevention and education: $1,067,558
Program planning, evaluation, and technology improvement: $1,511,762
Crime victim and witness programs

According to state and local officials, Recovery Act JAG funds helped support jobs to manage the state JAG program and supported local police departments by filling positions, retaining other positions, and funding overtime to provide increased patrols and surveillance. JAG funds will support a variety of programs, including multijurisdictional task forces, victim witness assistance, juvenile justice, drug courts, family violence, and increased law enforcement training. Recovery Act JAG funds were also used to purchase law enforcement equipment, including crime lab equipment, computers, police cruisers, and integrated software for patrol car laptops.

Prosecution and courts: $825,000
Crime prevention and education: $200,000
Drug treatment and enforcement: $2,625,320
Program planning, evaluation, and technology improvement: $2,619,462
Crime victim and witness programs

According to state and local officials, Recovery Act JAG funds supported the implementation of recent drug law reform, including helping assistant district attorneys reduce the number of prison commitments, and continued recidivism pilot programs. New York City officials estimate that JAG funds enabled New York City to retain 158 jobs that would otherwise have been eliminated due to budget cuts and helped create 51 new jobs.

Prosecution and courts: $9,586,534
Drug treatment and enforcement: $16,740,000
Program planning, evaluation, and technology improvement: $2,100,000
Crime victim and witness programs

The Rutherford County Sheriff’s Department used its share of Recovery Act JAG funds to purchase a tactical vehicle for its officers to use when responding to volatile situations.
The vehicle replaces an old 1986 Ford van that subjected officers to unnecessary risk; it can accommodate a team of up to 16 officers as well as store equipment, such as weapons and bullet-resistant vests. The department also purchased portable surveillance equipment that can be thrown or rolled into a room and can provide a 360-degree view, enabling officers to identify any potential threats before entering a risky environment.

Prosecution and courts: $577,951
Crime prevention and education: $4,035,331
Program planning, evaluation, and technology improvement: $22,242,265
Crime victim and witness programs

According to state and local officials, without Recovery Act JAG funds, law enforcement agencies would have faced massive layoffs. Additional funds were also used to support the purchase of law enforcement equipment, such as a license plate reader.

Prosecution and courts: $2,805,401
Crime prevention and education: $4,593,430
Drug treatment and enforcement: $934,406
Program planning, evaluation, and technology improvement: $3,590,904
Crime victim and witness programs: $2,676,585

State and local officials noted that Recovery Act JAG funds supported regional antidrug task forces, juvenile programs, and initiatives such as records management improvement, prisoner re-entry programs, and at-risk youth employment programs.

Prosecution and courts: $3,626,239
Crime prevention and education: $5,522,163
Program planning, evaluation, and technology improvement: $4,838,141
Crime victim and witness programs: $3,930,520

According to state and local officials, Recovery Act JAG funds largely helped support equipment purchases and technology improvements, as well as law enforcement personnel, especially police officer overtime.

In addition to the contact named above, Joy Gambino, Assistant Director, managed this assignment. Dorian Dunbar, George Erhart, Richard Winsor, and Yee Wong made significant contributions to the work.
Geoffrey Hamilton provided significant legal support and analysis. Elizabeth Curda and Cindy Gilbert provided significant assistance with design and methodology. Adam Vogt and Linda Miller provided assistance in report preparation, and Tina Cheng made contributions to the graphics presented in the report.
Under the American Recovery and Reinvestment Act of 2009 (Recovery Act), the U.S. Department of Justice's (DOJ) Bureau of Justice Assistance (BJA) awarded nearly $2 billion in 4-year Edward Byrne Memorial Justice Assistance Grant (JAG) funds to state and local governments for criminal justice activities. As requested, GAO examined: (1) how Recovery Act JAG funds are awarded and how recipients in selected states and localities used their awards; (2) challenges, if any, selected recipients reported in complying with Recovery Act reporting requirements; (3) the extent to which states shared promising practices related to use and management of funds, and how, if at all, DOJ encouraged information sharing; and (4) the extent to which DOJ's JAG Recovery Act performance measures were consistent with promising practices. GAO analyzed recipient spending and performance data submitted as of June 30, 2010; interviewed officials in a nonprobability sample of 14 states and 62 localities selected based on the amount of their awards, planned activities, and their reported project status; assessed 19 JAG performance measures against a set of key attributes; and interviewed agency officials.

Recipients of Recovery Act JAG funding in the 14 states GAO reviewed received more than $1 billion either through direct allocations from DOJ or through an indirect "pass-through" of funds that states originally received from the department. These recipients reported using their funds for a variety of purposes, though predominantly for law enforcement and corrections, which included equipment purchases or the hiring or retaining of personnel. More than half of the funding that state administering agencies (SAAs) passed through to localities was reported to be specifically for law enforcement and corrections activities, while localities receiving direct awards more often reported planning to use their funds for multiple types of criminal justice activities.
Officials in all 14 states and 19 percent of localities in GAO's sample (12 of 62) said that without Recovery Act JAG funding, support for certain ongoing local law enforcement programs or activities would have been eliminated or cut. Overall, about $270 million or 26 percent of Recovery Act JAG funds had been reported as expended as of June 30, 2010, but the expenditure rates of funds awarded through SAAs showed considerable variation, ranging from 5 to 41 percent of SAAs' total awards.

State officials cited challenges in meeting quarterly Recovery Act reporting time frames. Officials from the majority of states in GAO's sample said that workload demands and personnel shortages made meeting Recovery Act deadlines within the prescribed reporting period difficult; however, all states reported that they were able to do so.

States reported sharing information and promising practices related to JAG activities in a variety of ways, and DOJ encouraged this sharing through a number of programs. More than half of state agencies in GAO's sample generally reported sharing promising practices or lessons learned on topics, such as grant management and administration, with other states and localities through participating in law enforcement and government association conferences, DOJ training, and Web postings, among other methods.

DOJ established new performance measures to assess the Recovery Act JAG program and is working to refine them; however, these measures lack key attributes of successful performance assessment systems that GAO has previously identified, such as clarity, reliability, a linkage to strategic or programmatic goals, and objectivity and measurability of targets. Including such attributes could facilitate accountability and management's ability to meaningfully assess and monitor Recovery Act JAG's results. DOJ officials acknowledge that weaknesses exist and they plan to improve their performance measures.
For example, the department already took initial steps to incorporate feedback from some states with regard to clarifying the definitions of some performance measures; however, its assessment tool lacks a process to verify the accuracy of the data that recipients self-report to gauge their progress. By including attributes consistent with promising practices in its performance measures, DOJ could be better positioned to determine whether Recovery Act JAG recipients' programs are meeting DOJ and Recovery Act goals. In addition, by establishing a mechanism to verify the accuracy of recipient reports, DOJ can better ensure the reliability of the information that recipients provide. GAO recommends that DOJ (1) continue to revise Recovery Act JAG performance measures and consider, as appropriate, including key attributes of successful performance measurement systems, and (2) develop a mechanism to validate the integrity of self-reported performance data. DOJ concurred with these recommendations.
In the U.S., while BRT projects vary in design, they generally include service enhancements designed to attract riders and provide transit-related benefits similar to those of rail transit. Specifically, as shown in figure 1, BRT generally includes improvements to seven features: running ways, stations, vehicles, intelligent transportation systems, fare collection, branding, and service. These enhancements are designed to replicate features found in rail transit and provide similar benefits, including increases in ridership, travel time savings, and contributions to economic development. While few existing studies have examined the link between BRT and economic development, numerous studies have investigated the link between rail transit and economic development. We have previously reported that, overall, these studies have shown that the presence of rail transit tends to positively impact surrounding land and housing values. However, in some cases the increases are modest, and the impact throughout an entire system can vary depending on several characteristics. For instance, retail development, higher relative incomes, and proximity to job centers, parks, or other neighborhood amenities tend to increase land and housing values near transit, while non-transit-oriented land uses, crime, and poor economic environments around a transit station can limit increases or even be a negative influence. In the U.S., multiple federal funding sources have supported BRT systems. FTA’s Capital Investment Grant program provides capital funds to help project sponsors build larger-dollar new fixed-guideway transit capital systems or extensions to existing systems—often referred to as “New Starts” projects. In 2005, SAFETEA-LU established the Small Starts program within the Capital Investment Grant program; the Small Starts program simplifies the New Starts evaluation and rating criteria and the steps in the project development process for lower-cost projects.
It also added corridor-based bus systems as eligible projects. According to FTA’s guidance, BRT projects must (1) meet the definition of a fixed-guideway for at least 50 percent of the project length in the peak period or (2) be a corridor-based bus project with certain elements to qualify as a Small Starts project. FTA subsequently introduced a further streamlined evaluation and rating process for very low cost projects within the Small Starts program, which FTA calls Very Small Starts. Very Small Starts are projects that must contain the same elements as Small Starts projects and also contain the following three features: be located in corridors with more than 3,000 existing transit riders per average weekday who will benefit from the proposed project; have a total capital cost of less than $50 million (for all project elements); and have a per-mile cost of less than $3 million, excluding rolling stock (e.g., buses and train cars). Any transit project that fits the broader definition of a fixed-guideway system is eligible, whether it is a BRT, streetcar, or other rail transit project (e.g., commuter rail, heavy rail, and light rail). BRT projects are also eligible for federal funding from other sources such as Congestion Mitigation and Air Quality Improvement grants, the Urbanized Area Formula grants, and the U.S. Department of Transportation’s Transportation Investment Generating Economic Recovery discretionary grants (TIGER). Based on our questionnaire results, we found that many U.S. BRT projects incorporate at least some station amenities and most other BRT features that distinguish them from standard bus service, and improve riders’ transit experience. However, few BRT project sponsors reported the use of dedicated or semi-dedicated running ways for at least 30 percent of the route and less than half use off-board fare collection infrastructure (see Table 1 for an overview of BRT projects’ physical features). 
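The three Very Small Starts thresholds described above lend themselves to a simple eligibility check. The sketch below is illustrative only: the function name and inputs are our own, not FTA's, and actual eligibility depends on FTA's full evaluation and rating process.

```python
# Illustrative sketch of the three Very Small Starts thresholds cited in the
# report; this is NOT FTA code, and real eligibility involves further criteria.

def qualifies_very_small_starts(weekday_riders: int,
                                total_capital_cost: float,
                                route_miles: float,
                                rolling_stock_cost: float) -> bool:
    """Check the three features listed in the report: more than 3,000 existing
    weekday transit riders in the corridor, total capital cost under
    $50 million, and per-mile cost under $3 million excluding rolling stock."""
    per_mile_cost = (total_capital_cost - rolling_stock_cost) / route_miles
    return (weekday_riders > 3_000
            and total_capital_cost < 50_000_000
            and per_mile_cost < 3_000_000)

# Hypothetical 10-mile corridor: $40M total cost, $12M of it rolling stock.
print(qualifies_very_small_starts(5_000, 40_000_000, 10, 12_000_000))  # True
```

As the example suggests, a project can fall under the $50-million cap yet still fail the per-mile test if too little of its cost is rolling stock.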
Our questionnaire results indicate that most BRT projects (16 of 20) operate in mixed traffic—primarily arterial streets—for 50 percent or more of their routes. In contrast, 5 of the 20 BRT projects travel along a dedicated or semi-dedicated running way for 30 percent or more of their routes. According to FTA research, BRT projects with more fully dedicated running ways generally experience the greatest travel time savings as compared to the corridors’ local bus route. (See below for other BRT features that affect travel time savings.) However, our analysis of questionnaire data did not show a correlation between the type of running ways BRT projects operate on and travel time savings. For example, Cleveland’s Healthline and the M15 in New York City operate along fully or semi-dedicated running ways for at least 60 percent of their routes, but these projects did not achieve the same percentage gains in travel time savings as projects such as Kansas City’s Troost MAX or Mountain Links in Arizona, both of which run in mixed traffic for at least 75 percent of their routes. Some of the difference between our results and those of previous research may be attributable to the relative lack of congestion in some of the BRT corridors, which helps these projects generate travel time savings while running in mixed traffic. For instance, the Troost MAX reported the highest travel time savings of any project, yet it runs almost entirely in mixed traffic along a corridor with minimal traffic congestion. In contrast, previous BRT research often includes international and other U.S. BRTs, such as the TransMilenio in Bogotá, Colombia, and the East Busway in Pittsburgh, Pennsylvania, that have used dedicated running ways to achieve significant travel time savings because of the cities’ congestion levels. According to FTA research, station amenities can help shape the identity of a BRT project by portraying a premium service and enhancing the local environment.
Based on responses to our questionnaire, most BRT projects (12 of 20) have at least four station amenities present at half or more of their stations, while four projects include at least seven amenities. The most common station amenities reported by BRT project sponsors included seating, weather protection, level boarding, and route maps and schedules. (See fig. 2.) Cleveland’s Healthline and Eugene’s Franklin and Gateway EmX incorporate the most station amenities. However, U.S. BRT projects generally do not include stations of the size and scale of those found in Latin American BRT systems such as Curitiba, Brazil; Bogotá, Colombia; or Mexico City, Mexico. Through our site visits we found that BRT stations providing relatively few amenities may still be enhanced compared to standard bus stops in the same area. For example, in Los Angeles, standard bus stops are designated by a single flagged pole with limited route information, whereas all Metro Rapid stations provide detailed route information and many will have weather protection and safety improvements, such as lighting. (See fig. 3.) Likewise, Kansas City Area Transportation Authority (ATA) officials informed us that Troost MAX stops were designed to be significantly larger and to have more rail-like features than traditional bus stops. BRT projects have different combinations of fare collection and verification methods. According to our questionnaire results, most BRT projects (14 of 20) allow on-board driver validation—typical of standard bus service—as a fare collection option for riders. Fewer projects incorporate alternative fare collection methods, such as proof-of-payment systems that allow riders to board without presenting payment directly to a driver, or off-board fare collection infrastructure (i.e., fare card vending machines or barrier systems).
Specifically, half of the project sponsors (10 of 20) reported that their projects use a proof-of-payment system and seven reported that their projects incorporate off-board fare collection infrastructure. According to FTA research, off-board fare collection infrastructure may contribute to customers’ perception of BRT as a high- quality transit service and can improve service reliability and travel time savings. Project sponsors also mentioned this feature as important in generating travel time savings. With respect to BRT vehicle features, according to our questionnaire results, all project sponsors reported the use of low floor vehicles and nearly all reported the use of lower emissions vehicles, technology for expedited wheelchair boarding, security cameras, and audio stop announcements. (See fig. 4.) According to FTA research, the design and features of BRT vehicles can affect the projects’ ridership capacity, environmental friendliness, and passengers’ comfort and overall impression of BRT. Greater Cleveland Regional Transit Authority (RTA) officials told us that the transit agency went through several iterations with the manufacturer to design a BRT vehicle that looked and felt more like a rail car. Among other features, the Healthline vehicles were designed to include hybrid technology—which according to local officials provides a quieter ride than standard buses—doors on both sides, and expedited wheelchair-boarding capabilities to reduce passenger-loading times. All BRT project sponsors responding to our questionnaire have used some form of branding and marketing to promote their BRT service, such as website improvements specific to BRT and uniquely branded BRT vehicles and stations. Research on BRT, as well as project sponsors and other experts we spoke with, emphasized the importance of strong branding and marketing in shaping the identity of a line or system and attracting riders. 
Los Angeles Metro officials told us that they employed a number of additional marketing techniques to increase awareness of the BRT service before it opened, such as hosting big media events and ambassador programs in which Metro staff handed out brochures at bus stops. To create a brand name and generate revenue, Cleveland’s RTA sold the naming rights of its BRT project and select stations for $10 million, over 25 years. According to responses to our questionnaire, 9 BRT projects have at least 3 of the 6 Intelligent Transportation Systems (ITS) features and almost all (18 of 20) incorporate at least one feature. The most common ITS technologies included as part of BRT projects were transit signal priority systems (18 of 20), and vehicle tracking systems (17 of 20), which monitor vehicles to ensure arrivals are evenly spaced and transit connections are on schedule. (See fig. 5 for an example.) Research by FTA and others has found that incorporating ITS into BRT projects can help transit agencies increase safety, operational efficiency, and quality of service. In addition, these systems can improve riders’ access to reliable and timely information. Los Angeles Metro officials told us that traffic signal priority represents one of Metro Rapid’s most important attributes. These officials informed us that while the system does not override traffic lights, it can extend green signals to get BRT vehicles through the lights and to the next stop, helping keep the vehicles on time. While less common, some BRT projects use queue jump lanes, a feature that generally involves BRT vehicles traveling in restricted lanes and receiving early green light signals at select intersections. According to officials of Eugene’s Lane Transit District (LTD), the use of a queue jump lane has helped generate travel time savings for EmX riders by allowing the BRT vehicles to by-pass traffic stopped at an intersection. 
Based on our interviews with BRT project sponsors and planners, several factors influenced the design of BRT projects and the presence or absence of physical features commonly associated with BRT. In particular, stakeholders frequently mentioned cost considerations, community needs and input, and the ability to phase in additional physical features over time as factors influencing their decisions. Officials in four of our five site-visit locations described instances in which costs or financial constraints factored into their decision-making or resulted in a change of plans regarding the project’s physical features. For example, Kansas City ATA officials told us that a dedicated running way was not acquired for the Troost MAX in part because this feature would have added costs without providing substantial travel time savings benefits given Troost Avenue’s minimal traffic congestion. In Seattle, King County Metro officials told us that several common BRT features, including level or raised boarding and off-board ticket or fare card vending machines, were not incorporated into the RapidRide system because of costs. For instance, they explained that level or raised boarding was not included because of the costs associated with implementing this feature at a large number of stations and stops (120 and 155 respectively) and addressing the limitations of the different sites. Three projects we visited during site visits were Very Small Starts projects and therefore, had total project capital costs of less than $50 million. (See app. I for the list of our case study projects.) The sponsors of two of these projects told us that while Very Small Starts projects can create incentives for communities to pursue BRT by offering streamlined requirements and grants for up to 80 percent of a project’s total capital cost, the program’s $50-million limit on projects’ total capital costs provides an incentive to keep costs low. 
As a result, project sponsors may only incorporate those physical features that are the most cost-effective or critical to achieving the projects’ objectives and omit other features commonly associated with BRT. Several project sponsors we visited also mentioned that the input of community residents, business owners, and other stakeholders affected by a project can help shape final decisions about its design and features. For instance:

Los Angeles city officials explained that only 80 percent of the Wilshire Metro Rapid route within the city limits will have bus-only lanes during weekday peak hours because some neighborhoods resisted bus-only lanes and were unwilling to give up a travel lane on such a congested street.

Officials in Eugene told us that the Franklin Avenue EmX was originally intended to run on a dedicated running way for 90 percent of its route. However, in part due to the public input process, which raised concerns over loss of parking and business access, the agency reduced the dedicated portion of the route to 50 percent.

Kansas City ATA officials explained that residents’ safety concerns along Troost Avenue resulted in well-lighted shelters designed with transparent backings and real-time information displays, which helped increase passengers’ sense of safety while waiting for the bus during the evening. Several major stations were also equipped with security cameras.

Some transit experts we spoke to also pointed out that some BRT features may not be incorporated into a project’s initial design, since—unlike rail transit projects—it is fairly easy to add features to BRT projects after they start operating. Moreover, project sponsors in four of the five site-visit locations told us that they plan to incorporate (or are considering incorporating) additional features into their BRT projects.
According to local officials, Eugene’s transit agency may increase the portion of the EmX line that runs on a dedicated running way, particularly through sections of neighboring Springfield that are planned for redevelopment. These officials noted that stakeholders generally view the EmX’s implementation as an incremental process and its flexibility as an important benefit. In Seattle, transit agency staff explained that although level boarding and off-board fare card vending machines were not incorporated into the initial design of the RapidRide lines, these features will be periodically reevaluated for future lines, and off-board fare card vending machines may be added to some locations on existing lines. For systems where changes in ridership could be calculated, almost all BRT project sponsors (13 of 15) reported increased ridership over the previous transit service—typically a standard bus service—according to results from our questionnaires. (See fig. 6.) Of the 13 existing BRT projects that increased ridership, more than half (7 of 13) reported increases of 30 percent or more during the first year of service. Three of the eight BRT project sponsors who reported ridership data for additional years continued to increase ridership. For example, ridership for the RTC Rapid in Nevada increased at least 5 percent each year for the first 3 years of service. BRT project sponsors stated that they attracted riders, in part, by reducing travel times and incorporating BRT features. All BRT projects that replaced existing transit service reported travel time savings during peak hours ranging from about 10 percent to 35 percent, as shown in figure 7. Several BRT project sponsors highlighted BRT features that helped reduce travel times and attract riders. New York City Transit reported an average travel time savings of 13 minutes (or 16 percent), from 81 to 68 minutes for the M15 BRT (an 8.5-mile route).
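The M15 travel time figures above can be confirmed with simple arithmetic; the short sketch below is only a worked check of the reported numbers, not new data.

```python
# Worked check of the reported M15 travel time savings (81 to 68 minutes).
before_min, after_min = 81, 68                 # average travel times, in minutes
savings_min = before_min - after_min           # absolute savings: 13 minutes
savings_pct = 100 * savings_min / before_min   # savings relative to old time
print(savings_min, round(savings_pct))         # 13 minutes, about 16 percent
```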
Analysis done by New York City Transit and others showed that the travel time savings for riders was due to shorter waiting times from the off-board fare collection. Similarly, Eugene LTD officials told us that one of the ways they attracted riders was to reduce travel times for the EmX BRT using two ITS components—transit signal priority and a queue jump. According to research and transit stakeholders we spoke to, travel time savings is one of the greatest contributors to ridership gains. In addition to decreased travel times, BRT project sponsors also improved ridership by shortening “headways”—the time interval between buses moving in the same direction on a particular route—and decreasing riders’ wait times. More than half of BRT project sponsors (13 of 20) reported having headways of 10 minutes or less during peak hours. Furthermore, during off-peak hours, over half of these existing BRT systems (11 of 20) operated with headways of 15 minutes or less. Local officials told us that the EmX’s 10-minute headways—5 minutes shorter than the previous bus route’s—improved ridership by university students and made it easier for them to live further from campus where rents are less expensive. Moreover, according to FTA guidance and other research, frequent headways are important for riders’ perception of service quality. Specifically, research suggests that during peak hours 10 minutes is the maximum time between vehicles that riders are willing to wait without planning ahead of time. BRT project sponsors also reported providing service enhancements to attract riders and, in some cases, reduce travel times. Service enhancements included extended hours of service (e.g., more than 16 hours per day), weekend service, and limited-stop service. All project sponsors reported providing at least one service enhancement, and almost half (8 of 20) reported offering all three expanded service characteristics in our questionnaire.
Project sponsors highlighted how the service enhancements helped reduce travel times. For example, Kansas City ATA officials attributed part of the Troost BRT’s travel time savings to greater spacing between stops, which allowed the vehicles to stop less frequently and travel at higher speeds. Gains in ridership are due in part to the BRT’s ability to attract new riders to transit. All five BRT project sponsors we spoke with attributed a portion of the gains in ridership to an increase in choice riders—those who prefer to use transit even though they have the option to drive. Cleveland RTA’s Healthline BRT, for example, replaced the busiest bus route in the city and surpassed its 5-year ridership projection in the second year of service. Specifically, according to Cleveland RTA officials, some riders are using the Healthline for mid-day trips that they may have previously taken in cars. Similarly, Seattle’s RapidRide A line also replaced one of the busiest bus routes and achieved an increase in ridership of more than 30 percent in the first year, an increase that included new riders from the local community college, according to King County Metro officials. Research suggests that at least some of these choice riders would be unwilling to ride a traditional bus, but will ride BRT. Even with gains in ridership, BRT projects in the U.S. usually carry fewer total riders compared to rail transit projects, based on our analysis of project sponsor questionnaires. The rail transit projects we examined generally had higher average weekday ridership than BRT lines, although there were some exceptions. As figure 8 shows, nine of the 10 projects with the highest total ridership are rail transit projects. However, the M15 BRT in New York City has the highest total ridership of any project—more than 55,000 riders per day. This illustrates how, given the right conditions, BRT projects can generate ridership similar to rail transit.
In addition, three other BRT projects—Cleveland’s Healthline, Los Angeles’ Metro Rapid 733, and Southern Nevada’s BHX—average over 10,000 weekday riders, more than light rail projects in Los Angeles, Salt Lake City, and San Diego. Several factors, including the number of available riders and rider preferences, affect total ridership. The M15’s high ridership is in part due to its location in densely populated Manhattan, the high number of transit-dependent riders living and working along the corridor, and the distance to the nearest subway line. In comparison, two commuter rail lines we examined were among the five projects with the lowest number of average daily riders, likely due to shorter hours of service and the fact that, with the exception of a few peak hours, commuter rail lines generally have fewer trips throughout the day. Further, we heard from stakeholders that, in general, riders prefer rail transit over bus due to the greater perceived prestige of rail transit. Rail transit project sponsors and city officials for all rail projects we looked at told us that their projects would likely not have attracted the same number of riders had they been developed as BRT, citing the perception some riders have about the quality and permanence of bus service. According to project sponsors, rail transit projects have the ability to attract riders who would not be interested in any form of bus service, given perceptions of its quality and features. Research suggests that many intangible factors, including perception, play a role in making rail transit more attractive than bus. However, as discussed earlier, BRT project sponsors told us that the perceptions about bus for “choice riders” can be overcome with rail-like features. Cleveland RTA officials attribute increased BRT ridership to more professionals and students riding the Healthline.
According to these officials, professionals and students find the Healthline attractive because of the increased frequency of service; quicker travel times; enhanced safety; limited stops; quality of ride; and quieter, more attractive, and more fuel-efficient vehicles. In some international cities, however, given more comprehensive systems, higher population densities, and more positive attitudes about bus service, BRT ridership exceeds rail transit ridership in the U.S. Of the planned or completed New, Small, or Very Small Starts projects that received construction grant agreements under FTA's Capital Investment Grant program from fiscal year 2005 through February 2012, BRT projects generally had lower capital costs than rail transit projects. Median costs for the BRT and rail transit projects we examined were about $36.1 million and $575.7 million, respectively. Capital costs for BRT and rail transit projects ranged from about $3.5 million to over $567 million and from almost $117 million to over $7 billion, respectively. Of the 30 BRT projects with a grant agreement, only five had higher capital costs than the least expensive rail transit project. While initial capital costs are generally lower for BRT than for rail transit, capital costs can be considered in the context of total riders, as discussed earlier, and other long-term considerations, which we discuss below, depending on the purpose of the analysis. Figure 9 shows the range and individual project capital costs by mode. More than half of the projects (30 of 55) that received grant agreements since fiscal year 2005 have been BRT projects, yet these projects account for less than 10 percent of committed funding, as shown in figure 10. Based on our analysis of project cost estimates, we estimate that $12.8 billion of Capital Investment Grant funds committed for New, Small, and Very Small Starts will be used for transit projects that received grant agreements since fiscal year 2005.
Of this $12.8 billion, $1.2 billion will be for BRT projects. The amount of New Starts, Small Starts, and Very Small Starts funding committed for BRT projects ranged from almost $3 million to $275 million. Rail transit projects accounted for less than half of the projects with grant agreements (25 of 55) and more than 90 percent of funding. Federal Capital Investment Grant contributions under the New Starts, Small Starts, or Very Small Starts categories for rail transit projects ranged from almost $60 million to over $2 billion. Since fiscal year 2005, most projects with grant agreements under Small Starts and Very Small Starts have been BRT projects, while most New Starts projects have been rail transit. With two exceptions, all 30 BRT projects funded since fiscal year 2005 were funded under Small Starts or Very Small Starts. Twenty-one of the 25 rail transit projects were funded under New Starts, and the remainder were funded under Small Starts. (See fig. 11.) We heard from all of the BRT project sponsors we spoke with that, even at a lower capital cost, BRT could provide rail-like benefits. For example, Cleveland RTA officials told us the Healthline BRT project cost roughly one-third of what a comparable light rail project would have cost them. Similarly, Eugene LTD officials told us that the agency pursued BRT when it became apparent that light rail was unaffordable and that an LTD light rail project would not be competitive in the New Starts federal grant process. The difference in capital costs between BRT and rail transit is due in part to elements needed for rail transit that are not required for BRT projects. Light rail systems, for example, often require train signal communications; electrical power systems with overhead wires to power trains; and rails, ties, and switches. Further, if a rail maintenance facility does not exist, one must be built and equipped.
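The funding shares above can be checked with simple arithmetic. The sketch below is illustrative only and simply restates the report's own totals ($12.8 billion committed overall, $1.2 billion of it for BRT):

```python
# Illustrative check of the committed-funding shares cited above.
# Figures come from the report's estimates; the calculation itself
# is a simple ratio, not part of GAO's methodology.
total_committed = 12.8e9   # New, Small, and Very Small Starts, FY2005-Feb. 2012
brt_committed = 1.2e9      # portion committed to the 30 BRT projects

brt_share = brt_committed / total_committed
print(f"BRT share of committed funding: {brt_share:.1%}")
```

The ratio works out to just under 10 percent, consistent with the statement that BRT projects were more than half of the grant agreements but less than 10 percent of committed funding.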
On the other hand, transit experts who have evaluated both rail transit and BRT told us that while initial capital costs are higher for rail transit than for BRT, life-cycle capital costs for rail transit are potentially lower than for BRT. For instance, although more expensive up front (typically $1.5 million to $3.4 million per car), life cycles of rail transit cars are longer (typically 25 years or more) than most BRT vehicles (12 to 15 years). However, circumstances affecting costs will vary among projects, and research has not yet been done to compare life-cycle costs of BRT systems in the U.S., as they are still relatively new. BRT capital costs depend on each project's features and service levels. Specifically, costs are affected by:

• Type of running way. As mentioned above, most BRT projects we reviewed run in mixed traffic rather than dedicated or semi-dedicated running ways. According to research, capital costs for BRT projects that operate in mixed traffic range from $50,000 to $100,000 per mile, compared to $2 million to $10 million per mile for projects that have dedicated lanes.

• Right-of-way or property acquisition. Many BRT projects use running ways and station areas in existing streets and sidewalk space. However, BRT projects designed with rail transit-like dedicated rights-of-way could require more property acquisition or leasing to make room for guideways, stations, or other infrastructure.

• Type of vehicles and services selected. Capital costs for BRT vehicles can range from about $400,000 to almost $1 million. The number of BRT vehicles needed for a route can depend on the length of the project, travel time, and peak headway, among other things. For example, Cleveland RTA spent about $21 million for vehicles on the Healthline, compared to Kansas City ATA, which spent about $6.3 million for vehicles on the Troost MAX BRT. Differences in price were a result of (1) Cleveland's needing nine more vehicles than Kansas City (24 compared to 15, respectively) to maintain shorter headways and (2) the cost of the vehicles ($900,000 compared to $366,000, respectively). Cleveland's vehicles have more features, including hybrid technology for a quieter ride, multiple boarding doors to expedite boarding, and articulated vehicles to increase capacity.

• Non-transit-related features. Some projects' costs include streetscaping, landscaping, or updates to utilities, while others do not. For example, three of the five project sponsors we met with used federal funding to purchase artwork along the line to increase a sense of permanence and better incorporate the BRT system into the community. (See next section for a discussion of the role of permanence in economic development.)

As with capital costs, a project's total operating costs can vary based on several project factors, including length of the route, headways, vehicle acquisition, and other non-transit-related features. As a result of the many factors involved, it can be challenging to generalize differences in operating costs within and across modes. In some cases BRT projects have lower operating costs than the previous bus service. For example, according to Eugene LTD officials, the Eugene EmX decreased overall operating costs per rider. Officials attributed the savings to improved schedule reliability and travel-time savings from the dedicated right-of-way, which reduced labor costs because fewer buses are needed to maintain the schedule. Cleveland RTA told us the Healthline BRT reduced the overall operating budget and the average cost per rider decreased. For RTA, the 18 vehicles that operate during peak hours replaced the 28 buses that were needed to operate the standard bus service the BRT replaced.
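The per-vehicle figures cited in this section lend themselves to simple back-of-the-envelope arithmetic. The sketch below is illustrative only: the straight-line annualization ignores financing, maintenance, and residual value, and the simple fleet products land near, but not exactly on, the reported totals of about $21 million and $6.3 million (which likely include spares or ancillary equipment).

```python
# Illustrative arithmetic using per-vehicle figures cited in this section.
# Straight-line annualization only; ignores financing, maintenance, and
# residual value, so treat results as rough orders of magnitude.

def annualized_cost(unit_cost: float, life_years: float) -> float:
    """Spread a vehicle's purchase price evenly over its service life."""
    return unit_cost / life_years

def fleet_cost(num_vehicles: int, unit_cost: float) -> float:
    """Total purchase cost for a fleet of identical vehicles."""
    return num_vehicles * unit_cost

# Cited ranges: rail cars $1.5M-$3.4M over 25+ years;
# BRT vehicles $0.4M-$1.0M over 12-15 years.
rail_low_per_year = annualized_cost(1_500_000, 25)
brt_high_per_year = annualized_cost(1_000_000, 12)

# Cited fleets: Healthline, 24 vehicles at $900,000 each;
# Troost MAX, 15 vehicles at $366,000 each.
healthline_fleet = fleet_cost(24, 900_000)
troost_max_fleet = fleet_cost(15, 366_000)
```

On these straight-line figures, the annualized per-vehicle cost ranges of the two modes overlap, which is consistent with the experts' point that life-cycle comparisons are less one-sided than initial purchase prices suggest.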
Hourly labor costs are about the same for BRT, standard bus service, and heavy rail; however, the cost per rider is lower for the BRT than for standard buses due to higher capacities and ridership on the BRT. We also heard from stakeholders and project sponsors that operating costs for BRT and rail transit depend strongly on the density and ridership in the corridor. For example, according to one transit expert, while signaling and control costs are high for rail transit, there is a tipping point where, given a high enough density and ridership, rail transit begins to have lower operating costs overall. New York City Transit officials commented that while construction costs for a street-running BRT are about 1/500th of the cost of building a heavy rail line, operating costs for a bus operation can be higher. Two operators can carry close to 2,000 riders on a single heavy rail train, whereas in a BRT system, 24 operators are needed to carry the same number of riders. In general, we found that project sponsors and other stakeholders in each of our five case study locations believe that the BRT project is having some positive effect on economic development. However, these individuals were unsure about how much of the economic activity can be attributed to the presence of BRT versus other factors or circumstances. (See table 2 for a summary of economic development activities near the five BRT projects we visited.) In addition, stakeholders mentioned that the recent recession limited the number of development projects to date, but they expect increased economic development in the future along select areas of the BRT corridors as economic conditions improve. Project sponsors, local officials, and transit experts we spoke to believe that, in general, rail transit is a better economic development catalyst than BRT; however, this opinion was not universal.
For example, Cleveland officials told us that they do not believe that economic development along Euclid Avenue would have been any different if a light rail line had been built in the corridor instead of a BRT. In addition, stakeholders mentioned that certain factors can enhance BRT's ability to generate economic development similar to rail transit. Specifically, they described how economic development near BRT can be supported by having: physical BRT features that convey a sense of permanence to developers; major institutional, employment, and activity centers along or near the BRT corridor that can sponsor development projects; and transit-supportive local policies and development incentives. A number of project sponsors, local officials, and other stakeholders we spoke to emphasized the importance of BRT projects' physical features—particularly those that are perceived as permanent—in helping to spur economic development. They explained that BRTs with dedicated running ways, substantial stations with enhanced amenities, and other fixed assets represent a larger investment in the corridor by the public sector and assure developers that the transit service and infrastructure will be maintained for decades into the future. For example, Los Angeles local officials told us that the city's Orange Line BRT can come close to light rail in terms of economic development because its station infrastructure and enhanced amenities relay a sense of permanence to developers. The results of our land value analysis of BRT corridors are also consistent with the perception that the permanence of BRT features may play a role in spurring development and increasing land values. For example, the University Circle portion of the Healthline, which received significant infrastructure and private institutional investments (i.e., investments that are more likely to be perceived as permanent by developers and others), experienced modest to large increases in land values.
In contrast, the East Cleveland segment of the Healthline—which includes fewer BRT features and less investment than other segments of the line—experienced a slight decline in land values in the years immediately before and after BRT operations began. (See fig. 12.) Although BRT has become more common in the U.S. in recent years, it remains an evolving and diverse concept. BRT projects encompass a range of designs and physical features and provide varying levels of service, economic development, and other benefits to communities. The flexibility of BRT has allowed cities and regions across the country—with differing public transportation needs and goals—to improve transit service and demonstrate investment in surrounding communities, often at a lower initial capital cost than with rail transit. However, cost differences between U.S. BRT projects and rail transit projects are sensitive to individual project features and each transit agency's unique circumstances. Differences in cost partly reflect BRT project sponsors' limited use of the more costly features commonly associated with BRT—such as dedicated running ways, stations with major infrastructure investments, and off-board fare collection. Cleveland's Healthline incorporates the most BRT features of any project we examined and cost $200 million to construct, which is comparable to some of the less costly rail transit projects. Some of the more costly BRT features are the same features stakeholders view as critical to contributing to economic development because they portray a sense of permanence to developers and demonstrate investment by the public sector. Therefore, project sponsors in cities with limited transit funding sources and without major congestion issues may find the added cost of these features worthwhile only if economic development is among their projects' primary objectives.
The limited use of BRT's more costly features might also partly reflect the relatively large role that the Small and Very Small Starts programs have played in funding recent BRT projects as compared to state and local funding sources. The funding these programs provide to smaller transit projects has allowed communities that otherwise may not have been as competitive in the New Starts process to obtain federal transit support. However, it is possible that limits on the total project cost create incentives for BRT project sponsors to omit more costly BRT features. In general, though, it appears that BRT project sponsors are using the Small and Very Small Starts programs to design and implement projects that address their communities' current transit needs and align with the project sponsors' overall objectives. Moreover, project sponsors may develop initial plans for BRTs that do not include a comprehensive range of features, knowing that they can incorporate additional features into BRT projects incrementally as communities' transit needs and financial circumstances change. We provided the U.S. Department of Transportation (DOT) with a draft of this report for review and comment. DOT did not comment on the draft report. We are sending copies of this report to interested congressional committees and the Secretary of Transportation. In addition, this report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions or would like to discuss this work, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Individuals making key contributions to this report are listed in appendix III. GAO selected five bus rapid transit projects in cities across the U.S. to serve as case studies for this report.
This appendix lists these five projects and provides links to the projects' websites. See table 3 below. To examine the features, costs, and community benefits of bus rapid transit (BRT) projects recommended for funding by the Federal Transit Administration (FTA), we addressed the following four questions:

1. Which BRT features are included in BRT projects and why?
2. How have BRT projects performed in terms of ridership and service, and how do they compare to rail transit projects?
3. How do the costs of these projects differ from rail transit projects?
4. To what extent do BRT projects provide economic development and other benefits to communities?

To address these questions, we visited five BRT projects: the Healthline in Cleveland, Ohio; the RapidRide A Line in Seattle, Washington; the Troost MAX in Kansas City, Missouri; the Metro Rapid System in Los Angeles, California; and the Franklin EmX in Eugene, Oregon. We selected site visit locations based on consideration of several factors, including the number and extent of BRT features; ridership; length of route; peak headway; and geographic diversity. We considered all 20 existing BRT projects that received federal funding and selected projects with a range of each factor listed above. Because we selected a nonprobability sample of projects, the information we obtained from these interviews and visits cannot be generalized to all BRT projects. To assess how BRT projects have performed in terms of ridership and service and how they compare to rail transit projects, we reviewed existing literature on BRT and rail transit projects' ridership and service levels. In addition, we sent questionnaires to the sponsors of all 20 completed rail transit projects that met the criteria outlined above and compared the responses of BRT project sponsors to those of rail transit project sponsors. We received completed questionnaires for 18 of the 20 rail transit projects in our scope, for a response rate of 90 percent.
We supplemented the data collected through our questionnaires with information obtained during our site-visit interviews (from the locations listed above). To assess how BRT projects compare to rail transit projects in terms of capital project costs and the New Starts, Small Starts, and Very Small Starts share of funding, we used project grant data compiled by FTA to identify the 55 (30 BRT and 25 rail transit) existing or planned projects that had signed grant agreements from fiscal year 2005 through February 2012. We also reviewed FTA's Annual Reports on Funding Recommendations for fiscal years 2005 through 2012 to ensure that we had the most recent project cost estimates. We received the New Starts data on April 6, 2012, for projects through February 2012, and the Small Starts and Very Small Starts data on March 21, 2012. We discussed data collection and maintenance with FTA and determined the data are reliable for our purposes. In addition to collecting data from FTA, we also reviewed relevant academic literature on BRT and rail transit capital costs and interviewed academic experts, BRT stakeholders, and select BRT project sponsors to better understand how BRT and rail transit projects compare in terms of costs. To examine economic development near BRT, we analyzed changes in land values along BRT corridors in the years before and after operations began. We used the gross domestic product price index compiled by the Department of Commerce, Bureau of Economic Analysis, to convert the nominal land values into constant 2010 dollars. We did not attempt to model other factors that contribute to land values, such as broader economic conditions, other major infrastructure investments and amenities, and demographic characteristics. We conducted this performance audit from July 2011 through July 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
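The conversion of nominal land values to constant 2010 dollars described above is a straightforward deflation. A minimal sketch follows; the function name and the index values are placeholders for illustration, not actual Bureau of Economic Analysis GDP price index data:

```python
# Illustrative sketch of deflating a nominal value to constant 2010
# dollars with a price index, as the methodology describes.
# The index values used in the example are placeholders, NOT actual
# BEA GDP price index figures.

def to_constant_2010_dollars(nominal_value: float,
                             index_year: float,
                             index_2010: float) -> float:
    """Deflate a nominal value to 2010 dollars: nominal * (base / current)."""
    return nominal_value * (index_2010 / index_year)

# e.g., a $500,000 parcel valued in a year whose price index is 104.5,
# relative to a 2010 index of 100.0, is worth less in constant dollars:
constant_value = to_constant_2010_dollars(500_000, 104.5, 100.0)
```

Because later years generally have higher index values, deflation shrinks later nominal values, which is what puts land values from different years on a comparable footing.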
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Cathy Colwell (Assistant Director), Nathan Bowen, Lorraine Ettaro, Colin Fallon, Kathleen Gilhooly, Terence Lam, Matthew LaTour, Jaclyn Nidoh, Josh Ormond, and Melissa Swearingen made key contributions to this report.
BRT is a form of transit that has generated interest around the world to help alleviate the adverse effects of traffic congestion and potentially contribute to economic growth. BRT features can include improvements to infrastructure, technology, and passenger amenities over standard bus service to improve service and attract new riders. The use of federal funding for BRT in the United States has increased since 2005, when the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users expanded eligibility for major capital projects under FTA's Capital Investment Grant Program to include corridor-based bus projects. BRT projects can be funded through New, Small, and Very Small Starts grants under the Capital Investment Grant Program. GAO was asked to examine (1) features included in BRT projects funded by FTA; (2) BRT project performance in terms of ridership and service and how they compare to rail transit projects; (3) how BRT projects' costs differ from rail transit project costs; and (4) the extent to which BRT projects provide economic development and other benefits. To address these objectives, GAO sent questionnaires to officials of all 20 existing BRT and 20 existing rail transit projects that FTA recommended for funding from fiscal year 2005 through 2012 to collect information on project features, ridership, and service and interviewed select project sponsors. GAO also reviewed documents and interviewed government, academic, and industry group officials. The U.S. Department of Transportation did not comment on the draft report. U.S. bus rapid transit (BRT) projects we reviewed include features that distinguished BRT from standard bus service and improved riders' experience. However, few of the projects (5 of 20) used dedicated or semi-dedicated lanes—a feature commonly associated with BRT and included in international systems to reduce travel time and attract riders.
Project sponsors and planners explained that decisions on which features to incorporate into BRT projects were influenced by costs, community needs, and the ability to phase in additional features. For example, one project sponsor explained that well-lighted shelters with security cameras and real-time information displays were included to increase passengers' sense of safety in the evening. Project sponsors told us they plan to incorporate additional features, such as off-board fare collection, over time. The BRT projects we reviewed generally increased ridership and improved service over the previous transit service. Specifically, 13 of the 15 project sponsors that provided ridership data reported increases in ridership after 1 year of service and reduced average travel times of 10 to 35 percent over previous bus services. However, even with increases in ridership, U.S. BRT projects usually carry fewer total riders than rail transit projects and international BRT systems. Project sponsors and other stakeholders attribute this to higher population densities internationally and riders who prefer rail transit. However, some projects—such as the M15 BRT line in New York City—carry more than 55,000 riders per day. Capital costs for BRT projects were generally lower than for rail transit projects and accounted for a small percentage of the Federal Transit Administration's (FTA) New, Small, and Very Small Starts funding, although BRT accounted for over 50 percent of projects with grant agreements since fiscal year 2005. Project sponsors also told us that BRT projects can provide rail-like benefits at lower capital costs. However, differences in capital costs are due in part to elements needed for rail transit that are not required for BRT and can be considered in the context of total riders, costs for operations, and other long-term costs such as vehicle replacement.
We found that although many factors contribute to economic development, most local officials we visited believe that BRT projects are contributing to localized economic development. For instance, officials in Cleveland told us that between $4 and $5 billion was invested near the Healthline BRT project—associated with major hospitals and universities in the corridor. Project sponsors in other cities told us that there is potential for development near BRT projects; however, development to date has been limited by broader economic conditions—most notably the recent recession. While most local officials believe that rail transit has a greater economic development potential than BRT, they agreed that certain factors can enhance BRT’s ability to contribute to economic development, including physical BRT features that relay a sense of permanence to developers; key employment and activity centers located along the corridor; and local policies and incentives that encourage transit-oriented development. Our analysis of land value changes near BRT lends support to these themes. In addition to economic development, BRT project sponsors highlighted other community benefits including quick construction and implementation and operational flexibility.
For over a decade, DOD has dominated GAO’s list of federal programs and operations at high risk of being vulnerable to fraud, waste, abuse, and mismanagement. All of DOD’s programs on GAO’s High-Risk List relate to its business operations, including systems and processes related to the management of contracts, finances, the supply chain, and support infrastructure, as well as weapons system acquisition. Long-standing and pervasive weaknesses in DOD’s financial management and related business processes and systems have (1) resulted in a lack of reliable information needed to make sound decisions and report on the financial status and cost of DOD activities to Congress and DOD decision makers, (2) adversely affected its operational efficiency in business areas such as major weapons system acquisition and support and logistics, and (3) left the department vulnerable to fraud, waste, and abuse. DOD is required by various statutes to improve its financial management processes, controls, and systems to ensure that complete, reliable, consistent, and timely information is prepared and responsive to the financial information needs of agency management and oversight bodies, and to produce annual audited financial statements. As noted above, DOD has been required since 1997 to prepare and issue annual departmentwide audited financial statements and, pursuant to various statutes, certain DOD components, including the military departments, are required to prepare and issue annual audited financial statements. DOD consolidates the component financial statements to prepare its departmentwide financial statements. As reflected in OMB requirements and DOD policy for preparation and audit of agency financial statements, DOD and its components must prepare their financial statements consistent with generally accepted accounting principles (GAAP) and establish and maintain effective internal control over financial reporting and compliance with laws and regulations. 
The DOD Office of Inspector General (OIG) is responsible for auditing the financial statements in accordance with generally accepted government auditing standards (GAGAS), which, among other things, require the auditor to determine if the financial statements are fairly presented and properly supported by the agencies' accounting records. DOD has undertaken a number of initiatives over the years, such as its Financial Improvement Initiative in 2003, to improve the department's business operations, including financial management, and achieve unqualified (or "clean") financial statement audit opinions. In 2005, the DOD Comptroller established the DOD FIAR Directorate to manage DOD-wide financial management improvement efforts and to integrate those efforts with transformation activities, such as those outlined in the department's Enterprise Transition Plan, across the department. The FIAR Plan, which was first prepared in 2005, is DOD's strategic plan and management tool for guiding, monitoring, and reporting on the department's financial management improvement efforts. As such, the plan communicates incremental progress in addressing the department's financial management weaknesses and achieving financial statement auditability. The plan focuses on three goals: (1) achieve and sustain unqualified assurance on the effectiveness of internal controls through the implementation of sustained improvements in business processes and controls addressing the material internal control weaknesses, (2) develop and implement financial management systems that support effective financial management, and (3) achieve and sustain financial statement audit readiness. The NDAA for Fiscal Year 2010 requires DOD to update and report on the FIAR Plan twice a year—no later than May 15 and November 15—and provide each update to OMB and Congress, among others.
Consistent with prior GAO recommendations and the NDAA for Fiscal Year 2010, the DOD Comptroller issued the FIAR Guidance in May 2010 to provide standardized guidance for DOD components to follow in developing their FIPs. DOD components are expected to prepare a FIP in accordance with the FIAR Guidance for each of their assessable units. The FIPs are intended to both guide and document financial improvement efforts. While the name “FIP” indicates that it is a plan, as a component implements that plan, it must document the steps performed and the results of those steps, and retain that documentation within the FIP. Therefore, a FIP includes plans for testing controls and data, and documentation of the testing conducted, results of the testing, and any actions taken based on the results. When a component determines that it has completed sufficient financial improvement efforts for an assessable unit so that it is ready for audit, the FIP documentation is used to support the conclusion of audit readiness. A summary of the purpose of each DOD financial improvement document is provided below. The department intends to progress toward achieving financial statement auditability in five “waves” of concerted improvement activities within groups of end-to-end business processes, which are broken down into discrete units, called assessable units. Under the FIAR Plan Strategy, execution of the FIAR Guidance methodology for groups of assessable units across the five waves is intended to result in the preparation of various components’ financial statements through fiscal year 2017. The first three waves of the FIAR Plan Strategy focus on achieving the DOD Comptroller’s interim budgetary and asset accountability priorities, while the remaining two waves are intended to complete actions needed to achieve full financial statement auditability. 
However, the department has not yet fully defined its strategy for completing waves 4 and 5, which focus on the auditability of most of the department’s financial statements, such as the Balance Sheet and the Statement of Net Cost, and significant audit areas, such as equipment valuation. The FIAR Guidance states that an analysis of significant reporting entities relevant to waves 4 and 5 will be included in a future version of the FIAR Guidance. For example, in relation to wave 4, the DOD Comptroller has identified weaknesses in DOD valuation of military equipment as an obstacle to achieving auditable balances in DOD financial statements, and the May 2011 FIAR Plan Status Report states that DOD plans to resolve the matter through the deployment of new business and financial management systems, a revised definition of military equipment, and a change to GAAP related to accounting for the cost of military equipment. DOD plans to seek a change to GAAP and implement the change by September 30, 2017, but it has not yet prepared a detailed time line for implementing the changes within affected assessable units (assuming that any proposed changes to GAAP would be approved). The focus and scope of each wave include the following:

• Wave 1—Appropriations Received Audit focuses efforts on assessing and strengthening, as necessary, internal controls and business systems involved in the appropriations receipt and distribution process, including funding appropriated by Congress for the current fiscal year and related apportionment/reapportionment activity by OMB, as well as allotment and suballotment activity within the department.

• Wave 2—Statement of Budgetary Resources (SBR) Audit focuses efforts on assessing and strengthening, as necessary, the internal controls, processes, and business systems supporting the budgetary-related data (e.g., status of funds received, obligated, and expended) used for management decision making and reporting, including the SBR. In addition to fund balance with Treasury reporting and reconciliation, significant end-to-end business processes in this wave include procure-to-pay, hire-to-retire, order-to-cash, and budget-to-report.

• Wave 3—Mission-Critical Assets Existence and Completeness Audit focuses efforts on assessing and strengthening, as necessary, internal controls and business systems involved in ensuring that all assets (including military equipment, general equipment, real property, inventory, and operating materials and supplies) are recorded in the department’s accountable property systems of record, all of the reporting entities’ assets are recorded in those systems of record, reporting entities have the right (ownership) to report these assets, and the assets are consistently categorized, summarized, and reported.

• Wave 4—Full Audit Except for Legacy Asset Valuation focuses efforts on assessing and strengthening, as necessary, internal controls, processes, and business systems involved in the proprietary side of budgetary transactions covered by the Statement of Budgetary Resources effort of wave 2, including accounts receivable, revenue, accounts payable, expenses, environmental liabilities, and other liabilities. This wave also includes efforts to support valuation and reporting of new asset acquisitions.

• Wave 5—Full Financial Statement Audit focuses efforts on assessing and strengthening, as necessary, processes, internal controls, and business systems involved in supporting the valuations reported for legacy assets once efforts to ensure controls over the valuation of new assets acquired and the existence and completeness of all mission assets are deemed effective on a go-forward basis. Given the lack of documentation to support the values of the department’s legacy assets, federal accounting standards allow for the use of alternative methods to provide reasonable estimates for the cost of these assets.
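The existence-and-completeness objective at the heart of wave 3 amounts to a two-directional comparison between an accountable property system of record and a physical inventory count. The following minimal Python sketch is illustrative only; the asset identifiers are hypothetical and not drawn from any DOD system:

```python
# Hypothetical illustration: existence and completeness checks compare the
# accountable property system of record against a physical inventory count.
recorded_assets = {"TAIL-0001", "TAIL-0002", "TAIL-0003"}   # system of record
observed_assets = {"TAIL-0002", "TAIL-0003", "TAIL-0004"}   # physical inventory

# Existence: every recorded asset should be physically observable.
existence_exceptions = recorded_assets - observed_assets

# Completeness: every physical asset should appear in the system of record.
completeness_exceptions = observed_assets - recorded_assets

print(sorted(existence_exceptions))     # ['TAIL-0001']
print(sorted(completeness_exceptions))  # ['TAIL-0004']
```

Existence exceptions point to assets the records may misstate; completeness exceptions point to assets the records omit entirely, which are the two directions of the wave 3 objective.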
To guide the components in executing the strategy, the FIAR Guidance provides a set of mandatory, standardized phases and steps that the components must follow to develop and implement their FIPs to achieve audit readiness. This step-by-step methodology delineates FIP responsibilities between the components’ management and the auditors. For each assessable unit, management’s responsibilities focus on identifying, implementing, and documenting necessary financial management improvements during the first four phases of the FIAR Methodology, and sustaining those improvements through the fifth phase. For phases 6 and 7, after the DOD Comptroller’s initial review and approval of a FIP supporting audit readiness, the component engages an independent auditor to perform first an examination of the FIP and, if it is validated, then an audit of the assessable unit and, finally, of the entity’s financial statements. After a component concludes that an assessable unit is ready for audit, the component continues to maintain the FIP to document the validation of audit readiness by an independent auditor and the sustainability of audit readiness through ongoing efforts by the component. Throughout this report, however, we use the term “FIP” to mean the record of FIP implementation and related documentation that was used to support the conclusion of audit readiness because that is the point at which we reviewed the two selected FIPs. In instances where additional documentation was provided to us later, we indicate that such information was provided subsequent to the audit readiness conclusion.
A description of each of the phases of the FIAR Methodology as set forth in the FIAR Guidance follows:

• Phase 1 – Evaluation and Discovery: Management documents its business and financial environment, defines and prioritizes its processes into assessable units, assesses risk and tests controls, evaluates supporting documentation, identifies weaknesses and deficiencies, and defines its audit readiness environment.

• Phase 2 – Corrective Action: Management develops and executes corrective action plans that identify the specific steps a reporting entity will take to resolve deficiencies, the resources required and committed, and targeted milestones and completion dates.

• Phase 3 – Evaluation: Management evaluates its corrective action effectiveness through testing and determines whether the entity is ready for audit.

• Phase 4 – Assertion: Management prepares all relevant supporting documentation and asserts audit readiness to the Office of the Under Secretary of Defense (Comptroller) (OUSD(C)) and the DOD OIG.

• Phase 5 – Sustainment: Management maintains audit readiness through risk-based periodic testing of internal controls utilizing the OMB Circular No. A-123, Appendix A processes and procedures, and resolves any identified weaknesses in a timely manner.

• Phase 6 – Validation: The DOD Comptroller and DOD OIG conduct an initial review of the FIP, including management’s assertion. If the DOD Comptroller determines that management’s assertion is supported by the FIP, then an independent auditor performs an examination for the audit readiness assertion.

• Phase 7 – Audit: The component (reporting entity) engages an independent auditor and supports the audit of the assessable unit or the financial statements.

Each wave of the FIAR Plan Strategy comprises numerous assessable units identified by the components, and each assessable unit must go through the seven phases of the FIAR Methodology.
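Because the seven phases are strictly sequential, the lifecycle of an assessable unit can be sketched as a simple ordered progression. The sketch below is a hypothetical illustration, not an artifact of the FIAR Guidance itself:

```python
# Hypothetical sketch of the seven sequential FIAR phases: a FIP for an
# assessable unit advances one phase at a time, in the prescribed order.
FIAR_PHASES = [
    "Evaluation and Discovery",
    "Corrective Action",
    "Evaluation",
    "Assertion",
    "Sustainment",
    "Validation",
    "Audit",
]

def advance(current_phase: str) -> str:
    """Return the next phase, enforcing the prescribed order."""
    i = FIAR_PHASES.index(current_phase)
    if i + 1 >= len(FIAR_PHASES):
        raise ValueError("Audit is the final phase")
    return FIAR_PHASES[i + 1]

print(advance("Assertion"))  # Sustainment
```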
As shown in table 2, the DOD military components reported 73 assessable units for waves 1-3 in the May 2011 FIAR Plan Status Report. No assessable units had yet been reported for waves 4 and 5. While our focus was on the military components, the FIAR Plan Status Report also provided general information on the status of assessable units for other DOD components for waves 1-3.

The May 2010 FIAR Guidance provides a reasonable methodology for the DOD components to use to develop and implement their FIPs. It details the roles and responsibilities of the DOD components, and prescribes a standard, systematic process components should follow to assess processes, controls, and systems, and identify and correct weaknesses in order to achieve auditability. The FIAR Guidance requires components to fully document the procedures carried out as they implement the FIPs, and the results, which will allow for an independent assessment of audit readiness. Overall, the procedures required by the FIAR Guidance are consistent with selected procedures for conducting financial statement audits, such as reconciling the population of transactions to be tested, conducting tests of information systems controls, and conducting internal control and substantive testing. The FIAR Guidance also requires the components to correct the deficiencies identified during testing and document the results, which is consistent with federal internal control standards and OMB guidance. While the audit strategy for waves 4 and 5 has not been completely defined yet, the same overall FIAR Methodology will likely apply to these waves as well. DOD’s ability to achieve audit readiness is highly dependent on the components’ ability to effectively develop and implement FIPs in compliance with the FIAR Guidance.
The DOD Comptroller has identified critical elements of financial reporting which DOD components are expected to carry out during phases 1 through 3 of the FIAR Methodology and which closely align with procedures that are performed during an audit. Following are more details about some of the critical elements required by the FIAR Guidance. Internal control and substantive testing. DOD components are required to perform both internal control and substantive testing as part of the process to assess audit readiness. Internal control tests are performed to obtain evidence about the achievement of specific control objectives, while substantive tests are performed to obtain evidence on whether amounts reported on the financial statements are reliable. Both types of testing generally involve determining whether appropriate supporting documentation exists and is readily available. Internal control testing focuses on assessing the effectiveness of controls that would prevent or detect potential misstatements in the financial statements. For example, to test controls over operating expenses, a component would review a sample of invoices to determine if they had been properly approved for payment, typically indicated by a signature of an authorized official. Substantive testing, on the other hand, is conducted to obtain evidence on whether the amounts reported on the financial statements are reliable. For example, to test an operating expense balance, a component’s procedures would include examining invoices to determine if the amounts of the invoices matched the transaction amounts recorded in the general ledger and determining if the purchased item or service was actually received. As discussed below, a key step in testing involves reconciling the population of transactions to be tested. Reconciliation of population of transactions. To conduct both internal control and substantive testing, a sample of the data transactions is typically selected for testing. 
An organization must ensure that a sample is taken from, or represents, the complete population of transactions that needs to be tested. According to the FIAR Guidance, to ensure the completeness of the population, the DOD components are required to identify the population of transactions for an assessable unit, compare the total amount of those transactions to the amounts recorded in the general ledger and the financial statements, and then research and resolve any differences between the amounts prior to selecting a sample. Tests of information systems controls. Because most financial information is maintained in computer systems, the controls over how those systems operate are integral to the reliability of financial data. The components are required to identify, document, and test both general and application controls for key systems that process transactions. General controls are the policies and procedures that apply to all or a large segment of an entity’s information systems and help ensure their proper operation. The objectives of general controls include safeguarding data, protecting application programs, and ensuring continued computer operations in case of unexpected interruptions. For example, general controls include logical access controls that prevent or detect unauthorized access to sensitive data and programs that are stored, processed, and transmitted electronically. Application controls, sometimes referred to as business controls, are incorporated directly into computer applications to help ensure the validity, completeness, accuracy, and confidentiality of data during application processing and reporting. For example, a system edit used to prevent or detect a duplicate entry is an application control. Corrective action plans. The components are required to develop and execute corrective action plans to remediate any deficiencies that indicate that controls are not working and/or transaction amounts are not supported. 
The corrective action plans should include the solutions to be implemented to resolve the deficiencies, the identification of resources required and committed to carry out the solutions, and the targeted milestones and completion dates. Further, the FIAR Guidance states that after corrective actions have been taken, the components should perform additional testing to determine whether the deficiencies were in fact remediated.

The Navy and Air Force did not adequately develop and implement their respective FIPs for Civilian Pay and Military Equipment in accordance with the FIAR Guidance. Our review found similar deficiencies in both FIPs. Specifically, the Navy and Air Force did not conduct sufficient control and substantive testing and reached conclusions that the testing results did not support, did not complete reconciliations of the population of transactions, and did not fully test information systems controls. Also, neither component had fully developed and implemented corrective action plans to address deficiencies identified during implementation of the FIPs. In addition, the Navy did not accurately report the status of certain metrics in the November 2010 FIAR Plan Status Report. As a result of these deficiencies, neither FIP provided sufficient support for the components’ conclusions that the assessable units were ready to be audited. Navy officials stated that they were taking action to address the issues identified and planned to submit a revised FIP by March 2012. Air Force officials also indicated that they were taking action to address the issues identified. In July 2011, Air Force officials provided updates on the status of several actions that were underway or completed but, for the most part, did not provide supporting documentation for our review. Further, the actions they identified did not address all of the deficiencies that we noted.
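The population reconciliation that the FIAR Guidance requires (totaling the transaction-level detail, tying it to the general ledger control balance, and resolving any difference before drawing a sample) can be sketched in a few lines of Python. The system names and amounts below are hypothetical:

```python
# Hypothetical illustration of the FIAR Guidance reconciliation step:
# the transaction-level population must tie to the general ledger balance
# before a test sample is drawn.
import random

payroll_transactions = [  # detail records from a (hypothetical) pay system
    {"id": "T1", "amount": 5_200.00},
    {"id": "T2", "amount": 4_750.00},
    {"id": "T3", "amount": 6_050.00},
]
general_ledger_balance = 16_000.00  # control account balance

population_total = sum(t["amount"] for t in payroll_transactions)
difference = general_ledger_balance - population_total

if abs(difference) > 0.005:
    # Research and resolve the difference before any sampling.
    raise ValueError(f"Unreconciled difference of {difference:,.2f}")

# Only after the population ties to the ledger is a sample selected.
sample = random.sample(payroll_transactions, k=2)
```

The point of ordering the steps this way is that a sample drawn from an unreconciled population cannot support a conclusion about the amounts that actually reach the financial statements.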
The Navy did not adequately develop and implement its FIP for civilian pay in accordance with the FIAR Guidance. Specifically, our review of this FIP found that the Navy did not (1) conduct sufficient control and substantive testing, and reached conclusions that were not supported by the testing results; (2) reconcile the population of transactions recorded in the payroll system to those in the general ledger prior to testing; (3) fully test information systems controls; (4) adequately develop and implement corrective action plans; and (5) accurately assess and report the status of its FIP work in terms of specific FIAR Plan metrics. As such, the FIP documentation did not support audit readiness for civilian pay as asserted by the Navy on March 31, 2010. Both the DOD Comptroller and the DOD OIG found many of the same issues that we did during their reviews of the FIP. Navy officials said that they are performing additional analysis and testing to address identified deficiencies and plan to submit a revised FIP by March 2012. The following paragraphs provide more details about the deficiencies we found in the Navy’s Civilian Pay FIP. Testing Was Insufficient and Did Not Support Conclusions. The Navy did not conduct sufficient internal control and substantive testing for civilian pay, as required by the FIAR Guidance. We found instances in which documentation of the Navy’s testing results was not included in the FIP and other instances in which the documentation was included but did not support the conclusion reached. For example, while the Navy concluded that internal controls were designed and operating effectively, the results of the control testing indicated that 9 of 17 controls tested were ineffective. The Navy reported that its civilian pay was ready for audit because its control environment mitigated the deficiencies of its ineffective controls; however, it did not explain how this was accomplished. 
Based on our analysis and discussions with Navy officials, we determined that the control environment did not mitigate these deficiencies. Both the DOD Comptroller and OIG had similar comments about the control environment. In addition, the Navy concluded that system exception and change reports, which are computerized input and edit controls in the form of reports, were generally operating effectively. However, the detailed test results showed that 5 of 14 reports would not always run properly, and that one report was not running properly because it was producing false positives and negatives (i.e., information that was not always accurate). The FIP indicated that no exceptions were identified as a result of the substantive testing performed. However, during our review of the testing results, we identified substantive exceptions. For example, the Navy was unable to verify the reasonableness of payroll amounts for a sample of employees because of incomplete and missing documentation, such as time and attendance reports and schedules with approved pay rates. Navy officials said that they did not pursue the missing documents because the related control test for this process had failed; therefore, they did not believe the substantive test needed to be completed, nor did they consider the incomplete or missing documents to be substantive exceptions. Navy officials said that they later tested payroll transactions for another sample of employees, but did not retain the documentation because it contained personally identifiable information (e.g., social security numbers). The officials also said that they are performing additional substantive testing and will include that documentation in the revised FIP. In addition, there was no evidence in the FIP that control and substantive tests were performed for personnel benefits (e.g., payments for retirement plans and health insurance).
Navy officials said that they had performed testing of personnel benefits, but that they did not retain the documentation because it contained personally identifiable information. However, as stated earlier, the FIAR Guidance requires that all FIP procedures and the results be fully documented. The officials also said that the additional payroll testing that they are performing includes personnel benefits, and that this will be documented in the revised FIP. Population of Transactions Was Not Reconciled. The Navy did not reconcile the population of transactions for civilian pay prior to conducting testing as required by the FIAR Guidance. Specifically, the Navy did not ensure that it selected samples from the complete population of payroll transactions recorded in the Defense Civilian Pay System (DCPS) because it did not reconcile all DCPS data to the Navy’s general ledger systems. Instead, it used a subset of the payroll transactions to conduct a reconciliation. The Navy stated that the transactions excluded from the reconciliation were immaterial, but the rationale and support for this conclusion were not documented in the FIP at the time the Navy concluded that civilian pay was audit-ready. In addition, the Navy identified discrepancies during its efforts but did not clearly document what these discrepancies were or how they were resolved. For example, the Navy documented in the FIP that there were instances of missing data and high variances, but did not indicate the nature of these issues or how pervasive they were, the actions taken to resolve them, and/or whether the issues were resolved. In response to the concerns we raised, Navy officials said that they subsequently performed a reconciliation using more recent payroll transactions that resulted in insignificant unreconciled discrepancies.
The officials stated that they believed these results were sufficient to ensure that a complete population was identified and that they used this population to select a sample of transactions and are performing detailed testing. Navy officials said that the results of this work will be included in the revised FIP. In addition, the Navy documented in the FIP that it was unable to reconcile the payroll accounts in its general ledgers to the DOD-wide general ledger that is used to generate the components’ and department’s financial statements. In their attempt to perform this reconciliation, Navy officials noted that they were unable to extract reliable payroll data from the DOD-wide general ledger and that the payroll account balance in the DOD-wide general ledger was greater than the total of the account balances in the Navy’s general ledgers. As a result, the Navy was unable to reconcile the population of payroll transactions to the system that ultimately produces its financial statements; in other words, it could not identify the information needed to test the civilian pay amounts included in the financial statements. The Navy officials stated they plan to address these issues as part of the SBR Financial Statement Compilation and Reporting assessable unit, which they plan to have audit-ready by the end of fiscal year 2012. Information Systems Controls Were Not Fully Tested. The FIAR Guidance requires DOD components to test system controls to ensure that they are operating as intended. However, the methodology—the Defense Information Assurance Certification and Accreditation Process (DIACAP)—that the Navy used to assess information systems controls did not address all the elements of a general controls assessment. 
For example, the DIACAP did not address the periodic review of (1) users’ access authorizations or privileges to ensure that they are appropriate based on assigned roles and responsibilities, and (2) automated logs of changes to security access authorizations to ensure that management is aware of any unusual activity. In addition, the Navy did not review the results of general controls assessments to ensure that the assessments covered the overall operating environment in which the systems operated. For example, any general control weaknesses in a mainframe or network that were not included in the scope of the assessment could possibly negate the effectiveness of the controls for the individual system reviewed. Because the Navy did not include all elements of a general controls assessment in its testing, it was unable to demonstrate the effectiveness of its overall general control environment. The Navy reported in the FIP that it relied on a SAS 70 report to obtain assurance over key payroll processing controls (i.e., application controls) for the DCPS. However, that report identified significant weaknesses. Specifically, it indicated that several control activities were found to be ineffective and, as a result, certain control objectives were not met for DCPS. In the FIP, the Navy stated that the particular control activities identified as ineffective in the SAS 70 report did not significantly affect Navy Civilian Pay, but it did not adequately document the rationale for this assessment. Nevertheless, we determined that several of these control activities, and related control objectives, were significant to Navy Civilian Pay and, as such, the Navy did not have reasonable assurance that its personnel and payroll data were complete, accurate, and processed in a timely manner. Corrective Actions Were Not Adequately Developed and Implemented. The Navy’s FIP did not include the information needed for corrective action plans as required by the FIAR Guidance.
For the most part, the FIP did not include (1) the deficiencies to be corrected (the root cause of the exceptions), (2) the solutions to be implemented to resolve the identified deficiencies, (3) the resources needed to carry out those solutions, and (4) a schedule for timely completion of corrective actions. Navy officials stated that they are currently developing corrective action plans; however, their first priority is to develop and implement the revised FIP which will include evidence of audit readiness based on substantive procedures rather than reliance on internal controls. In effect, the Navy plans to assert audit readiness based on its testing of account balances without addressing the identified internal control deficiencies. However, the development and implementation of a corrective action plan to address such deficiencies is a requisite for improving financial management, which is one of the goals of the FIAR effort. FIP Status Was Not Accurately Reported. The FIAR Guidance requires the components to report the status of Key Control Objectives (KCO) and Key Supporting Documents (KSD) for their assessable units. However, the status for these metrics that the Navy reported in the November 2010 FIAR Plan Status Report to demonstrate audit readiness was not supported by the results of the Navy FIP work. For example:

• The Status Report indicated that 95 percent of KCOs pertaining to Navy Civilian Pay were found to be effective; however, the Navy’s control testing results reflected mostly ineffective internal controls.

• The Status Report indicated that 100 percent of KSDs related to Navy Civilian Pay (e.g., documentation evidencing the operation of an internal control such as properly approved time and attendance records) were found to exist. However, as discussed earlier, Navy Civilian Pay testing results indicated that incomplete and missing documentation was one of the more prevalent findings.
Navy officials said that the differences we noted occurred because the KCOs and KSDs required for the FIAR Plan did not exactly match or align with the Navy’s actual work. Therefore, the Navy tried to estimate the appropriate level of progress to report for the KCOs and KSDs listed in the FIAR Plan based on testing that it conducted. In the May 2011 FIAR Plan, the Navy revised its reported progress for these metrics. For example, instead of reporting that 95 percent of KCOs were effective, the May 2011 FIAR Plan indicates that 60 percent of KCOs were effective for Navy’s civilian pay. We did not assess the accuracy of these revised metrics.

The Air Force did not adequately develop and implement its FIP for military equipment in accordance with the FIAR Guidance. See table 3 for the types of Air Force military equipment, and their reported quantities and values. In our review of this FIP, we found that the Air Force did not (1) conduct sufficient control and substantive testing, and reached conclusions that were not supported by the testing results; (2) reconcile the population of transactions recorded in its accountable property system to the general ledger; (3) fully test information systems controls; and (4) adequately develop and implement corrective action plans. As a result of these deficiencies, the FIP documentation did not support the Air Force’s December 2010 assertion that the military equipment assessable unit was ready to be audited. The DOD Comptroller provided initial comments based on its review of the FIP, indicating that it had found issues similar to those we identified, and concluded that the Air Force had not demonstrated audit readiness for its military equipment. In addition, the DOD OIG identified similar issues and concluded that the Air Force had not complied with the FIAR Guidance in developing and implementing this FIP.
Air Force officials acknowledged that they had more work to do to address the identified deficiencies and indicated that they planned to complete these corrective actions by the end of June 2011. In July, the Air Force provided updates on the status of several actions underway or completed but did not provide any supporting documentation for our review. Further, these actions did not address all of the deficiencies that we identified. The following paragraphs provide more details about the deficiencies we found in the Air Force’s FIP. Testing Was Insufficient and Did Not Support Conclusions. As described earlier, DOD components are required to conduct both internal control and substantive testing for each assessable unit. The Air Force did not perform sufficient testing to support audit readiness for the existence and completeness of various types of its military equipment. For aircraft, the Air Force judgmentally selected five sites at which to perform the testing but did not provide evidence that the conditions at these five sites were representative of all Air Force locations. Selecting sites judgmentally could be an acceptable method if the Air Force could demonstrate that the processes and controls at the selected sites were representative of all other locations not tested. For the other four categories of military equipment—ICBMs, RPAs, satellites, and pods—the FIP did not include any documentation of internal control or inventory testing for the existence and completeness of these assets. Instead, the FIP described the routine monitoring activities over these assets that are conducted for operational purposes. With regard to ICBMs, Air Force officials said that the Air Force Audit Agency will be performing inventory testing of ICBMs in fiscal year 2011. Population of Transactions Was Not Reconciled. The Air Force did not reconcile the population of transactions for military equipment prior to conducting testing as required by the FIAR Guidance. 
Specifically, the Air Force did not ensure that it selected testing samples from the complete population of transactions because it did not complete a reconciliation of the military equipment data recorded in its accountable property systems of record to its general ledger. When it compared the data in these systems, it found discrepancies that it did not resolve. For example, there was an unresolved difference of about $2 billion that was largely attributed to differences in both the recorded costs and accumulated depreciation of satellites. We also found that the FIP included documentation that reported different balances for aerospace vehicles. As shown in table 3, the Air Force reported a net book value of $89.5 billion for its aerospace vehicles, which was about $11 billion more than the balance used to perform the reconciliation to the general ledger. Air Force officials said that the balance for aerospace vehicles shown in table 3 is inaccurate and that the balance used to perform the reconciliation is likely more reasonable; however, there was no documentation in the FIP to support this statement. Because of the unresolved reconciling items and the discrepancies in the balances reported for aerospace vehicles, the Air Force does not have assurance that the testing done to determine audit readiness covered the complete population of its military equipment. Information Systems Controls Were Not Fully Tested. The FIAR Guidance requires DOD components to test system controls to ensure that they are operating as intended. However, the FIP did not provide support to indicate that general and application systems controls were operating effectively for the two systems that maintain accountability for the Air Force’s military equipment. For REMIS, the FIP included information about the Air Force’s conclusions regarding the effectiveness of specific controls. 
However, the FIP did not include any evidence of the testing performed that would allow for an independent evaluation of its work. For RAMPOD, the FIP included a list of controls to be tested, but did not provide any conclusions or any other evidence that any testing had been done. The DOD Comptroller expressed concerns similar to ours and, as a result, Air Force officials stated that they would be performing additional testing of general and application controls. In July 2011, Air Force officials reported that this testing was completed but they did not provide supporting documentation for our review. Corrective Actions Were Not Adequately Developed and Implemented. The Air Force had not developed corrective action plans to address all of the exceptions identified during testing, and had not implemented any corrective actions to address these exceptions as of December 31, 2010, when it submitted its FIP, as required by the FIAR Guidance. For example, in reviewing the status of the corrective actions, we found the following:

• Capital Modifications—The Air Force’s testing determined that it did not have the necessary controls in place to ensure that equipment modifications were capitalized when appropriate. The Air Force’s corrective action plan had identified the nature of this deficiency, the solution, the required resources, and targeted milestone. However, the targeted milestone noted in the FIP was January 2011—1 month after the Air Force had indicated that the assessable unit was audit ready. Air Force officials indicated that they did not expect to complete this corrective action until December 2011.

• Accumulated Depreciation—The Air Force’s initial testing identified discrepancies with the accumulated depreciation balance reported in REMIS. The errors included both over- and underdepreciation of assets, which, in some instances, resulted in accumulated depreciation amounts in excess of the acquisition cost of the assets.
However, as of the date it indicated audit readiness, the Air Force had not identified the cause of this problem, the solution, or a time frame for implementation. Subsequent to indicating audit readiness, the Air Force said that it analyzed the issue further and that it had resolved the issue by July 2011. However, based on our review of documentation provided, the Air Force's actions did not fully address this weakness.

• REMIS ICBM Records—The FIP stated that 555 complete ICBMs were recorded in REMIS, but that only approximately 450 complete ICBMs exist at any time. The approximately 100 remaining ICBMs pertain to unassembled missile components which, according to the Air Force, should be classified as operating materials and supplies rather than military equipment. The FIP identified the solution needed to address this deficiency and stated that it must be corrected before military equipment can be ready for audit. However, Air Force officials indicated that they did not expect to complete this corrective action until the fourth quarter of fiscal year 2011, and that testing the effectiveness of the corrective action would be incorporated into the fiscal year 2012 testing efforts.

DOD and its military components have established senior executive committees as well as designated officials at the appropriate levels to monitor and oversee their financial improvement efforts. These committees and individuals have also generally been assigned appropriate roles and responsibilities. (Figure 1 depicts the key organizations and positions involved in the overall FIP process and table 4 in app. II outlines their roles and responsibilities.) According to relevant criteria, monitoring should be performed continually and includes regular management and supervisory activities such as assigning qualified people with the appropriate roles and responsibilities, carrying out assigned oversight duties, and documenting the results of oversight activities.
We found that Navy and Air Force officials as well as oversight committees did not effectively carry out their monitoring responsibilities for the FIPs that we reviewed. However, once the components indicated audit readiness, we found that the DOD OIG and the DOD Comptroller appropriately carried out their responsibilities for reviewing the FIPs. Based on our reviews of the Navy Civilian Pay and Air Force Military Equipment FIPs discussed earlier, the Navy and Air Force officials responsible for monitoring and oversight did not effectively ensure that the FIP work was performed in accordance with the FIAR Guidance. The FIP Directors did not ensure that their respective FIPs provided sufficient evidence to support the conclusions of audit readiness before providing the FIPs to the Assistant Secretaries—Financial Management and Comptroller (FM&C) for their signature. Neither Assistant Secretary ensured that the FIPs were sufficient before signing them to indicate audit readiness. For example, the Air Force’s FIP was signed even though the FIP stated that the deficiency in ICBM reporting, as discussed earlier, “must be addressed before the military equipment line item can be ready for audit.” In addition, committees at both the component and DOD levels did not effectively carry out their responsibilities for FIP oversight. Minutes of the components’ senior executive committee meetings did not indicate that these committees thoroughly reviewed the progress of the FIPs in addressing financial management weaknesses. With respect to the FIPs that we reviewed, the FIAR Governance Board’s activities consisted primarily of receiving status update briefings. Because neither the individual FIP managers nor the oversight committees adequately reviewed and monitored the FIPs, each of the assessable units we reviewed was deemed audit-ready even though the results did not support these conclusions. 
Once the components indicated audit readiness, both the DOD OIG and the DOD Comptroller carried out their responsibilities for reviewing the FIPs. The DOD OIG, which reviews FIPs concurrently with the DOD Comptroller after a component indicates audit readiness, provided comments to the DOD Comptroller on each of the FIPs. It identified many of the same, or similar, issues that we did, as discussed earlier, and concluded that the FIPs did not comply with the FIAR Guidance or demonstrate audit readiness for the assessable units. The DOD Comptroller, which makes the final determination as to whether an assessable unit is ready for audit, also identified issues for the Navy Civilian Pay FIP similar to those discussed earlier and concluded that the Navy had not demonstrated audit readiness for its civilian pay. For the Air Force Military Equipment FIP, the DOD Comptroller provided initial comments indicating that it had found issues similar to those we identified and concluded that the Air Force had not demonstrated audit readiness for its military equipment, but it had not yet issued final comments. Recognizing that additional actions were needed to assist the components in developing and implementing their FIPs, the DOD Comptroller established a quality assurance team in January 2011 to review the components' FIPs as they are being developed and implemented. The intent is for the quality assurance team to provide detailed feedback on the FIPs before they are formally submitted for review and validation. In addition, the DOD Comptroller developed a series of training courses to help component personnel understand and execute the FIAR Methodology. Officials from the DOD Comptroller's Office said that the components need additional training and assistance with their FIPs because they do not necessarily have staff with the appropriate skills and qualifications to adequately carry out the procedures required by the FIAR Guidance.
We believe that the DOD Comptroller's efforts to review the FIPs as they are being developed and implemented, and to provide additional training and ongoing feedback, will improve the FIPs and thus the components' ability to demonstrate that assessable units are audit-ready. When the components report the progress of their FIPs inaccurately and submit FIPs to DOD that do not adequately support audit readiness, DOD—both the Comptroller and the OIG—must use resources to review unreliable or incomplete information, and components must then perform rework to reach audit readiness. Thus, the lack of adequate oversight results in an inefficient FIP process and can impair the ability of components to meet established milestones. DOD's FIAR Guidance provides a reasonable and systematic process that DOD components can follow in their efforts to achieve audit readiness. It establishes clear priorities for the components and a road map for reaching auditability for each assessable unit. However, we found that the components did not adequately carry out the procedures required by the FIAR Guidance for the two FIPs we reviewed. Top managers involved in FIP oversight also did not properly monitor and assess the status of FIP efforts in order to make accurate decisions regarding audit readiness. As a result, both the Navy's and Air Force's conclusions of audit readiness for civilian pay and military equipment, respectively, were unsupported. Both the Navy and the Air Force indicated that they have initiated additional actions to address the identified deficiencies, but they did not provide supporting documentation for us to verify their actions. To achieve departmentwide audit readiness, DOD leaders will need to ensure that the components develop, implement, and document their FIPs in compliance with the FIAR Guidance.
Considering the deficiencies identified in this report as they develop and implement other FIPs can help DOD leaders and the components make better use of resources by minimizing rework. If the DOD components are unable to achieve interim FIAR milestones, DOD will need to consider the effect on its ability to achieve departmentwide audit readiness by September 30, 2017. We are making 13 recommendations to the Secretary of Defense to improve the development, implementation, documentation, and oversight of the department's financial management improvement efforts.

To ensure that the Navy develops and implements its Financial Improvement Plan in accordance with the FIAR Guidance, we recommend that the Secretary of Defense direct the Secretary of the Navy to put procedures in place to help ensure that the Navy's Financial Improvement Plans include documentation that the Navy performed the following:

• Sufficient control and substantive testing.

• A reconciliation of the complete population of transactions for an assessable unit to the relevant general ledger(s) and to the amount(s) reported in the financial statements, including researching and resolving reconciling items.

• An assessment of information systems controls that (1) addresses all relevant critical elements, and (2) for any deficiencies identified in a SAS 70 report that is relied upon, shows that either mitigating controls exist or actions have been taken to address the deficiencies.

• Preparation and execution of corrective action plans to address significant control weaknesses.

• Assessments of the metrics (e.g., key control objectives and key supporting documents) to ensure that they are consistent with, and supported by, testing results.
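The recommended reconciliation of an assessable unit's transaction population to the general ledger can be illustrated with a minimal sketch. The record layout, line items, and dollar amounts below are hypothetical, not taken from the Air Force's systems; the $2 billion residual is chosen only to mirror the magnitude of the unresolved difference discussed earlier. An actual reconciliation would tie out at the transaction level and document every reconciling item.

```python
# Illustrative reconciliation of an accountable property system's
# recorded balances to a general ledger control total. All item names
# and dollar amounts are hypothetical.

def reconcile(property_records, ledger_balance):
    """Sum the system's net book values and return the total along
    with any unresolved difference from the general ledger."""
    system_total = sum(r["net_book_value"] for r in property_records)
    return system_total, ledger_balance - system_total

records = [
    {"item": "aircraft",   "net_book_value": 78_500_000_000},
    {"item": "satellites", "net_book_value":  9_000_000_000},
]
ledger_balance = 89_500_000_000

total, difference = reconcile(records, ledger_balance)
print(f"System total: ${total:,}")                # $87,500,000,000
print(f"Unresolved difference: ${difference:,}")  # $2,000,000,000
```

Until such a difference is researched and resolved, samples drawn from the property system cannot be assumed to represent the complete population reported in the financial statements.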
To ensure that the Air Force develops and implements its Financial Improvement Plan in accordance with the FIAR Guidance, we recommend that the Secretary of Defense direct the Secretary of the Air Force to ensure that the Air Force's Financial Improvement Plans include documentation that the Air Force performed the following:

• Sufficient control and substantive testing.

• A reconciliation of the complete population of transactions for an assessable unit to the relevant general ledger(s) and to the amount(s) reported in the financial statements, including researching and resolving reconciling items.

• An assessment of information systems controls that includes documentation of both the testing and the results.

• Preparation and execution of corrective action plans to address significant control weaknesses.

To ensure that other FIPs from DOD components comply with the requirements in the FIAR Guidance, we recommend that the Secretary of Defense direct the Secretary of the Army and the heads of other DOD components to consider the weaknesses identified in this report when preparing their FIPs.

To improve DOD's monitoring and oversight of FIP activities, we recommend that the Secretary of Defense direct:

• The Co-Chairs of the FIAR Governance Board to ensure that the board carries out its responsibilities for identifying risks that could prevent the department from achieving its goals and ensuring sufficient documentation of FIP assessment results.

• The Secretary of the Navy to ensure that all responsible parties within the Navy, including the Assistant Secretary of the Navy (Financial Management and Comptroller), carry out their responsibilities for ensuring that FIP development and implementation complies with the FIAR Guidance and that the FIP contains sufficient information to indicate audit readiness before it is signed.
• The Secretary of the Air Force to ensure that all responsible parties within the Air Force, including the Assistant Secretary of the Air Force (Financial Management and Comptroller), carry out their responsibilities for ensuring that FIP development and implementation complies with the FIAR Guidance and that the FIP contains sufficient information to indicate audit readiness before it is signed.

We provided a draft of this report to the Secretary of Defense and received written comments from the Under Secretary of Defense (Comptroller), which are reprinted as appendix III. Overall, DOD concurred with 10 recommendations and partially concurred with three, and identified some specific actions that are completed, underway, or planned. DOD commented that its approach of prioritizing its efforts on improving information related to budgetary resources and the existence and completeness of its assets and achieving auditability in those areas has garnered more participation and attention in the FIAR effort than in the past, and DOD described some initiatives underway to speed progress on this effort. DOD recognized that there is room for improvement in implementation of the FIP process by the military components as well as in governance and management of the process. In that regard, DOD concurred with 10 of our recommendations, and said that it is critical to continue to review how DOD applies lessons learned across the department and changes business processes to reflect those lessons. We agree that identifying and incorporating lessons learned into the FIAR process will be an important part of effective implementation, and we look forward to seeing how DOD develops a mechanism to capture and disseminate the lessons. DOD partially concurred with three other recommendations specifically related to the Navy and Air Force FIPs.
DOD explained that the Navy and Air Force FIPs that we reviewed were prepared before issuance of the May 2010 FIAR Guidance and may have proceeded with a strategy that was not sufficiently supported, but that corrective actions are underway. As we discussed with DOD, although the FIPs we selected were initiated prior to issuance of the final FIAR Guidance, the issues we identified are consistent with draft FIAR Guidance as well as standard procedures for conducting a financial statement audit that we found to be incorporated into the final FIAR Guidance. On our recommendation related to improving the Navy's process for reconciling transactions with its general ledgers, DOD partially concurred, but noted that the Navy will be unable to reconcile transaction populations until it completes its Financial Statement Compilation Process. As we report, performing reconciliations is key to properly testing financial statement amounts and therefore should be done prior to asserting audit readiness. On our recommendation related to improving the sufficiency of the Air Force's control and substantive testing, DOD partially concurred, but noted that the Air Force based the extent of its testing procedures on its assessment of the inherent risk of the asset category, stating that ICBMs were not tested due to their limited number and extensive controls for these assets. Although the Air Force's FIP noted that ICBMs are subject to extensive controls, the FIP did not document those controls or any tests conducted to validate the controls. Also, for RPAs, satellites, and pods, no evidence of controls was provided and no testing was done. While the FIAR Guidance allows components to use a substantive approach (versus a controls approach) when there are a limited number of items, it does not permit forgoing testing entirely when an area is significant or material.
Also as we report, the Air Force's basis for its judgmental selection of the locations to test aircraft was inadequate since it did not demonstrate that the processes and controls at the five selected sites were representative of all other locations not tested. In addition to partial concurrence with this recommendation, DOD commented that it will further review this issue and take action as appropriate. On our recommendation related to improving the Air Force's corrective action plans, DOD partially concurred, but noted that when the Air Force asserted auditability it did not believe that the issues being addressed by corrective actions were significant control weaknesses and, therefore, the Air Force was allowed under the FIAR Guidance to assert auditability. We believe that the Air Force reported significant control weaknesses that would have precluded an assertion of auditability; the FIAR Guidance does not allow for auditability assertions when there are unresolved material deficiencies. For example, as we report, the Air Force documented that it was unable to reconcile its population of transactions to its financial statements, and as a result, did not have assurance that the testing done covered the population of military equipment. Also, the Air Force reported errors in its depreciation amounts and that accurate depreciation amounts were "essential" and Air Force military equipment "will not be ready for audit" if not corrected. The Air Force also reported that the cause of errors was "unknown." Although the Air Force subsequently withdrew part of its FIP related to some of these issues, we evaluated the content of the FIP that the Air Force asserted was audit-ready. In addition to partial concurrence with this recommendation, DOD commented that it will further review this issue and take action as appropriate. As agreed with your offices, we plan no further distribution of this report until two days from its date, unless you publicly announce its contents earlier.
At that time, we will send copies of this report to the Secretary of Defense; the Under Secretary of Defense (Comptroller); the Secretary of the Navy; the Secretary of the Air Force; the Deputy Chief Management Officer; the Chief Management Officer of the Navy; and the Chief Management Officer of the Air Force. This report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact Asif A. Khan at (202) 512-9095 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. Our objectives were to determine whether (1) the Financial Improvement and Audit Readiness (FIAR) Guidance provided a reasonable methodology for the Department of Defense (DOD) components to develop Financial Improvement Plans (FIP), (2) the DOD components had adequately developed and implemented selected FIPs in accordance with the FIAR Guidance, and (3) DOD is adequately monitoring and overseeing the FIP process. To address the first objective, we analyzed the FIAR Guidance using relevant criteria such as the GAO and President’s Council on Integrity and Efficiency (PCIE) Financial Audit Manual (FAM); the Federal Information System Controls Audit Manual (FISCAM); and Office of Management and Budget (OMB) Circular No. A-123, Management’s Responsibility for Internal Control, Appendix A, Internal Control over Financial Reporting. The FAM provides a methodology to perform financial statement audits of federal entities in accordance with professional auditing and attestation standards and OMB guidance. The FISCAM provides a methodology for performing information system control audits in accordance with generally accepted government auditing standards. The OMB Circular No. 
A-123 provides guidance to Federal managers on improving the accountability and effectiveness of Federal programs and operations by establishing, assessing, correcting, and reporting on internal control. Appendix A of OMB Circular No. A-123 provides a methodology to assess and report on agencies’ internal controls over financial reporting. We also interviewed agency officials at the Office of the Under Secretary of Defense (Comptroller) (OUSD(C)) FIAR Directorate’s office, which developed the FIAR Guidance, and at the Navy and Air Force to obtain explanations and clarifications as a result of our analysis of selected FIPs. To address the second objective, we selected FIPs for two assessable units (Navy Civilian Pay and Air Force Military Equipment) that were scheduled to assert audit readiness in 2010 and were within wave 2 (i.e., Statement of Budgetary Resources) and wave 3 (i.e., Existence and Completeness of Mission Critical Assets) since these waves reflect DOD’s priority focus areas. Using the FIAR Guidance, we analyzed the documentation included in the FIPs, such as process flows, control assessments, test plans, test results, and corrective action plans. We did not perform separate audit procedures to assess the effectiveness of the controls or the completeness or accuracy of the Navy civilian pay amounts or the Air Force military equipment. We interviewed the Navy and Air Force FIP directors to obtain explanations and clarifications as a result of our evaluation of the documentation. To address the third objective, we analyzed relevant documentation, such as the FIAR Guidance, FIAR Plan Status Reports, and committee charters and meeting minutes, to identify the entities and officials responsible for monitoring and oversight as well as their roles and responsibilities. 
We also interviewed officials who play a key role in the monitoring and oversight process, such as Army, Navy, and Air Force officials from the offices of Financial Management and Comptroller and the Deputy Chief Management Officers, and DOD officials from the Office of the Under Secretary of Defense (Comptroller), to clarify our understanding of these entities' and officials' roles and responsibilities. We then analyzed this information using elements of monitoring discussed in the FAM; the Implementation Guide for OMB Circular No. A-123, Appendix A; Standards for Internal Control in the Federal Government; the Internal Control Management and Evaluation Tool; the COSO Guidance on Monitoring Internal Control Systems; and the FIAR Guidance. We conducted this performance audit from May 2010 to September 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The following table summarizes the roles and responsibilities of key individuals and committees involved in monitoring and overseeing the FIP process. In addition to the contact named above, the following individuals made key contributions to this report: Abe Dymond, Assistant Director; Paul Foderaro, Assistant Director; J. Mark Yoder; Kristi Karls; Michael Bingham; Carmen Harris; Adrienne Walker; David Yoder; Jason Kelly; and Jason Kirwan.
The Department of Defense (DOD) has been required to prepare audited annual financial statements since 1997 but to date, has not been able to meet this requirement. The National Defense Authorization Act for Fiscal Year 2010 mandated that DOD be prepared to validate [certify] that its consolidated financial statements are audit-ready by September 30, 2017. In May 2010, DOD issued its Financial Improvement and Audit Readiness (FIAR) Guidance to provide a methodology for DOD components to follow to develop and implement their Financial Improvement Plans (FIPs) for achieving audit readiness. A FIP is a framework for planning and tracking the steps, and the supporting documentation, needed to achieve audit readiness. GAO was asked to assess the FIP methodology provided in the FIAR Guidance, the development and implementation of selected components' FIPs, and DOD's monitoring and oversight of the FIP process. To do this, GAO analyzed the FIAR Guidance, reviewed two selected FIPs--Navy Civilian Pay and Air Force Military Equipment--and reviewed relevant documentation and interviewed DOD and component officials. The FIAR Guidance provides a reasonable methodology for DOD components to use in developing and implementing their FIPs. The Guidance details the roles and responsibilities of the DOD components, and prescribes a standard, systematic process to follow to assess processes, controls, and systems. Overall, the procedures required by the FIAR Guidance are consistent with selected procedures for conducting a financial audit, such as testing internal controls and information system controls. The Guidance also requires components to take actions to correct any deficiencies identified during testing and document the results. DOD's ability to achieve departmentwide audit readiness is highly dependent on its military components' ability to effectively develop and implement FIPs in compliance with the FIAR Guidance.
The Navy and Air Force did not adequately develop and implement their respective FIPs for Civilian Pay and Military Equipment in accordance with the FIAR Guidance. GAO found similar deficiencies in both FIPs. For example, internal controls and information systems controls were not sufficiently tested or documented, and conclusions reached were not supported by the testing results. In addition, neither component had fully developed and implemented corrective action plans to address deficiencies identified during implementation of the FIPs. As a result, the FIPs did not provide sufficient support for the Navy's and Air Force's conclusions that Civilian Pay and Military Equipment were ready to be audited. DOD and its military components have assigned appropriate oversight roles and responsibilities for their financial improvement efforts to senior executive committees and designated individuals. However, neither oversight committees nor Navy and Air Force officials effectively carried out their oversight responsibilities for the two FIPs, which did not support the components' conclusions of audit readiness. However, once the components indicated audit readiness, both the DOD Office of Inspector General and the Under Secretary of Defense (Comptroller) performed reviews and concluded that the FIPs did not comply with the FIAR Guidance and did not demonstrate audit readiness. The lack of adequate oversight results in an ineffective FIP process and can impact the ability of components to meet established milestones. If the components are unable to achieve interim milestones, DOD will need to consider how these factors could affect its ability to achieve departmentwide auditability by the end of fiscal year 2017. GAO recommends that the Secretary of Defense take various actions to improve the development, implementation, documentation, and oversight of DOD's financial management improvement efforts.
DOD generally concurred with the recommendations and commented on actions being taken to implement them.
Land exchanges—trading federal lands for lands owned by corporations, individuals, or state or local governments that are willing to trade—are used by federal land management agencies, such as the Park Service, as a tool for acquiring nonfederal land and disposing of federal land. The Potomac Yard exchange was conducted under the Park Service’s land exchange authority. Under this authority, the Park Service may convey federal land (or interests therein) over which it has jurisdiction and that it deems suitable for exchange or other disposal, and the agency may acquire nonfederal land (or interests therein) that lies within park boundaries or areas under park jurisdiction. Exchanged lands must be located in the same state and be approximately equal in value; if their values are not approximately equal, then the difference may be eliminated with a cash payment. Potomac Yard lies in northern Virginia near Ronald Reagan Washington National Airport. According to the appraisals, it covers about 380 total acres: about 290 acres in the city of Alexandria and about 90 acres in Arlington County. Figure 1 identifies the location of Potomac Yard and the parcels involved in the exchange. The Park Service’s involvement with Potomac Yard began with the establishment of the Parkway in 1930. In 1938, the Department of the Interior and the developer agreed to exchange interests in several Potomac Yard parcels that both parties claimed to own, including the Arlington parcel. As part of this agreement, the developer retained title to the Arlington parcel while Interior obtained a legal restriction—referred to as an indenture—that prohibited the developer from using the parcel for any purpose other than a rail yard. In 1970, Interior’s Park Service and the developer agreed to exchange interests in land adjacent to the Parkway, including the Alexandria parcel. 
As part of this agreement, the Park Service gave the developer the right to access the Alexandria parcel from the Parkway; without this access right, the parcel’s potential development would have been limited by the access provided by existing roads. In return, the developer gave the Park Service a commitment to build an interchange—bridge, ramps, and connections— on the Parkway that would provide this access. Since then, the developer has proposed several redevelopment options for Potomac Yard, which have at times been contentious. For example, in 1987 the developer filed a development plan for the Alexandria parcel consisting of about 2.5 million square feet (mmsf) of commercial development—office and retail space. When the city did not approve the plan, the developer filed a lawsuit and in 1991 obtained a court order directing the Alexandria City Council to approve the plan (with slight modification). In 1992, the city created its own plan for Potomac Yard, specifying residential development as the Alexandria parcel’s sole use. Over the next few years, the developer discussed options with Alexandria and others and did not proceed with the court-ordered development. In 1997, the Park Service and the developer informally agreed to the general framework of the Potomac Yard exchange—that is, they agreed to the interests that would be exchanged but not the values of those interests—and at about the same time, the city and the developer agreed to a predominantly residential development on the Alexandria parcel. After these agreements were reached, the developer asked the court to vacate the 1991 order, stating that it preferred the development allowed by the city to that allowed under the order. In 1998, the Park Service and the developer signed the preliminary exchange agreement; however, the two sides did not reach agreement on the values of the land interests until after the exchange appraisals were completed in 1999. 
Federal appraisal standards were established in 1973 to promote uniformity in the appraisal of real property among the various agencies acquiring property—both by direct purchase and condemnation—on behalf of the federal government. The standards require that land (or land interests) acquired by the federal government be appraised at fair market value. According to the standards, fair market value is defined as the amount for which a property would be sold—for cash or its equivalent— by a willing and knowledgeable seller with no obligation to sell, to a willing and knowledgeable buyer with no obligation to buy. Determining the fair market value requires an appraiser to first identify the property’s “highest and best use,” which is defined as the use that is physically possible, legally permissible, financially feasible, and maximally profitable for the owner. When the federal government acquires a partial or restrictive land interest—such as an indenture or building restrictions—federal appraisal standards show preference for using a “before and after” method to value the interest. In this method, an appraiser estimates the value of the whole property before the transaction and reduces it by the value of the property remaining in private ownership after the transaction is completed. The resulting value becomes the interest’s estimated fair market value. The standards explicitly allow for the application of professional judgment in the development of a fair market value estimate. 
According to the standards: “The appraiser should not hesitate to acknowledge that appraising is not an exact science and that reasonable men may differ somewhat in arriving at an estimate of the fair market value.” The Congressional Budget Office reiterated this assessment in a 1998 study, calling real estate appraisals “a mix of science and art.” The Park Service has policies and procedures for land exchanges that include obtaining appraisals to value all federal and nonfederal land involved in an exchange. The Park Service’s policies for appraisals require an agency appraiser to review the appraisals to ensure that the reported value estimate is reasonable and based on sound valuation concepts. The appraisals incorrectly valued the land interests that were exchanged because they relied upon unrealistic assumptions when valuing one parcel and provided an inadequate assessment of the other. Our analysis of the appraisals indicates that the developer could have owed the Park Service more than $15 million rather than the Park Service owing the developer $14 million, as summarized in appendix II. For the Alexandria parcel, the appraiser was instructed to assume a high level of commercial development in assessing the impact of the development restriction imposed by the Park Service; as a result, the appraiser determined that the developer would incur a loss of $26.6 million. In addition, the appraiser was instructed to use a cost of $8.5 million for the interchange—a figure agreed to by the Park Service and the developer—rather than appraise it. Because this appraisal assumed a level of development that was not shown to be reasonably probable, our review appraiser determined that it did not conform to federal appraisal standards. 
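The "before and after" method described earlier reduces to a single subtraction: the acquired interest is valued at the difference between the unrestricted property and the restricted remainder. The sketch below is purely illustrative; the before and after component values are hypothetical, chosen only so that their difference matches the $26.6 million loss the appraiser reported for the Alexandria restriction, and they do not reflect the appraisals' actual figures.

```python
# "Before and after" valuation under federal appraisal standards: the
# acquired interest equals the property's value before the transaction
# minus the value of what remains in private ownership afterward.

def interest_value(before_value, after_value):
    """Estimated fair market value of the acquired partial interest."""
    return before_value - after_value

# Hypothetical component values -- only the difference corresponds to
# the $26.6 million figure reported for the development restriction.
restriction_value = interest_value(40_000_000, 13_400_000)
print(f"${restriction_value:,}")  # $26,600,000
```

Because both component estimates turn on the assumed highest and best use in each scenario, an inflated "before" development assumption directly inflates the value attributed to the restriction.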
For the Arlington parcel, the appraiser did not consider all of the additional costs the developer would have faced, had the indenture stayed in place, or the additional development opportunities that would have resulted from the indenture’s removal. As a result, he undervalued the indenture at $6.5 million; however, the appraisal did not provide enough information for us to reliably estimate the indenture’s value. Furthermore, the appraiser determined that the restrictions would have caused a $2.4 million loss to the developer, even though zoning ordinances already restricted development. Despite these problems, our review appraiser determined that the Arlington appraisal conformed to federal appraisal standards. The Park Service and the developer jointly instructed the appraiser to reach an appraised value of the Park Service’s development restriction— which limited development to residences and neighborhood retail uses— by (1) estimating the value of the parcel without the restriction and assuming a high level of development (the “before” value), (2) estimating the value of the parcel with the restriction and assuming a lower level of development (the “after” value), and (3) calculating the difference. The instructions also provided the appraiser specific levels of development to use in this calculation: the directed high level of development was 1.5 mmsf of office space, 25,000 square feet of retail space, and 232 townhomes; the directed low level of development was no office space, 10,000 square feet of retail space, and 200 townhomes. According to federal appraisal standards, an appraiser must develop an opinion of the best use for the property being appraised in each scenario. Furthermore, in determining fair market value, appraisers must show that the “before and after” scenarios are legally permissible and reasonably probable. 
In other words, there must be good reason to assume that the development could be built under current restrictions (such as zoning) or a high probability that the restrictions would be changed. For the Alexandria parcel, the instructions directed the appraiser to assume a high level of development in the “before” scenario, stating: “Development scenarios presented must be assumed by the appraiser to be physically feasible and legally permitted.” However, as the developer stated in a letter to the Park Service prior to issuance of the instructions, this high level of development “is not . . . in compliance with existing zoning regulations, and is not currently ‘legally permissible.’” The developer further noted that during the course of negotiations with the Park Service, the developer “made certain” of this by asking the court in 1997 to vacate the order directing the city to approve about 2.5 mmsf of development. Nevertheless, both the Park Service and the developer believed that the assumed high level of commercial development was appropriate. The Park Service, in a March 2000 letter responding to questions from the Chairman of the House Committee on Resources, wrote that the developer’s right to access the parcel from the Parkway provided a sound basis for the assumption that the parcel could physically support intensive development. In addition, the Park Service noted that the assumed high level of development (about 1.5 mmsf) represented a significant reduction from the court-ordered level (about 2.5 mmsf) and must be considered as a viable alternative, even though the court order was no longer in effect. The Park Service further noted that the potential availability of commuter rail facilities to serve the parcel supported the conclusion that a high-density development was the highest and best use. 
Similarly, the developer told us that the assumed high level of development was reasonable because it was less development than had been specified under the court order. The developer indicated that both parties to the exchange stipulated the assumed high development level as one of the primary principles of the exchange framework. Following the instructions to use the assumed development levels, the appraiser did not evaluate whether the current zoning restrictions, which allow only residential development, might be altered to allow the “before” scenario’s high level of commercial development. The appraiser determined that the value of the Park Service’s restriction on the parcel’s development resulted in a loss of $26.6 million to the developer, which is the difference in the value of the “before” development ($31.7 million) and the “after” development ($5.1 million). However, our review appraiser found that the appraisal did not show that the “before” development had a reasonable probability of being built and concluded that the market would not pay a premium for the possible increment if the probability of rezoning were low. If the restriction did not diminish the development that would have reasonably occurred on the parcel, it would have no market value and the Park Service should not have given the developer any credit for it. The Park Service also obtained a no-development restriction on a 15-acre portion of the parcel adjacent to the Parkway. The appraiser determined— and our review appraiser agreed—that the restriction had no market value because a prior restriction under Virginia state law already precluded construction on the 15 acres. Therefore, the building restriction had no material impact on the developer. The Park Service’s chief appraiser determined that the appraiser’s methodology was reasonable, concluded that the Alexandria appraisal met the federal appraisal standards, and approved it for Park Service’s use. 
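The restriction-value arithmetic reported above can be reproduced directly (amounts in millions of dollars, as stated in the appraisal):

```python
# Alexandria appraisal's restriction-value arithmetic, as reported above
# (amounts in millions of dollars).
value_before = 31.7  # parcel value under the directed high level of development
value_after = 5.1    # parcel value with the Park Service restriction in place
loss_to_developer = round(value_before - value_after, 1)
print(loss_to_developer)  # 26.6
```

As discussed above, this figure depends entirely on the "before" development level that the appraiser was instructed to assume; if that level of development was not reasonably probable, the difference has no market value.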
However, our review appraiser determined that the appraisal did not conform to all federal appraisal standards because it did not analyze the reasonableness of the “before” level of development. Furthermore, the appraisal did not clarify that a value based on an unrealistic level of development might differ from the fair market value of the property. Our review appraiser concluded that the instructions provided by the Park Service and the developer for the “before” development scenario ultimately led to the appraisal’s not conforming to federal appraisal standards because of issues related to the reasonableness of the highest and best use development level prescribed by the instructions. As part of the exchange, the developer bought out its 1970 commitment to the Park Service to construct an interchange on the Parkway, for $8.5 million. This figure is an estimate of the cost of constructing the interchange—it is not an estimate of market value that was prepared by the appraiser. The Federal Highway Administration was asked by the Park Service to prepare an initial estimate of the construction costs and determined them to be $12 million; an engineering firm hired by the developer revised this estimate, using different assumptions, to $8.5 million. The Park Service and the developer agreed to use this figure in the exchange before they sought the appraisals and then instructed the appraiser to use this figure to calculate the total amount owed to the Park Service in the exchange, by adding it to the appraised value of the Arlington parcel’s indenture. Our analysis also includes this figure as an amount owed to the Park Service. The appraiser faced two valuation determinations for the Arlington parcel. He needed to determine (1) the value of a restriction (indenture) owned by the Park Service that precluded office/retail/residential development on the parcel and (2) the value of other development restrictions imposed by the Park Service once the indenture was lifted. 
The appraiser applied the “before and after” methodology when making these valuation calculations. Unlike the Alexandria parcel, the appraisal instructions for the Arlington parcel did not provide the appraiser levels of development to use in his analysis of the indenture’s fair market value and the restrictions. Instead, the appraiser relied on his professional judgment to estimate the levels of development likely to occur with or without the indenture, and with or without the other development restrictions. In estimating the Arlington parcel’s “before” value—with the indenture lifted—the appraiser determined that the developer could reasonably obtain new zoning that would allow the construction of about 1.9 mmsf of office space on the parcel. In estimating the “after” value—with the indenture in place—the appraiser noted that Arlington County allows developers to shift development density from one parcel to another. The appraiser determined that, had the indenture remained in place, the developer would have pursued this option and the county would have allowed the shift of 1.9 mmsf from the Arlington parcel to a smaller adjacent parcel (which was vacant and zoned for about 1.1 mmsf of office space). This shift would have resulted in the development of 3.0 mmsf on the adjacent parcel, and the appraiser determined that this parcel would likely have been rezoned to accommodate the additional development. Therefore, the appraiser found that the developer could have constructed the 1.9 mmsf of development associated with the Arlington parcel with or without the indenture in place. The appraiser calculated the indenture’s value as the difference in the estimated cost of (1) building the 1.9 mmsf of office space on the Arlington parcel and (2) shifting this same square footage and combining it with 1.1 mmsf of office space on the adjacent parcel. 
The appraiser estimated that the developer would have had to spend $6.5 million more to construct the 1.9 mmsf on the adjacent parcel than on the Arlington parcel, to provide another level of underground parking to accommodate the additional development on the smaller parcel. While our review appraiser agreed that the appraisal’s methodology was reasonable, he disagreed with the appraiser’s conclusion that the indenture’s value should be this single estimated cost. The developer would have likely incurred additional costs had it shifted the 1.9 mmsf to the adjacent parcel. We found three main reasons why the appraisal understated the value of the indenture: According to the appraisal instructions, the adjacent parcel contained 17 acres; the appraiser assumed that all 17 acres were available for development. Our review appraiser examined the developer’s site maps and determined that only 10 acres of the adjacent parcel were available for development, because 4 acres fell under railroad rights-of-way and another 3 acres were not contiguous. Shifting 1.9 mmsf of density to the remaining 10-acre site would have likely cost the developer more than the appraiser’s $6.5 million estimate. For example, the appraiser determined that the developer would have to build three levels of underground parking, assuming that 98 percent of the 17-acre parcel were excavated; however, information in the appraisal indicates that five levels of underground parking would be needed if 98 percent of the 10-acre site were excavated. The appraiser told us that each additional level of underground parking increases the construction costs substantially over previous levels because of the additional expenses associated with building and strengthening the garage walls. Our review appraiser noted that—while the appraisal’s cost concepts as applied to parking may exceed the detail the market would know or care about—the more severe the excavation needs to be, the more the market would consider it. 
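The parking-levels point can be checked with a rough scaling argument: if the total underground parking volume needed is held fixed, the number of levels scales inversely with the excavated footprint. This is our simplification for illustration, not the appraisal's own calculation.

```python
# Rough check: holding the required underground parking volume fixed,
# levels needed scale inversely with the excavated footprint.
# (A simplification for illustration; not the appraisal's own method.)
levels_assumed = 3               # appraiser: three levels on the 17-acre parcel
footprint_assumed = 17 * 0.98    # acres excavated under the appraiser's assumption
footprint_actual = 10 * 0.98     # only 10 acres were actually developable
levels_needed = levels_assumed * footprint_assumed / footprint_actual
print(round(levels_needed, 1))   # about 5 levels, consistent with the appraisal
```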
The appraiser noted that the value of the 3.0 mmsf would likely decrease if the developer were unable to build on the full 46 acres, because higher- density projects generally take longer to sell and obtain lower prices per square foot. He also noted that any deficit could be offset by the lower costs of concentrating the infrastructure needed for the 3.0 mmsf on the smaller site. However, because the area actually available to accept the additional 1.9 mmsf is about 40 percent smaller than he assumed—10 acres rather than 17—the resulting development would have been significantly more dense than he determined. Our review appraiser noted that putting all of this density on the smaller site raises questions of feasibility. At a minimum, we believe that the potential lower revenues from such a dense development could eclipse any potential cost savings that the developer might realize. The appraiser did not include the impact on the indenture’s value of the developer’s shifting an additional 1.1 mmsf of development density to the Arlington parcel and the adjacent parcel. A 1993 agreement between the developer and Arlington County allows this shift if the Park Service indenture is lifted. The appraiser reviewed the agreement and found it to be weak; interviewed the county’s planning director, who suggested that the county was unlikely to approve this shift (which would result in a mixed-use development of more than 4.0 mmsf) owing to traffic-related issues; and reported finding no reliable evidence to conclude that this shift would take place. In our view, the possibility of this density shift is at least as strong as the possibility of shifting density from the Arlington parcel to the adjacent acreage, because an existing legal agreement with the county allows it. If the indenture remained in place, the developer would lose the option of shifting the 1.1 mmsf. 
Because the developer cannot shift this density without the removal of the indenture, the appraiser should have assigned some additional value to the indenture. Considering these factors, we agree with our review appraiser’s conclusion that the indenture should have been assigned a higher value than $6.5 million. However, the appraisal does not provide enough information for us to reliably estimate the indenture’s value. As a condition of having the indenture lifted, the developer agreed to restrict its construction on six areas within or adjacent to the parcel by one or more of the following factors: (1) building height; (2) proximity to the Parkway; and (3) type of building (for example, office, retail, or residential). In valuing these restrictions—land interests the Park Service would acquire in the exchange—the appraiser determined that restrictions in four areas would limit the amount and type of development that could be built, and that restrictions in the other two areas would not affect the developer since zoning already limited building heights. The appraiser estimated the potential economic impact of the restrictions—that is, reductions in market values—in each of these four areas, tallied these reductions, and concluded that the restrictions would have caused a $2.4 million loss for the developer. Our analysis shows that the appraisal overvalued the restrictions. The appraiser valued the areas as if they were six separate parcels, rather than analyzing the financial impact of the restrictions on the entire parcel’s value, as required by federal appraisal standards. Our review appraiser further determined that the building restrictions on the Arlington parcel, in aggregate, had no detrimental impact on the parcel’s value because zoning and other building restrictions limiting development predated the Park Service-imposed restrictions. 
Our review appraiser noted that the restrictions had the potential to affect a project’s design or placement but concluded that they would probably have no impact, in aggregate, on the development. Thus, the building restrictions would not have resulted in a loss for the developer because they did not diminish the development opportunities available on the parcel and adjoining acres. Therefore, the restrictions should have been assigned no value. The Park Service’s chief appraiser determined that the appraiser applied a reasonable methodology, concluded that the Arlington appraisal met the federal appraisal standards, and approved it for Park Service’s use. Although the appraisal understated the value of the indenture and overstated the value of the Park Service’s restrictions, our review appraiser agreed that the appraisal conformed to federal standards. Although the Potomac Yard exchange helped to resolve years of dispute when it was completed in March 2000, the Park Service could have received more than $15 million from the developer—rather than owing the developer $14 million—if the exchanged interests had been appropriately valued. As a federal agency, the Park Service has a responsibility to protect federal taxpayers’ interests when it acquires or conveys land interests. However, the Park Service did not do so when it instructed the appraiser to derive a value for development on the Alexandria parcel that was not shown to be reasonably probable, or when it used an appraised value on the Arlington parcel that understated the worth of the Park Service’s interests. Consequently, the Park Service gave the developer credit for losses that might not have realistically occurred and did not receive enough credit for allowing the developer to develop the Arlington parcel. 
However, the transaction is now fully executed and—as in similar situations when a government agency pays too much for an item under a contract—it is unlikely that the Park Service can recover any funds. We provided the Department of the Interior with a draft of this report for its review and comment and, in light of the then-pending lawsuit, also provided a copy to the Department of Justice. Interior expressed three main concerns in its response; Justice did not provide comments. First, Interior commented that the report discussed issues that were the subject of ongoing litigation and that GAO’s policy is to avoid addressing matters pending in litigation. Our report disclosed that a competing developer filed a lawsuit protesting the exchange and, for this reason, lawyers representing the Park Service and the developer advised representatives from the agency and the developer not to meet with us during our review. While GAO is aware of the sensitivity of addressing issues in litigation, we decide whether to continue our involvement on a case-by-case basis. In this case, the Chairman of a congressional committee with jurisdiction expressed serious concerns about whether the exchanged land interests had been appropriately valued, and we proceeded with our review consistent with our authorities and responsibilities to support the Congress and the flexibility inherent in our policies. The court dismissed the lawsuit for lack of standing in February 2001. Second, Interior disagreed that the taxpayers’ interest was not well protected in the exchange and asserted the exchange received overwhelming public and local support. We acknowledge that the exchange helped to resolve long-standing and contentious community development issues in Alexandria and Arlington, and that for this reason the local governments and other parties supported the exchange. However, our review focused on the appraised values of the exchanged land interests. 
Our review found that the appraisals incorrectly valued the exchanged interests, and we concluded that the Park Service could have received more than $15 million in the exchange rather than owing the developer $14 million. Had these funds been received, federal taxpayers could have benefited more from the exchange than they did. We clarified the report title to reflect our emphasis on federal rather than local interests. Third, Interior questioned the reliability of the report because, in Interior’s view, it relies exclusively on the work of a review appraiser who in effect conducted a reappraisal of the Alexandria parcel’s development restriction even though he is not licensed in Virginia and did not inspect the property. Interior stated that it firmly believes the Alexandria appraisal is sound under applicable law and appraisal standards. We disagree with Interior’s assertions. Our report is reliable and based on information that we verified with the Park Service, the developer, the developer’s lawyers, and the appraiser. Specific points follow: Our report incorporates the results of a desk review of the appraisals, but it does not rely exclusively on these results. Our staff independently read and analyzed the appraisals, federal appraisal standards, and other relevant documents, to develop this report; this review is the most recent of several of our examinations of appraisals used in federal land transactions. We clarified the report to better distinguish our review appraiser’s work and our analysis. The appraiser we contracted with to conduct a desk review has over 40 years’ experience as a professional appraiser; he is affiliated with the Appraisal Institute and other professional real estate associations; and he is licensed as a certified general appraiser in the state of Colorado. 
Although under Virginia law, persons who appraise properties within Virginia must be licensed by the state, this law does not apply to those who provide consulting services that are not appraisals. The appraiser we contracted with conducted a desk review and did not reappraise either parcel. Appraisers conducting a desk review are not expected to visit the property that was appraised. In our review appraiser’s desk review of the Potomac Yard appraisals, he checked the arithmetic, considered the appropriateness of the methods and techniques used, and evaluated the reasonableness of the conclusions reached. Regarding the Alexandria parcel’s development restriction, our review appraiser found that the appraised value was not based on a credible analysis because there was no determination that a reasonable probability existed that the parcel would be rezoned to accommodate the high level of commercial development that was assumed in the “before” scenario. For this reason, our review appraiser concluded that the Alexandria appraisal did not conform to all federal appraisal standards and likely overstated the fair market value of the development restriction. The full text of Interior’s letter is in appendix III. We conducted our review from April 2000 through February 2001 in accordance with generally accepted government auditing standards. Details of our scope and methodology are discussed in appendix I. As requested, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time we will send copies to the Honorable Gale A. Norton, Secretary of the Interior, and the Honorable Denis Galvin, Acting Director of the National Park Service. We will also send copies to other appropriate congressional members and make copies available to others upon request. If you or your staff have any questions about this report, please call me at 202-512-3841. 
Key contributors to this report are listed in appendix IV. To examine whether the appraisals the National Park Service used for the Potomac Yard land exchange appropriately valued the interests that were exchanged, we reviewed (1) documents associated with the exchange, (2) the Alexandria and Arlington appraisals, (3) the Park Service’s review of those appraisals, and (4) federal appraisal standards. Because of a pending lawsuit filed by another developer protesting this exchange, the U.S. Attorney’s Office advised the Park Service in June 2000 that its personnel should not agree to interviews with us. A Park Service representative did give us a tour of the Alexandria and Arlington parcels and responded to our questions about the sites; however, we were unable to interview representatives from the Park Service during most of our review. Similarly, representatives of the developer told us in June 2000 that they had also been advised not to agree to interviews with us. However, after certain documents had been filed in the lawsuit, the developer initiated a meeting with us in October 2000. The appraiser hired by the developer agreed to talk with us, and we met with him to better understand the appraisals and his analyses. After we completed our fieldwork, we met with Park Service officials, the developer, the developer’s lawyers, and the appraiser to verify the factual accuracy of the data we obtained; we then provided the Park Service and the Justice Department a draft of this report for review. The court dismissed the lawsuit for lack of standing in February 2001. Additionally, we contracted with Mr. Peter D. Bowes—an independent and certified appraiser in Denver, Colorado, who has over 40 years of experience in appraising properties such as vacant urban land and has worked with various government entities—to conduct a desk review of the appraisals. 
His review included his professional opinion on whether the appraisals appropriately valued the land interests that were exchanged and complied with federal appraisal standards. A desk review is not a reappraisal, and he did not visually inspect either the Alexandria or the Arlington parcel. In addition, he did not confirm details contained in the appraisal reports, add new data, or talk with the Park Service or the developer. We performed our work from April 2000 through February 2001 in accordance with generally accepted government auditing standards. In the Potomac Yard land exchange, the appraiser reported that the Park Service owed the developer $14 million (which the developer waived). As shown in table 1, this figure represented the difference between (1) the appraised values of the development opportunities given up by the developer (totaling $29 million) and (2) the estimated cost of the interchange (a figure that was not appraised) and the appraised value of the indenture given up by the Park Service (totaling $15 million). In our view, the appraisals overestimated the value of the development opportunities given up by the developer—by as much as the full appraised values—and underestimated the value of the indenture—by an amount we could not determine. As shown in table 2, the developer could have owed the Park Service more than $15 million, including the $8.5 million interchange figure. In addition, Jay Cherlow, Doreen Feldman, Susan Irwin, Diane Lund, Jonathan S. McMurray, Cheryl Pilatzke, Susan Poling, Carol Herrnstadt Shulman, and Dan Williams made key contributions to this report.
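The arithmetic behind the tables discussed above can be reproduced directly, using the component figures reported earlier: the $26.6 million Alexandria restriction plus the $2.4 million Arlington restrictions ($29 million in developer credits), and the $8.5 million interchange figure plus the $6.5 million indenture ($15 million owed to the Park Service). All amounts are in millions of dollars.

```python
# Exchange tally as appraised (amounts in millions of dollars).
developer_credits = 26.6 + 2.4   # appraised development opportunities given up = 29.0
interchange_buyout = 8.5         # agreed cost figure for the interchange (not appraised)
indenture_value = 6.5            # appraised value of the indenture the Park Service gave up
owed_to_developer = developer_credits - (interchange_buyout + indenture_value)
print(round(owed_to_developer, 1))  # 14.0 -- the amount the developer waived

# GAO's view: the claimed developer losses may have had no market value, and
# the indenture was undervalued, so the developer could have owed the Park
# Service more than this floor.
owed_to_park_service_floor = interchange_buyout + indenture_value
print(owed_to_park_service_floor)  # 15.0
```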
Settling 30 years of sometimes acrimonious dispute, the National Park Service completed an exchange of land interests on two vacant parcels of land in Potomac Yard in March 2000. However, the Park Service could have received more than $15 million from the private developer--rather than owing the developer $14 million--if the exchanged interests had been appropriately valued. As a federal agency, the Park Service has a responsibility to protect federal taxpayers' interests when it acquires or conveys land interests. Yet, the Park Service did not do so when it instructed the appraiser to derive a value for development on the Alexandria parcel that was not shown to be reasonably probable, or when it used an appraised value on the Arlington parcel that understated the worth of the Park Service's interests. Consequently, the Park Service gave the developer credit for losses that might not have realistically occurred and did not receive enough credit for allowing the developer to develop the Arlington parcel. However, the transaction is now fully executed and--as in similar situations when a government agency pays too much for an item under a contract--it is unlikely that the Park Service can recover any funds.
Although federal funding may be made available for obligation for one year, multiple years, or until expended (no-year), only accounts with multi-year or no-year funds may carry over amounts that remain legally available for new obligations from one fiscal year to the next. Carryover balances are composed of two elements: (1) unobligated funds and (2) obligated funds for which payment has not yet been made. While attention is focused on the unobligated portion, looking at both elements provides a fuller reflection of what is happening in an account. It also provides insights into opportunities for potential budgetary savings. Federal spending is categorized as “discretionary” or “mandatory.” This distinction refers to the way budget authority is provided. Discretionary spending is controlled through annual appropriations acts (plus in some cases supplemental appropriations acts), which may provide multi-year or no-year funds. Under certain legal authorities, some discretionary accounts are funded entirely, or in part, through the collection of user fees, interagency transactions, or dedicated excise taxes. Specifically, the level of discretionary funding in an account is provided by, and controlled through, the annual appropriations process. In contrast, mandatory spending accounts, which include “entitlements,” receive budget authority through laws other than appropriations acts. Many mandatory programs receive budget authority for an unspecified amount made available as the result of previously enacted legislation, with no need for further legislative action. For example, Medicare, veterans’ pensions, Social Security, and federal crop insurance are mandatory programs. By statute, funding to these programs is driven by eligibility rules and benefit formulas, which means that funds are made available as needed to provide benefits to those who are eligible and wish to participate.
Over the last several years, mandatory accounts have represented an increasing share of total unobligated balances across the federal government. Figure 1 shows how budget authority is provided and how authorized funds may accumulate, ultimately leading to a carryover balance. In any given year, the total budgetary resources available in an account consist of unobligated funds carried forward from previous years, if applicable, plus funds newly available for obligation in that fiscal year. Based on the level of total available budget authority in the account, agencies may then obligate funds throughout the fiscal year. An obligation is a definite commitment that creates a legal liability of the federal government for the payment of goods and services. For example, an agency incurs an obligation when it places an order, signs a contract, awards a grant, or purchases a service. Payment may be made immediately or in the future. Only when funds are actually disbursed for payment—that is, the obligation is liquidated—does an obligation become an outlay. The total carryover balance in a given account consists of two parts:

- Obligated balance: the amount of funds which have been obligated but for which payment (outlay) has not yet been made; these funds are obligated but unliquidated.
- Unobligated balance: the amount of funds still legally available for obligation that have not yet been obligated.

The two components represent different phases of budget execution and present different opportunities for budgetary savings. For a detailed glossary of budget terms, see appendix V. For our first objective, to identify key questions to consider when evaluating balances, we identified common themes and factors that generally contribute to fluctuations in carryover balances. We developed the list of questions from our analysis of government-wide guidance on budget preparation, budget execution, and internal controls.
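The account-level flow described above, from total budgetary resources through obligations to outlays and the two carryover components, can be sketched with hypothetical figures (a single-year simplification for illustration; it ignores obligated balances carried in from prior years):

```python
# Hypothetical single-year sketch of the budget-execution flow and the two
# components of a carryover balance (amounts in millions of dollars).
carried_forward_unobligated = 20.0   # prior-year multi-year or no-year funds
new_budget_authority = 100.0
total_resources = carried_forward_unobligated + new_budget_authority

obligations_incurred = 90.0          # orders placed, contracts signed, grants awarded
outlays = 70.0                       # obligations liquidated (payments made)

unobligated_balance = total_resources - obligations_incurred   # 30.0
obligated_unliquidated = obligations_incurred - outlays        # 20.0
carryover_balance = unobligated_balance + obligated_unliquidated
print(carryover_balance)  # 50.0
```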
In addition, we met with federal budget specialists, agency officials responsible for managing the eight accounts selected as part of our second objective, and staff within the Office of Management and Budget (OMB) to obtain their views on the use of these questions to guide the examination of carryover balances. All agreed that the questions were reasonable. Our prior work on federal budgeting, user fees, budget triggers, and managing under continuing resolutions also informed the development of the key questions. Our second objective was to describe how answering these key questions provides information about why carryover balances were reported in selected accounts during fiscal years 2007 through 2012, and how agencies estimated and managed these balances. To do this, we selected a nongeneralizable sample of eight accounts with multi-year or no-year funds to use as case illustrations. To start our selection process, we collected and compared end-of-year actual unexpired unobligated balances plus obligated balances for which payment had not yet been made. We queried this data from OMB’s MAX database for all accounts in the federal budget for fiscal years 2007 through 2012. We found the data to be sufficiently reliable for the purpose of our report. We identified the 100 accounts with the largest average unobligated balance during fiscal years 2007 through 2012. From these, we selected eight accounts representing a variety of characteristics, including the size of the average unobligated balance, budget function, type of account, agency, and whether the account was composed of mandatory funds, discretionary funds, or a combination of both. We reviewed the selected agencies’ estimates as they were reported in the President’s Budget Appendices for fiscal years 2007 through 2012. Table 1 lists the selected accounts. For the purpose of this report, we did not seek to identify specific savings related to the selected accounts.
To assist congressional and agency-level decision makers as well as other reviewers in their evaluation of agencies' carryover balances, we identified four overarching key questions to consider:

- What mission and goals is the account or program supporting?
- What are the sources and fiscal characteristics of the funding?
- What factors affect the size or composition of the carryover balance?
- How does the agency estimate and manage carryover balances?

Each of the four key questions leads to a set of second-tier, more specific questions that can help frame the analysis of carryover balances. Answering each of the four overarching questions and their second-tier questions provides insight into why a carryover balance exists, what size balance is appropriate, and what opportunities, if any, for savings exist. In addition, it may provide opportunities for enhanced oversight of agencies' management of federal funds and identify areas where the federal government can improve and maximize its use of resources. These questions can be applied when evaluating carryover balances at either the account or program level. A single account may support a single program or multiple programs. Conversely, a single program may be supported by multiple accounts. Further, the balance in a single year may reflect events particular to that year and so present an incomplete or possibly misleading view of what is happening in a given account or program over time. Therefore, a longer view is preferable. Since the many accounts that make up the federal budget vary greatly and individual accounts have their own unique characteristics, we do not suggest that the following set of questions represents all the questions that could be considered when evaluating carryover balances. As congressional committees, managers, and other reviewers apply these evaluative questions, additional questions may arise.
In some cases, however, a few of the questions may provide sufficient information to understand the nature of the balance. The following sections discuss the questions in more detail. In addition, all of the questions are presented as an evaluative guide in appendix II. Understanding the mission activities, goals, and programs the account supports provides information about whether a program needs to maintain a balance to operate smoothly, what size balance is appropriate, and whether opportunities for savings exist. Such context can inform assessments of whether, and how, to reduce carryover balances and what effect a reduction would have on the agency's ability to carry out its mission. For example, an account established to provide stability in the financial or housing market needs a balance to ensure the agency has sufficient resources to respond quickly to adverse events. Similarly, we have previously reported on agency intragovernmental revolving funds for which a balance is needed to maintain price stability for customer agencies or to ensure that the providing agency has funding available to complete services or work being performed for customer agencies that cross fiscal years. Examples of specific questions to understand how the mission affects balances include:

- What are the primary mission and goals the account or program is intended to achieve?
- What is the nature and purpose of the balance and how does it support the mission and goals (e.g., counter economic crisis, sustain business-like activities, emergency funding needs)?
- Have there been changes to the program or mission that may affect balances (e.g., additional responsibilities, major reorganization)?
- What type of activity (e.g., grants, procurement, direct services) is the account used for to achieve the agency's mission and goals?
- What are the implications of carryover balances?
- What programs are funded by the account and how much does each contribute, if at all, to the carryover balance (i.e., does a large portion of the balance result from any particular programs)?
- Is it a new account or program, or have the program goals and objectives changed? How has this affected the agency's ability to obligate funds?

The sources and fiscal characteristics of the funding influence what opportunities may exist for budgetary savings. Discretionary and mandatory funds present different issues in changing the size of any balance. Since discretionary accounts are controlled by the annual appropriations process, the size of any carryover balance can also be controlled through that process. Mandatory funding is budget authority provided in laws other than appropriations acts, such as authorizing acts. In such cases, reducing the size of carryover balances requires changes to authorizing language, such as changes to program or eligibility requirements. Moreover, since mandatory funds are often provided in "such sums as may be necessary" to make payments to those eligible, reducing the balance in such an account may have no practical or economic benefit and instead could impose unnecessary administrative costs. In other words, the action or effort to rescind a portion of mandatory balances would be countered by the action or effort to restore those funds because individuals are entitled to payments. The time limits imposed upon the funds (period of availability for new obligation) are also important to evaluating carryover balances. If an account receives multi-year or no-year appropriations, a carryover balance should be expected. Similarly, if an account receives multi-year supplemental appropriations late in the fiscal year or for long-term projects, there is an increased likelihood, and perhaps expectation, of a carryover balance in the account.
Nevertheless, a growing carryover balance over a number of years may indicate challenges in executing a program as planned. Accounts that receive funding primarily from outside sources, such as fees, have their own unique considerations. For example, an account that is fully funded through user fees that are available without further congressional action may retain an unobligated balance to mitigate cyclical changes in fee revenue, thereby increasing assurance that sufficient funds will be available to support related mission activities. We are separately reporting on agencies' management of user fees and the identification and management of unobligated balances in fee-funded agencies. Examples of specific questions to understand how the sources and fiscal characteristics of the funding affect balances include:

- Are the account or program funds mandatory, discretionary, or a combination of both? In accounts or programs that receive both mandatory and discretionary funds, what portion of each is unobligated each year?
- What is the expected period over which funds will be obligated and liquidated (i.e., multiple years versus single year)? If multiple years, what drives the timing of the obligation and liquidation of funds (e.g., contract award period, grant cycle)?
- Did the account receive supplemental appropriations? If so, when were the supplemental appropriations acts enacted? How much of the balance is attributed to supplemental appropriations?
- To what extent is the activity funded by fees or dedicated taxes versus general revenues?
  - Does the account or program have the authority to use the fees or taxes without additional congressional action?

Some factors are within an agency's control and some are not. The rate at which obligations are incurred and subsequently liquidated in a fiscal year—that is, the rate at which budget authority becomes outlays—can vary with the nature of the activity.
Understanding what drives this "spendout rate" provides information on the size of the unobligated portion of the balance versus the obligated portion; this in turn provides insight into the composition of the carryover balance as a whole. For example, for an account that funds procurement or grant activities, if the timing of the procurement cycle or grant award cycle differs from the fiscal year, the account is likely to have a slower spendout rate. As a result, the account may have a larger share of unobligated funds reported at the end of the fiscal year, which are then quickly obligated early in the following fiscal year. Alternatively, it could have a large obligated portion of the balance, which is expended over time. External events beyond agencies' control—such as natural disasters or economic crises—can dramatically affect carryover balances. For example, if an agency receives a sudden inflow of funds late in the fiscal year from an emergency supplemental appropriation, the size of the unobligated balance carried forward to the next fiscal year may be greater than otherwise estimated. Examples of specific questions to understand what affects the size and composition of balances include:

- How has the account's balance changed over time? How has the composition of the carryover balance (i.e., the unobligated and obligated portions) changed over time?
  - Has the agency's rate of obligation significantly slowed or increased as a result of certain factors (e.g., application or review periods, regulatory issuance)?
- What portion of unobligated funds has the agency labeled as committed for specific uses?
- What conditions or external events have occurred outside the agency's control (e.g., events that led to changes in program demand)?
  - Were there any changes to accounting, regulatory, or statutory requirements related to the account's balance (e.g., bid protests, receipt of supplemental appropriations, continuing resolutions)?
- Are there questions about the agency's ability or capacity to carry out the program due to previously reported weaknesses (e.g., excessive risks, poor performance, unmet objectives)?
- Are interim milestones being met to ensure projects are carried out on schedule?
  - Are there known production issues that could lead to delays?
- What opportunities exist to deobligate unliquidated balances (e.g., up-front obligations to contractors, grants to states)?

Understanding an agency's processes for estimating and managing carryover balances provides information to assess how effectively agencies anticipate program needs and ensure the most efficient use of resources. If an agency does not have a robust strategy in place to manage carryover balances, or is unable to adequately explain or support the reported carryover balance, then a more in-depth review is warranted. In those cases, balances may either fall too low to efficiently manage operations or rise to unnecessarily high levels, producing potential opportunities for those funds to be used more efficiently elsewhere. Examples of specific questions to understand how agency estimation and management practices affect balances include:

- What assumptions or factors did the agency incorporate into its estimate of the account's carryover balance (e.g., historical experience, demand models)?
- To what extent and how often does the agency revisit or adjust estimates of unobligated balances to reflect historical data or other information?
- Does the agency have a routine mechanism for reviewing its obligations and determining whether there are opportunities to deobligate funds (e.g., written procedures or ad hoc processes)?
- What is the agency's timeline for obligating and expending funds in the account?
  - Does the agency or program tend to under-execute its budget?
  - What is the spendout rate after funds have been obligated?
- Has the agency followed its own procedures for ensuring fiscally responsible management of balances?
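The effect of obligation and spendout rates on year-end balance composition, discussed in the preceding sections, can be sketched with a simple calculation. The function and rates below are hypothetical simplifications (single-year rates, no prior-year balances), not a model used by any agency.

```python
def year_end_composition(budget_authority, obligation_rate, spendout_rate):
    """Split a year's budget authority into the unobligated and
    obligated-but-unliquidated portions remaining at year end,
    given simple first-year obligation and spendout rates."""
    obligated = budget_authority * obligation_rate
    outlays = obligated * spendout_rate
    return {
        "unobligated": budget_authority - obligated,
        "obligated_unliquidated": obligated - outlays,
    }

# A grant-style account whose award cycle lags the fiscal year:
# slower obligation, then gradual drawdown by grantees.
grant_like = year_end_composition(100.0, obligation_rate=0.6, spendout_rate=0.3)

# A direct-payment account: nearly everything obligated is paid out quickly,
# so little obligated balance remains at year end.
payment_like = year_end_composition(100.0, obligation_rate=0.9, spendout_rate=0.95)
```

Under these illustrative rates, the grant-style account ends the year with sizable balances in both components, while the direct-payment account carries almost nothing obligated but unliquidated, echoing the contrast between the grant and direct-payment accounts described in this report.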
To provide context and perspective in terms of an individual account or program, we selected a nongeneralizable sample of eight accounts and conducted further analysis of their carryover balances (see table 2). Our analysis is framed in the context of the four key overarching questions presented in the previous section. For the purpose of this report, we did not seek to identify specific savings related to the selected accounts. Detailed information on our analysis of each of the eight accounts is presented in appendix III. The accounts in our review contained carryover balances that supported the agencies' ability to carry out their missions. The eight accounts we reviewed received multi-year or no-year funds and supported a wide variety of missions, such as homeless assistance, aquatic ecosystem restoration, disaster recovery, financial market stability, and military readiness. Accordingly, the specific types of activities funded through the accounts varied as well, including construction projects, procurement, grants, and emergency preparedness. Over the fiscal year 2007 through 2012 period, each of the accounts had a carryover balance that was tied to the manner in which the agency went about fulfilling its related mission responsibilities. For example, the Mutual Mortgage Insurance Capital Reserve account maintains a balance for unexpected insurance claim expenses and was established to hold reserve funds to meet the Federal Housing Administration's (FHA) Mutual Mortgage Insurance Fund (MMI Fund) statutory capital reserve requirement. Under the MMI Fund, FHA insures a variety of mortgages for home purchases and refinancing to meet the housing needs of traditionally underserved borrowers. The balance in the capital reserve account is composed entirely of unobligated funds and is affected by changes in the anticipated performance of FHA-insured mortgages.
Consequently, the housing crisis that started in 2007 had a significant effect on the size of the account's unobligated balance. During the housing crisis, the balance was drawn down as more pessimistic forecasts of economic conditions—house prices, in particular—resulted in higher projected insurance claims. In fiscal year 2007, the unobligated balance was approximately $21 billion, but the balance had decreased to $3.3 billion by fiscal year 2012. FHA also insures reverse mortgages that permit persons 62 and older to convert their home equity into cash advances. Another example is Treasury's GSE Preferred Stock Purchase Agreements account, which was structured such that the Department of the Treasury (Treasury) has the authority to transfer sums as needed to the government-sponsored enterprises (GSEs) to ensure their positive net worth; that is, when the two GSEs' liabilities exceed assets. Such payments are made on a quarterly basis if needed and decrease the account balance accordingly. The account was created in 2008, and its unobligated balance peaked in 2009 at $304 billion, after Congress authorized an increase to the allowable funding commitment Treasury could provide to the GSEs. Subsequently, the balance gradually declined through 2012. The sources of funding and their fiscal characteristics across our sample of accounts varied. Three accounts were made up of solely mandatory amounts, four consisted solely of discretionary funds, and one account was split, with a combination of both mandatory and discretionary funds. Accounts in our review that received discretionary funds were funded by annual appropriations and supplemental appropriations. The period of availability of funds included multi-year and no-year money provided through annual or supplemental appropriations, or both. The sources and fiscal characteristics of funding for an account or program affect the timing of when funds are obligated and disbursed, and provide insight into why amounts may be carried forward from one fiscal year to the next.
For example, in some cases where an account received supplemental appropriations, the level of carryover funds was significantly affected by the timing of that supplemental appropriation. The Community Development Fund (CDF) account, which primarily supports the Community Development Block Grant (CDBG) program, received a large supplemental appropriation on September 30, 2008, to provide disaster relief to areas affected by hurricanes, floods, and other natural disasters. The receipt of supplemental funds at the end of the fiscal year caused a significant spike in the account's carryover balance for that year. Additionally, the timing of final appropriations decisions had a significant effect on the size of unobligated balances in certain accounts. When continuing resolutions (CRs) are enacted in lieu of annual appropriations bills, uncertainty is created about both the timing and level of funding that ultimately will be available. For example, agencies are directed to operate at a conservative rate of spending while CRs are in effect, thus compressing the time period to obligate funds once final appropriations decisions are made. We have previously reported that this also limits an agency's management options. In addition to the effect on the rate of obligations, agencies may be uncertain how Congress will ultimately choose to direct funds until final appropriations decisions are made. For example, the U.S. Army Corps of Engineers develops and presents construction plans for specific projects as part of its annual budget request. Corps officials reported that in some cases, committee reports accompanying appropriations acts included language directing the Corps to carry out construction projects in addition to those for which work plans had already been developed. Consequently, Corps officials said, obligating the funds to support these additional priorities was delayed while they developed the necessary work plans.
For each of the accounts in our review, the way an agency or program operates affected the size and composition of carryover balances. During fiscal years 2007 through 2012, the size of the carryover balances in each of the eight accounts we reviewed ranged from a low of $2.5 billion in the U.S. Army Corps of Engineers-Civil Works Construction account to $304 billion in Treasury’s GSE Preferred Stock Purchase Agreements account. Depending on the needs of the program and the way the agency managed the funding, the portion of the carryover balance attributed to unobligated versus obligated also varied across the eight accounts. For example, the GSE purchase agreement account, which was created in 2008, had a large unobligated balance and no obligated balance. The account made direct quarterly payments to Fannie Mae and Freddie Mac as needed. The spendout rate was very quick—that is, funds were disbursed soon after obligation—which resulted in the agency reporting no obligated balance in the account by year end. Accordingly, the remaining unobligated balance represents the cash balance in the account. In contrast, the carryover balance in Army’s Aircraft Procurement account was largely made up of unliquidated obligations. This was due, in part, to the nature of the procurement cycle, which results in a slower spendout rate. As the agency entered into a contract, funds were obligated up front to reflect the government’s liability. However, funds were disbursed only as work was completed according to the terms of the contract. In the case of the grant accounts we reviewed, the size of the carryover balance depended on the particular grant cycle. For example, the carryover balance in the Department of Housing and Urban Development’s (HUD) Homeless Assistance Grants account contained a relatively smaller portion of unobligated funds compared to obligated funds. 
The unobligated balance resulted from the timing of the grant award process, which is done on a calendar year rather than fiscal year basis. Consequently, while a portion of funds are unobligated at the end of the fiscal year, the agency plans to award grants at a later date during the current or next calendar year, likely during a new fiscal year, at which point the available multi-year funds will be obligated. The size of the obligated but unliquidated balance depends on the rate at which the grantees draw down, or spend, the funds. Some accounts experienced a dramatic spike in their balances while others remained relatively steady. For example, when the Exchange Stabilization Fund (ESF) received almost $50 billion in allocations from the International Monetary Fund (IMF) over a one-month period late in fiscal year 2009, the unobligated balance spiked. Treasury officials said the new allocations were an important element of the response to the global economic crisis. In addition, beginning in fiscal year 2010, the size of the obligated portion of the balance in the account grew dramatically after Treasury implemented a budgetary reporting change that resulted in significant adjustments to ESF account balances for fiscal years 2010 through 2012. The methods by which agencies in our sample estimated future carryover balances depended, in part, on whether the account was mandatory or discretionary. In the case of mandatory accounts, agencies focused on future needs of the account and relied on economic indicators as well as historical trends to estimate future balances of the account, including carryover balances. For example, budget-year estimates of carryover balances in the GSE Preferred Stock Purchase Agreements account are derived from the projected draws—payments to the two GSEs—from year to year. 
To do this, Treasury annually prepares a series of long-range forecasts to determine the estimated amount of contingent liability to the GSEs under the purchase agreements. Based on the size of the projected payments, Treasury estimates the amount of budget authority that will be carried over into the next fiscal year. Similarly, estimated future carryover balances for the ESF represent projected net interest earnings on investments. The estimates are subject to considerable variation, depending on changes in the amount and composition of assets and the interest rates applied to investments. Among the eight accounts we reviewed, the discretionary accounts used historical data combined with current variables to estimate carryover balances. For example, when estimating the carryover amounts in HUD's Homeless Assistance Grants (HAG) account, officials said they look at the composition of the balance over recent years, including factors such as historical rates of obligations and outlays. Additionally, officials estimate balances based on experience of which programs are likely to request renewals. Estimating the use of the program based on historical trends helps officials estimate both the unobligated and obligated portions of carryover balances. Given the fiscal pressures facing the nation, examination of carryover balances provides an opportunity for enhanced oversight of agencies' management of federal funds. It also may help identify areas where the federal government can improve and maximize the use of resources. The increase in the size of total end-of-year carryover balances across the federal government between fiscal years 2007 and 2012 may raise questions. Is the balance in an account or program appropriate? Does an account or program with a significant or growing balance need these funds?
Do the balances indicate there are opportunities to reduce or rescind balances and direct those resources toward other programs or priorities? There is no single answer to these questions—they depend on the characteristics of the specific account or program in question. The answer requires an analysis of the mission, funding, composition, and agency management of carryover balances. Understanding the mission activities and goals supported by the account or program in question enhances the ability of decision makers and reviewers to see the trade-offs or risks involved in deciding to reduce a carryover balance and the consequential effect on the agency's ability to achieve its mission. Understanding the details of the fiscal characteristics and source of funding for the account or program can highlight the boundaries where real savings opportunities exist versus those that would have no real economic benefit. Careful examination can also provide information about changes in factors affecting budget execution in the relevant area. Not all factors affecting the size and composition of balances are within an agency's control. Nevertheless, taking into consideration the agency's processes for estimating carryover balances and its ability to effectively manage balances throughout the year can draw attention to good practices as well as areas where execution could be improved. Given the complex characteristics of the various accounts and programs that constitute the federal budget, the most effective way to identify real opportunities for savings from carryover balances is to analyze them on an account-by-account or program-by-program basis. We provided a draft of this report to the Secretaries of Defense, Health and Human Services, Housing and Urban Development, and the Treasury for review and comment. All four departments generally agreed with our findings and provided technical comments that were incorporated as appropriate.
HHS provided comments that are discussed below and reprinted in appendix IV. The comments submitted by the HHS Assistant Secretary for Legislation stressed the need to recognize that our audit focused on a select number of federal accounts, which may or may not be representative of all government accounts with carryover balances. Furthermore, HHS said that any conclusions drawn from the report may not apply across the board to all accounts with multi-year or no-year funds. As stated in multiple sections of our report, including the objectives, scope, and methodology section, we selected a nongeneralizable sample of eight accounts with multi-year or no-year funds to use as case illustrations to answer the four key overarching questions we identified in this report. Our draft report called attention to the varied and unique characteristics among the many accounts that make up the federal budget and the need to analyze balances on an account-by-account or program-by-program basis. Furthermore, we state that the list of evaluative questions we identified is not intended to represent all of the questions that could be considered when evaluating carryover balances. HHS emphasized that the Public Health and Social Services Emergency Fund has complexities and authorities that may differ from other federal accounts. The agency stated that looking at trends over time in an account may not be the correct approach to overcome these complexities when trying to determine spend rates or reasons for fluctuations in spend rates. As noted above, multiple sections of the report cite the variety and uniqueness of the many accounts in the federal budget. Furthermore, as stated in the report, we deliberately chose a selection of accounts representing a variety of characteristics to give a sense of the range of issues that may be involved when evaluating carryover balances. 
We also emphasize that reviewing the carryover balance in an account for just a single year may reflect events particular to that year and so present an incomplete or possibly misleading view of what is happening in a given account or program. We still consider a longer view—that is, reviewing balances over multiple years—to be the preferable approach. We are sending copies of this report to the Secretaries of Defense, Health and Human Services, Housing and Urban Development, and the Treasury. We are also sending copies of this report to relevant congressional committees. In addition, the report is available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Susan J. Irving at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who have made contributions to this report are listed in appendix VI.

The size of the overall carryover balances in the federal budget increased significantly from 2007 through 2012, with the unobligated portion representing approximately one-third of that total balance throughout the period. Specifically, the sum of total carryover balances across the federal government was $1.5 trillion in fiscal year 2007, of which $471 billion—approximately 30 percent—was unobligated. At the end of fiscal year 2012, the sum of total carryover balances across all accounts was $2.2 trillion, of which $788 billion—approximately 35 percent—was unobligated. The number of accounts with carryover balances also increased from fiscal years 2007 through 2012, with the majority of them reporting unobligated balances each year. Specifically, the number of accounts that reported carryover balances in fiscal year 2007 was 1,178, of which 895 reported unobligated balances, or approximately 76 percent.
At the end of fiscal year 2012, the number of accounts that reported carryover balances was 1,274, of which nearly 1,000 reported unobligated balances, or approximately 78 percent. Included in these numbers are some newly established accounts that were created by significant legislation during the federal government's response to the economic downturn and the housing crisis, such as the Housing and Economic Recovery Act of 2008, the Emergency Economic Stabilization Act of 2008, and the American Recovery and Reinvestment Act. The nature of the funds—whether they are mandatory, discretionary, or split—influences what opportunities may exist for budgetary savings. For example, if the funds are primarily mandatory, programmatic changes are often necessary to change or reduce the size of carryover balances. Focusing only on the unobligated portion of carryover balances, figure 1 shows that mandatory accounts represented an increasing share of the federal government's unobligated balances. Specifically, in fiscal year 2007, 23 percent of the total unobligated funds in federal accounts were attributed to mandatory accounts. By the end of fiscal year 2012, almost 48 percent of unobligated balances resided in mandatory accounts. The 10 largest unobligated balances reported at the end of fiscal year 2012 were in accounts with mandatory or split funding (see table 3). The combined balances in those 10 accounts represented 58 percent of the total unobligated balances government-wide that year. As shown in the figures that follow, during fiscal years 2007 through 2012, unobligated balances were concentrated in mandatory accounts and centered on three national priorities and eight agencies. The GSE Preferred Stock Purchase Agreements account, which was created in 2008, held by far the largest unobligated balance and affected the composition of balances across the federal government accordingly.
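The shares of accounts reporting unobligated balances cited above follow from simple division; a quick sketch, using 1,000 as a stand-in for the "nearly 1,000" accounts in fiscal year 2012:

```python
# (accounts with carryover balances, accounts reporting unobligated balances)
# Fiscal year 2012 uses 1,000 as an approximation of "nearly 1,000".
accounts = {2007: (1178, 895), 2012: (1274, 1000)}

shares = {
    year: round(100 * with_unobligated / with_carryover)
    for year, (with_carryover, with_unobligated) in accounts.items()
}
# shares → {2007: 76, 2012: 78}, matching the approximate percentages above
```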
The three largest mandatory accounts on average were Treasury's GSE Preferred Stock Purchase Agreements account, Exchange Stabilization Fund, and Troubled Asset Relief Program (TARP) Housing Programs. Split accounts with both mandatory and discretionary funds made up roughly half of the unobligated balances government-wide in fiscal year 2007, but decreased to one-third by fiscal year 2012. The three largest split accounts on average were the Office of Personnel Management's Employee Life Insurance Fund, the Federal Deposit Insurance Corporation's (FDIC) Deposit Insurance Fund, and the Department of Transportation's (DOT) Federal-aid Highways account. Discretionary accounts made up the smallest portion of unobligated balances across the federal budget. The three largest discretionary accounts on average were the Department of Education's State Fiscal Stabilization Fund, established under the American Recovery and Reinvestment Act; the Department of Defense's (DOD) Shipbuilding and Conversion, Navy account; and DOT's Capital Assistance for High Speed Rail Corridors and Intercity Rail Service account, established under the American Recovery and Reinvestment Act. Each federal account is assigned to a budget function, which identifies the national priority supported by the account. As shown in figure 3, the majority of unobligated balances were attributed to three national priorities: commerce and housing credit (function 370), national defense (function 050), and international affairs (function 150). In general throughout this period, commerce and housing credit accounted for the largest share of unobligated balances government-wide, growing from 21 percent in fiscal year 2007 to 37 percent at the end of fiscal year 2012. Three accounts contributed significantly to the share of unobligated balances attributed to commerce and housing credit: the GSE Preferred Stock Purchase Agreements account (newly created in fiscal year 2008), the Deposit Insurance Fund account, and the MMI Capital Reserve account.
The economic downturn and housing crisis, and the government’s response to them, were also major determinants of which agencies accounted for the majority of unobligated balances in fiscal years 2007 through 2012. As shown in figure 4, the share of unobligated balances held by Treasury, which manages the GSE Preferred Stock Purchase Agreements account, grew from 8 percent in fiscal year 2007 to 33 percent in fiscal year 2012. For most of the period, Treasury held the largest share of unobligated balances. DOD held the second largest share of unobligated balances over the period. Together these two agencies (plus six other agencies that accounted for considerably smaller shares) represented approximately 80 percent of unobligated balances throughout the period. If the GSE Preferred Stock Purchase Agreements account were excluded from the breakdown shown in figure 4, DOD would represent the largest share of unobligated balances during the period. We developed four key overarching questions to consider in evaluating both obligated and unobligated balances. Each of these leads to a set of second-tier, more specific questions to help frame the analysis of carryover balances. We developed this list of questions from our analysis of government-wide guidance on budget preparation, budget execution, and internal controls. In addition, we met with federal budget specialists, agency officials responsible for managing the eight accounts selected as part of our review, and staff within the Office of Management and Budget (OMB) to obtain their views on the use of these questions to guide the examination of carryover balances. Our prior work on federal budgeting, user fees, budget triggers, and managing under continuing resolutions also informed the development of these key questions. Answering each of the four overarching questions provides insight into why the balance exists, what size balance is appropriate, and what opportunities, if any, for savings exist.
In addition, it may provide opportunities for enhanced oversight of agencies’ management of federal funds and identify areas where the federal government can improve and maximize its use of resources. These questions can be applied when evaluating carryover balances at either the account or program level. A single account may support a single program or multiple programs. Conversely, a single program may be supported by multiple accounts. Further, the balance in a single year may reflect events particular to that year and so present an incomplete or possibly misleading view of what is happening in a given account or program. Therefore, a longer view is preferable. Since the many accounts that make up the federal budget vary greatly and individual accounts have their own unique characteristics, we do not suggest that the following set of questions represents all the questions that could be considered when evaluating carryover balances. As congressional committees, managers, and other reviewers apply these evaluative questions, additional questions may arise. In some cases, however, a few of the questions may provide sufficient information to understand the nature of the balance.

Key question: What mission and goals is the account or program supporting?

Understanding the mission activities, goals, and programs the account supports provides insight into whether a program needs to maintain a balance to operate smoothly, what size balance is appropriate, and whether opportunities for savings exist. Such context can inform assessments of whether and how to reduce carryover balances, and what effect this would have on the agency’s ability to carry out its mission. For example, an account established to provide stability in the financial or housing market needs a balance to ensure the agency has sufficient resources to respond quickly to adverse events.

Subset of questions: What are the primary mission and goals the account or program is intended to achieve?
What is the nature and purpose of the balance and how does it support the mission and goals (e.g., counter economic crises, sustain business-like activities, meet emergency funding needs)? Have there been changes to the program or mission that may affect balances (e.g., additional responsibilities, major reorganization)? What type of activity (e.g., grants, procurement, direct services) does the account fund to achieve the agency’s mission and goals? What are the implications of carryover balances? What programs are funded by the account and how much does each contribute, if at all, to the carryover balance (e.g., does a large portion of the balance result from any particular programs)? Is it a new account or program or have the program goals and objectives changed? How has this affected the ability to obligate funds?

Key question: What are the sources and fiscal characteristics of the funding?

The sources and fiscal characteristics of the funds influence what opportunities may exist for savings. If funds are primarily mandatory, opportunities are more limited as programmatic changes would generally be required to reduce balances. In such cases, reducing an account’s balance may have no economic benefit and instead could impose unnecessary administrative costs. If funds are discretionary, balances can be controlled by the annual appropriations process. Some accounts or programs may be expected to have carryover balances if the agency has multi-year or no-year funds. Supplemental appropriations enacted late in the fiscal year (or for long-term projects) may also increase the likelihood, and perhaps expectation, of a carryover balance. However, a growing carryover balance over a number of years may indicate problems executing a program as planned. Accounts that receive funding primarily from outside sources, such as fees, have their own unique considerations.

Subset of questions: Are the account or program funds mandatory, discretionary, or a combination of both?
In accounts or programs that receive both mandatory and discretionary funds, what portion of each is unobligated each year? For unobligated funds, what is the expected period over which funds will be obligated and liquidated (i.e., multiple years versus a single year)? If over multiple years, what drives the timing of the obligation and liquidation of funds (e.g., contract award period, grant cycle)? Did the account receive supplemental appropriations? If so, when were the supplemental appropriations acts enacted? How much of the balance is attributed to supplemental appropriations? To what extent is the activity funded by fees or taxes (e.g., fuel taxes) versus general revenues? Does the account or program have the authority to use the fees or taxes without additional congressional action?

Key question: What factors affect the size or composition of the carryover balance?

It is important to consider which factors were within the agency’s control and which were not. Agencies or programs may operate in a way that contributes to a faster or slower spendout rate for a particular account. Understanding the spendout rate can help to identify what drives the size of the unobligated and obligated portions of the balance. In addition, external events beyond agencies’ control—such as natural disasters or economic crises—can dramatically affect carryover balances. For example, if an agency receives a sudden inflow of funds late in the fiscal year from a supplemental appropriation, the size of the unobligated balance carried forward to the next fiscal year may be greater than otherwise estimated.

Subset of questions: How has the account’s balance changed over time? How has the composition of the carryover balance (i.e., the unobligated and obligated portions) changed over time? Has the agency’s rate of obligation significantly slowed or increased as a result of certain factors (e.g., application or review periods, regulatory issuance)?
What portion of unobligated funds has the agency labeled as committed for specific uses? What conditions or external events have occurred outside the agency’s control (e.g., events that led to changes in program demand)? Were there any changes to accounting, regulatory, or statutory requirements related to the account’s balance (e.g., bid protests, receipt of supplemental appropriations, continuing resolutions)? Are there questions about the agency’s ability or capacity to carry out the program due to previously reported weaknesses (e.g., excessive risks, poor performance, unmet objectives)? Are interim milestones being met to ensure projects are carried out on schedule? Are there known production issues that could lead to delays? What opportunities exist to deobligate unliquidated balances (e.g., up-front obligations to contractors, grants to states)?

Key question: How does the agency estimate and manage carryover balances?

Understanding an agency’s processes for estimating and managing carryover balances provides information to assess how effective agencies are in anticipating program needs and ensuring the most efficient use of resources. If an agency does not have a robust strategy in place to manage carryover balances or is unable to adequately explain or support the reported carryover balance, then a more in-depth review is warranted with the potential to identify opportunities for budgetary savings.

Subset of questions: What assumptions or factors did the agency incorporate into its estimate of the account’s carryover balance (e.g., historical experience, demand models)? To what extent and how often does the agency revisit or adjust estimates of unobligated balances to reflect historical data or other information? Does the agency have a routine mechanism for reviewing its obligations and determining whether there are opportunities to deobligate funds (e.g., written procedures or ad hoc processes)? What is the agency’s timeline for obligating and expending funds in the account?
Does the agency or program tend to under-execute its budget? What is the spendout rate after funds have been obligated? Has the agency followed its own procedures for ensuring fiscally responsible management of balances? As part of our review, we selected a nongeneralizable sample of eight accounts and analyzed their end-of-year carryover balances for fiscal years 2007 through 2012. These eight accounts serve as case illustrations to better understand why carryover balances existed and the types of factors that affected the size of the balances. We selected accounts based on several characteristics, including the size of the average unobligated balance, budget function, type of account, agency, and whether the account was composed of mandatory or discretionary funds. Our analysis is presented in terms of the four overarching key questions we developed in our evaluative guide outlined in appendix II. Because of the variation in the size of the accounts, the graphics depicted in each case illustration are scaled to the size of the individual account; as a result, the graphics cannot be directly compared to one another. Further, the second figure in each case illustration includes both actual unobligated balances as well as two sets of estimates: budget year and current year. The budget year (BY) is a term used in the budget formulation process to refer to the fiscal year for which the budget is being considered. For example, estimates for the fiscal year 2012 budget year were reported during fiscal year 2011. The current year (CY) is a term used in the budget formulation process to refer to the fiscal year immediately preceding the budget year under consideration, such that current year estimates for fiscal year 2012 were reported partway through fiscal year 2012. The eight accounts we reviewed as case illustrations were:

Department of Defense: Army Aircraft Procurement
Department of Defense: U.S. Army Corps of Engineers-Civil Works, Construction
Department of Health and Human Services: Public Health and Social Services Emergency Fund
Department of Housing and Urban Development: Community Development Fund
Department of Housing and Urban Development: Federal Housing Administration-Mutual Mortgage Insurance Capital Reserve
Department of Housing and Urban Development: Homeless Assistance Grants
Department of the Treasury: Exchange Stabilization Fund
Department of the Treasury: Government Sponsored Enterprise (GSE) Preferred Stock Purchase Agreements

Account Name: Aircraft Procurement, Army (APA)
Agency: Department of Defense
National Priority: Department of Defense-Military (Budget Subfunction 051)

What Mission and Goals Is the Account or Program Supporting? The Army uses the Aircraft Procurement account for construction, procurement, production, modification, and modernization of aircraft equipment. The Army purchases and maintains aircraft in order to support activities such as combat missions and operations. For example, the Army has a contract to develop an unmanned aircraft system, which will be able to carry four aircraft as well as other necessary equipment.

What Are the Sources and Fiscal Characteristics of the Funding? APA receives annual and supplemental appropriations that are available for obligation for three years. All funds in the account are discretionary.

What Factors Affect the Size or Composition of the Carryover Balance? As shown in figure 5, carryover balances increased steadily from approximately $7 billion in fiscal year 2007 to $13 billion in fiscal year 2011 before decreasing slightly at the end of fiscal year 2012. The majority of the carryover balance was composed of unliquidated obligations. According to DOD officials, the majority of the carryover balance is obligated but unliquidated due to the nature of the procurement process for aircraft, which can take many years.
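The balance terms used throughout these case illustrations fit together simply: a carryover balance is the unobligated balance plus obligated-but-unliquidated amounts. A minimal sketch of that decomposition (the function name and all dollar figures are hypothetical; the figures are loosely shaped like the pattern described above, where unliquidated obligations dominate):

```python
# Decomposing a carryover balance into its two parts, as the case
# illustrations do. Figures (in billions) are hypothetical.
def carryover_composition(budget_authority, obligated, outlays):
    """Split a carryover balance into unobligated and unliquidated parts."""
    unobligated = budget_authority - obligated   # available, never obligated
    unliquidated = obligated - outlays           # obligated but not yet paid out
    return unobligated, unliquidated, unobligated + unliquidated

unob, unliq, total = carryover_composition(budget_authority=13.0,
                                           obligated=9.0, outlays=4.0)
print(unob, unliq, total)  # 4.0 5.0 9.0
```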
Once the Army identifies a requirement and receives appropriations to fund that requirement, it issues a request for proposals to develop a product to meet the requirement. After the request for proposals is issued, awarding a contract generally takes three to nine months. Once a contract is in place and funds are obligated, the agency periodically disburses funds to the contractor as milestones are reached. The majority of the payment is made upon delivery. DOD officials said a portion of unobligated funds is committed (reserved) for specific projects. However, balances that are considered “committed” are not officially obligated until a contract is awarded. For example, in 2011 the subaccount dedicated to modifications of aircraft had an unobligated balance of about $420,000, of which approximately $355,000, or about 85 percent, was already “committed” to specific projects. In 2009, several events contributed to a delay in the contracting cycle that resulted in higher unobligated balances for 2010 and 2011. A 2009 executive policy memo directed DOD to issue new guidance regarding sole-source contracting. In anticipation of the guidance, APA officials chose to wait to award any further contracts. Additionally that year, we identified issues with the Defense Contract Audit Agency, including the independence of its auditors. As a result of these two factors, APA reviewed and revised its processes, which extended the contracting timeline that year from approximately 3 to 9 months to 9 to 18 months.

How Does the Agency Estimate and Manage Carryover Balances? DOD estimates carryover balances for the Army Aircraft Procurement account using a formula based on historical data. The agency first determines the average obligation rate over the previous five years.
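A historical-average estimate of this kind, in which prior years' obligation rates are averaged and applied to the current balance, can be sketched as follows (a simple unweighted mean is used here for illustration; DOD's actual weighting and all figures are assumptions):

```python
# Sketch of a historical-average carryover estimate: average the prior
# years' obligation rates, then apply that rate to the current year's
# available balance. The unweighted mean and sample figures are
# assumptions, not DOD's actual formula.
def estimate_unobligated_carryover(prior_obligation_rates, available_balance):
    avg_rate = sum(prior_obligation_rates) / len(prior_obligation_rates)
    return available_balance * (1 - avg_rate)

rates = [0.85, 0.80, 0.90, 0.88, 0.82]   # share obligated in each of 5 prior years
estimate = estimate_unobligated_carryover(rates, available_balance=10.0)
print(round(estimate, 2))  # 1.5 (billions, hypothetical)
```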
Using this weighted average, officials then apply that rate against current obligated balances in the account to estimate the amount of unobligated funds and unliquidated obligations that will carry forward to the next fiscal year. As shown in figure 6, actual unobligated balances steadily rose, while estimates remained fairly steady. Further, actual unobligated balances were higher than estimates for all years, with the greatest differences occurring in 2010 through 2012. This suggests that the agency’s model for estimating unobligated balances was fairly accurate for 2007 and 2008, but generally underestimated unobligated balances for 2009 through 2012.

Account Name: Construction
Agency: U.S. Army Corps of Engineers-Civil Works
National Priority: Water Resources (Budget Subfunction 301)

What Mission and Goals Is the Account or Program Supporting? The U.S. Army Corps of Engineers’ (Corps) Construction account funds three main mission areas: flood and storm damage reduction, commercial navigation, and aquatic ecosystem restoration within the United States. The Corps engages in projects in waterways throughout the nation, such as in Ohio, Pennsylvania, Illinois, and Kentucky. While some projects are short term, others are long term and may involve a series of steps, such as rehabilitating a dam over time. The account allocates funding by project based on congressional direction generally provided in committee reports accompanying the Corps’ appropriations, as well as various performance-based guidelines, including prioritizing projects with very high economic and environmental returns and those close to completion.

What Are the Sources and Fiscal Characteristics of the Funding? The Construction account receives funding through both annual and supplemental appropriations. Annual appropriations are allocated by project. All funds in the account are discretionary and are typically available until expended.
In addition, the account receives transfers from other sources, including the Inland Waterways Trust Fund and the Harbor Maintenance Trust Fund. The account also receives funds from other federal agencies, including the Department of Homeland Security and the Department of Veterans Affairs, for specific construction projects.

What Factors Affect the Size or Composition of the Carryover Balance? As shown in figure 7, during fiscal years 2007 through 2012, carryover balances peaked at nearly $10 billion in fiscal year 2009 before gradually decreasing to $5 billion in fiscal year 2012. The majority of the carryover balance was unobligated funds. A portion of the carryover balance is a result of no-year supplemental appropriations. Corps officials said that, in some cases, supplemental funds were not intended to be obligated immediately because they fund long-term projects. For example, between fiscal years 2006 and 2009, the Construction account received approximately $5 billion to aid in disaster recovery following Hurricane Katrina, a portion of which had still not been expended in 2012. In fiscal years 2009 through 2012, the size of the obligated portion of the balance was larger than in previous years as the Corps had the opportunity to obligate funds from the supplemental appropriations it received. Further, officials cited delays in final appropriations execution decisions as a contributing factor to increased unobligated balances at the end of each fiscal year. The Corps develops and presents construction plans for specific projects as part of its annual budget request. In some cases, committee reports accompanying appropriations acts included language directing the Corps to carry out additional construction work beyond that for which the agency had already developed work plans. Consequently, Corps officials said obligating the funds to support this additional work was delayed until they developed the necessary work plans.
How Does the Agency Estimate and Manage Carryover Balances? Corps officials said that carryover on individual projects is influenced by factors such as real estate and environmental considerations, including whether a project will affect local water quality. Officials also use ten-year historical averages of the account’s obligations and outlays to identify possible trends that may affect balances going forward. To manage carryover balances, officials said they adjust work plans consistent with congressional direction. As shown in figure 8, the actual unobligated balance was larger than what the agency initially estimated each year during fiscal years 2007 through 2012. While in some cases current year estimates were closer to actuals, the estimated balances shown in the figure suggest that the agency’s estimation process generally underestimated the amount of unobligated funds that would be carried over each year. Agency officials said that in some cases this was due to supplemental appropriations that were received after budget estimates were reported.

Account Name: Public Health and Social Services Emergency Fund (PHSSEF)
Agency: Department of Health and Human Services
National Priority: Health Care Services (Budget Subfunction 551)

What Mission and Goals Is the Account or Program Supporting? The Department of Health and Human Services’ (HHS) Public Health and Social Services Emergency Fund (PHSSEF) provides resources to support a comprehensive program to prepare for the public health and medical consequences of bioterrorism or other public health emergencies. Funds in this account support several offices and programs tasked with emergency preparedness, including the Office of the Assistant Secretary for Preparedness and Response (ASPR), which receives the largest share of appropriations in the account. Within ASPR, the largest programs are the Hospital Preparedness Program and the Biomedical Advanced Research and Development Authority.
Other programs and offices that PHSSEF supports include HHS’ Cybersecurity program, the Medical Reserve Corps, and Pandemic Influenza. In addition, the account includes the balance of the Special Reserve Fund, which supports the procurement of biodefense countermeasures under the Project BioShield Act.

What Are the Sources and Fiscal Characteristics of the Funding? The account receives annual appropriations, supplemental appropriations, and transfers from other accounts outside of HHS. Annual appropriations primarily support ASPR and consist of a mix of both one-year and multi-year funds. Supplemental appropriations were provided for preparedness and response to pandemic influenza in 2009 and emergency relief and reconstruction aid related to the earthquake in Haiti in 2010. Those funds are available until expended (no-year funds). In addition, the account received a transfer of funds from the Department of Homeland Security (DHS) for Project BioShield activities and reimbursements from the Federal Emergency Management Agency. All funds in the account are discretionary.

What Factors Affect the Size or Composition of Carryover Balances? As shown in figure 9, the actual carryover balance increased dramatically in fiscal year 2009, to approximately $11 billion, and subsequently declined through 2012. In most years, the obligated portion of the balance was greater than the unobligated portion. However, in fiscal year 2010, with the receipt of supplemental appropriations, the unobligated balance was almost twice as large as the obligated portion of the balance. Agency officials said that carryover balances in the account were primarily the result of no-year supplemental appropriations. For example, in fiscal years 2009 and 2010, balances reflect supplemental appropriations to address the H1N1 pandemic influenza outbreak and emergency relief after the 2010 earthquake in Haiti.
In addition, funds transferred from DHS for Project BioShield activities contributed to the carryover balance. Further, some of the multiple programs that PHSSEF supports have slower spendout rates than others. For example, agency officials explained that funds supporting Project BioShield activities were available for obligation at intervals over a ten-year period during fiscal years 2004 to 2013. They said the intent of the funding method was to provide procurement funds with availability over a long period of time to encourage industry to engage in the advanced development of countermeasures. As a result, some portion of unobligated funds would be carried forward each year. Once HHS has entered into a contract for the procurement of a specific countermeasure, agency officials said it may take years to progress through contractual milestones, thereby contributing to the size of the obligated portion of the carryover balance in any given year. In contrast, agency officials said the spendout rate of PHSSEF funds supporting National Special Security Events (NSSE) is relatively fast. Planned events that require HHS support include the Presidential Inauguration and July 4th celebrations on the National Mall. An example of an unplanned event is when HHS used PHSSEF funds to provide mental health response teams to Sandy Hook Elementary students, families, and staff after the school shooting in December 2012. HHS officials said the obligation of these funds for NSSE purposes is somewhat sporadic, but once obligated, funds are quickly disbursed.

How Does the Agency Estimate and Manage Carryover Balances? Agency officials said the Office of Budget works with program and budget offices to obtain information to incorporate into budget year estimates of carryover balances. These estimates are informed by prior year actuals and take into account any planned procurements, other contract actions, and grant awards.
As shown in figure 10, during fiscal years 2007 through 2012, actual unobligated balances in the account were higher than budget-year estimates in most years, with the largest differences occurring in fiscal years 2009 and 2010. Agency officials said emergency supplemental appropriations contributed to higher than expected unobligated balances in 2009 and 2010. Specifically, supplemental appropriations were provided to address the influenza pandemic in 2009 and to respond to the earthquake in Haiti in 2010. Both events were unexpected and, accordingly, were not included in the agency’s initial estimates of the unobligated balance. In addition, HHS officials said obligations slowed in fiscal year 2010 when the influenza pandemic proved to be less severe than originally anticipated. In both 2011 and 2012, unobligated balances were lower than budget-year estimates.

Account Name: Community Development Fund (CDF)
Agency: Department of Housing and Urban Development
National Priority: Community Development (Budget Subfunction 451)

What Mission and Goals Is the Account or Program Supporting? The Community Development Fund (CDF) primarily supports programs targeted at low- to moderate-income individuals and families, including activities such as job creation. The Department of Housing and Urban Development (HUD) uses the funds in this account to distribute grant assistance through several programs, the largest of which is the Community Development Block Grant (CDBG). CDBG provides formula grants to units of general local government and states to support activities such as public infrastructure improvements, housing rehabilitation, and public service activities, including child care. CDF also supports other programs such as Section 108 loan guarantees, the Neighborhood Stabilization Program (NSP), and disaster recovery programs. CDF has received large supplemental appropriations for various disaster recovery efforts, such as those following Hurricane Katrina and Hurricane Wilma.
What Are the Sources and Fiscal Characteristics of the Funding? CDF receives annual CDBG appropriations for grants to state and local governments, which are available for obligation for three years. HUD officials said that CDBG funds are distributed through formulas, with approximately 70 percent allocated to entitlement communities and 30 percent to non-entitlement communities. According to HUD, entitlement communities are defined by statute as cities with populations greater than 50,000 people or urban counties with populations of 200,000 or more. CDF also receives supplemental appropriations for its disaster recovery programs, which have varying periods of availability depending on the legislation. In addition, HUD officials said funds supporting NSP1 and NSP3 are mandatory, while all CDBG funds are discretionary.

What Factors Affect the Size or Composition of the Carryover Balance? During fiscal years 2007 through 2012, the size of the carryover balance in the CDF account ranged from approximately $16 billion to about $29 billion. As shown in figure 11, the obligated but unexpended portion of the balance remained fairly steady over the six-year period. The unobligated portion peaked in 2008 and subsequently decreased for the remainder of the period. Supplemental appropriations contributed to larger than anticipated unobligated balances in fiscal years 2008 and 2009. Although officials said they make an effort to obligate supplemental appropriations quickly, if funds are received late in the fiscal year, it is more likely that a portion of the unobligated funds will carry over into the next fiscal year. For example, in 2008, HUD received a supplemental appropriation on September 30th to provide disaster relief to areas affected by hurricanes, floods, and other natural disasters. The timing of the appropriation led to a large increase in the unobligated balance carried over into the next fiscal year. In 2009, additional supplemental appropriations were enacted.
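The approximate formula split HUD officials describe above (roughly 70 percent to entitlement communities, 30 percent to non-entitlement communities) amounts to a simple allocation. A sketch with a hypothetical appropriation amount:

```python
# The approximate CDBG allocation split described above.
# The appropriation amount is hypothetical.
appropriation = 3000.0  # millions, hypothetical
entitlement = appropriation * 0.70      # entitlement communities
non_entitlement = appropriation * 0.30  # non-entitlement communities
print(round(entitlement), round(non_entitlement))  # 2100 900
```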
HUD officials said they typically obligate funds quickly; however, the rate of expenditure may be slow, resulting in a large obligated, unexpended balance from year to year. This is because once funds are obligated, the time it takes to expend them depends on the rate at which grantees draw down their grant money. In the case of supplemental funds, the increased amount of grants may be a challenge for grantees to disburse. For example, HUD officials said that in one instance, a grantee that typically received a couple hundred thousand dollars per year received approximately $18 million in grant funds. Such a dramatic increase in the amount of grant funding posed administrative challenges for the grantee, which contributed to a slower spendout rate.

How Does the Agency Estimate and Manage Carryover Balances? Estimates of carryover balances are determined through a review of prior year balances in the account. For example, when estimating the portion of the balance attributed to the traditional CDBG program, officials said they look at the composition of carryover balances from recent years, including whether the balances are composed of funds in their first, second, or third year of availability. They also consider historical rates of obligations and outlays. For the portion of the balance attributed to disaster recovery grants, carryover is more difficult to estimate because the timing and size of supplemental appropriations for disaster recovery are beyond the agency’s control. As shown in figure 12, actual unobligated balances were larger than estimated during fiscal years 2007 through 2012, with the largest difference occurring in 2008. Although the budget-year estimate of the unobligated balance for fiscal year 2007 was fairly close to the actual, the agency generally underestimated the amount of unobligated funds that would be carried over each year.
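Estimate-versus-actual comparisons of the kind described above reduce to a per-year difference between actual and estimated unobligated balances. A sketch with hypothetical figures (none of these dollar amounts come from the report):

```python
# Comparing estimated unobligated balances to actuals, year by year.
# All dollar figures (billions) are hypothetical.
actuals = {2007: 16.0, 2008: 23.0, 2009: 21.0}
budget_year_estimates = {2007: 15.5, 2008: 14.0, 2009: 16.0}

errors = {year: actuals[year] - budget_year_estimates[year] for year in actuals}
for year, diff in errors.items():
    label = "underestimated" if diff > 0 else "overestimated"
    print(year, f"{label} by {abs(diff):.1f}")
```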
For example, in 2008 and 2009, as discussed above, the agency unexpectedly received supplemental appropriations that caused the actual balance to be considerably higher than the previous estimates.

Account Name: Federal Housing Administration Mutual Mortgage Insurance (MMI) Capital Reserve
Agency: Department of Housing and Urban Development
National Priority: Mortgage Credit (Budget Subfunction 371)

What Mission and Goals Is the Account or Program Supporting? The MMI Capital Reserve account supports the Federal Housing Administration’s (FHA) Mutual Mortgage Insurance Fund (MMI Fund). Under the MMI Fund, FHA insures a variety of mortgages for home purchases and refinancing to meet the housing needs of traditionally underserved borrowers. FHA has played a prominent role in the single-family mortgage market and accounted for more than 25 percent of the home purchase mortgages originated in fiscal year 2012. Like other federal credit agencies, FHA estimates and re-estimates the net lifetime costs—known as credit subsidy costs—of the mortgages it insures. When the present value of estimated cash inflows (such as borrower insurance premiums) exceeds the present value of expected cash outflows (such as insurance claims), negative subsidies are generated. These negative subsidies are held in the MMI Capital Reserve account as an unobligated balance. When FHA experiences unanticipated increases in estimated credit subsidy costs (upward re-estimates), balances in the MMI Capital Reserve account help to cover these increases.

What Are the Sources and Fiscal Characteristics of the Funding? The MMI Capital Reserve account accumulates negative subsidies resulting from FHA’s single-family mortgage insurance activities. It also earns interest and realizes gains on investments in nonmarketable Treasury securities.
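The negative-subsidy mechanics described above can be sketched as a present-value comparison: when discounted inflows (premiums) exceed discounted outflows (claims), the subsidy is negative and the difference accrues to the Capital Reserve account. A minimal illustration (the cash flows and discount rate are hypothetical, and real credit subsidy estimates are far more involved):

```python
# Credit-subsidy sketch: the subsidy is negative when the present value
# of inflows (premiums) exceeds the present value of outflows (claims).
# Cash flows and the discount rate are hypothetical.
def present_value(cash_flows, rate):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

premiums = [100, 100, 100]  # expected inflows by year
claims = [30, 40, 50]       # expected outflows by year
subsidy = present_value(claims, 0.03) - present_value(premiums, 0.03)
print(round(subsidy, 1))    # negative => a negative subsidy (gain to the account)
```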
In the event the MMI Capital Reserve account is depleted, FHA is authorized to draw on permanent and indefinite budget authority to cover additional increases in estimated credit subsidy costs. The MMI Capital Reserve account contains only mandatory funds, and those funds are available until expended.
What Factors Affect the Size or Composition of the Carryover Balance?
As shown in figure 13, carryover balances declined during the 6 years we reviewed, with a steep decline in 2009 and 2010. Because the spendout rate in this account is very quick, and in effect obligations are equivalent to outlays, nearly all of the carryover balances are unobligated. The MMI Capital Reserve account maintains balances to cover unexpected insurance claim expenses, so when FHA experiences financial stress, the account balance decreases. For example, when the MMI Fund experiences higher-than-expected mortgage defaults (resulting in higher claims), there will be an upward re-estimate of credit subsidy costs. When there is an upward re-estimate, the MMI Capital Reserve account is the first source of funds used to cover the higher costs, thus lowering balances in the account. During the housing crisis that began in 2007, more pessimistic forecasts of economic conditions—house prices, in particular—resulted in higher projected insurance claims. As a result, balances in the MMI Capital Reserve account fell dramatically. From 2009 through 2012, FHA submitted upward credit subsidy re-estimates ranging from about $6.8 billion to $10.5 billion annually. If upward re-estimates were to deplete the balance in the MMI Capital Reserve account, FHA would need to draw on permanent, indefinite budget authority to have sufficient reserves for all future insurance claims on its existing portfolio.
How Does the Agency Estimate and Manage Carryover Balances?
The MMI Fund is reviewed from actuarial, financial, and budgetary perspectives each year, which helps officials estimate carryover balances.
From the actuarial perspective, FHA is statutorily required to ensure that the MMI Fund maintains a 2-percent capital ratio (discussed in more detail below). For its financial statements, FHA calculates the liability for loan guarantees, which represents the net present value of future cash flows on FHA's existing portfolio. From a budgetary perspective, FHA must follow the Federal Credit Reform Act of 1990, which requires the agency to estimate and re-estimate the net lifetime costs of the mortgages it insures. The MMI Capital Reserve account holds funds to help FHA meet the statutory 2-percent capital ratio requirement for the MMI Fund and, as previously noted, maintains balances for unexpected claim expenses. The capital ratio is defined as the MMI Fund's economic value divided by the total insurance-in-force. The balance in the MMI Capital Reserve account is one component of the economic value, along with the balance in the MMI Financing Account (which maintains balances to cover estimated credit subsidy costs) and the net present value of future cash flows. However, as a result of the economic downturn and housing crisis, the fund did not meet this requirement from 2009 through 2012. To help increase balances in the MMI Capital Reserve account and bring the MMI Fund back into compliance, FHA has implemented policy changes, including increases in borrower insurance premiums and enhanced underwriting requirements. As shown in figure 14, actual unobligated balances were lower than originally estimated for all years. Current year estimates were very close for almost all years during the period. This shows that the agency's process for estimating unobligated balances captured the overall trend in most years but tended to overestimate future balances, especially in fiscal year 2012.
Account Name: Homeless Assistance Grants (HAG)
Agency: Department of Housing and Urban Development
National Priority: Housing Assistance (Budget Subfunction 604)
What Mission and Goals Is the Account or Program Supporting?
The Homeless Assistance Grants (HAG) account funds two primary grant programs: the Continuum of Care program and the Emergency Solutions Grant program. The Continuum of Care program is HUD's largest and broadest targeted program to provide funds to address homelessness, and the Emergency Solutions Grant program includes funds for a variety of activities such as rapid re-housing. Grants through the Continuum of Care program are awarded through a national competition. An average of 87 percent of those funds per year goes to renew existing projects. The process to register, compete, and award grants is done on a calendar-year basis and generally takes about seven months. The grant competition cycle begins after the agency receives a final annual appropriation. Once grants are awarded, funds are obligated and disbursed to grantees.
What Are the Sources and Fiscal Characteristics of the Funding?
Funds in the HAG account are discretionary, provided through annual appropriations, and available for obligation for three years. In addition, in June 2008, the account received one multi-year supplemental appropriation of $50 million for aid to the State of Louisiana for the provision of 3,000 units of permanent supportive housing.
What Factors Affect the Size or Composition of the Carryover Balance?
The carryover balance in the HAG account ranged from $4 billion to $5.6 billion during fiscal years 2007 through 2012. As shown in figure 15, unobligated balances remained steady over the six-year period. In 2009, there was an increase in the obligated balance that subsequently decreased in later years. A portion of the unobligated balance results from the timing of the grant competition and award process, which is done by calendar year rather than fiscal year. For HAG, the typical grant competition cycle involves several phases. First, once the agency has received its annual appropriations, it opens a program registration period.
After the agency reviews registrations, it opens the grant application process. Following the final application review, grants are awarded at a later date during the current or next calendar year (likely during a new fiscal year). At that point the available multi-year funds will be obligated. Because grant awards are typically made after the end of the fiscal year, unobligated balances are carried over from one fiscal year to the next. Officials said the increase in the obligated, unexpended balance that occurred in 2009 and 2010 was the result of two events. In 2009, the account received $1.5 billion for HUD’s Homelessness Prevention and Rapid Re-housing Program, which contributed to the increase in the obligated balance. In addition, the agency implemented a change in its process to review renewal grants. This change in process, which started in 2008, involved transitioning from a paper-based system to an electronic-based system. Officials said launching the new system caused a delay in awarding and disbursing grants, which resulted in an increased carryover balance. How Does the Agency Estimate and Manage Carryover Balances? Renewal grants account for the largest share of grant funding in the account and are typically funded before new grant applications are considered. Accordingly, HUD officials develop estimates of the account’s carryover balance based on the historical rate of grant renewal from previous years. This helps the agency project the number of grants that will likely be renewed in the future, thereby informing HUD’s estimate of the amount of funds that will be carried forward from one fiscal year to the next. As shown in figure 16, from 2007 through 2010 actual unobligated balances were slightly higher than initial estimates. From 2009 through 2012, the actual unobligated balance remained fairly steady while agency estimates predicted more fluctuation. The largest difference between the estimated and actual unobligated balance was in 2012. 
Generally speaking, it appears that the agency's process for estimating unobligated balances was fairly accurate during the time period.
Exchange Stabilization Fund (ESF)
Agency: Department of the Treasury
National Priority: International Financial Programs (Budget Subfunction 155)
What Mission and Goals Is the Account or Program Supporting?
The Exchange Stabilization Fund (ESF) account was established by the Gold Reserve Act of 1934, to be operated under the exclusive control of the Secretary of the Treasury with approval of the President. The primary purpose of the fund is to stabilize international financial markets, consistent with U.S. obligations in the International Monetary Fund (IMF). To carry out this purpose, the Secretary is authorized to purchase, sell, or deal in gold, foreign currencies, and other instruments of credit and securities. The ESF holds international reserve assets of the United States, including U.S. dollars, foreign exchange, and Special Drawing Rights (SDR). If the maturity on an ESF loan or credit to a foreign entity or government will extend beyond six months, the President must give Congress a written statement that unique or emergency circumstances exist.
What Are the Sources and Fiscal Characteristics of the Funding?
The account received a $2 billion appropriation in 1934 when the fund was created. The Bretton Woods Agreements Act of 1945 directed the Treasury Secretary to pay $1.8 billion from the ESF to the IMF for the initial U.S. quota subscription in the IMF, thereby reducing ESF's appropriated amount to $200 million. Since that time, the major sources of the fund's income have been (1) gains (or losses) due to changes in exchange rates, (2) SDR allocations, and (3) earnings on investments held by the fund, including interest earned on fund holdings of U.S. Government securities or interest on loans or credits to foreign governments. Amounts in the ESF, which is a public enterprise revolving fund, are available to pay the fund's administrative expenses.
What Factors Affect the Size or Composition of the Carryover Balance?
As shown in figure 17, the actual total carryover balance increased dramatically in fiscal year 2009 and remained at that level through fiscal year 2012. The obligated portion of the balance held steady in fiscal years 2007 through 2009. By the end of fiscal year 2010, the obligated portion of the total carryover balance had grown significantly. The unobligated balance stayed fairly steady throughout fiscal years 2007 through 2012, with the exception of a dramatic spike in fiscal year 2009. Treasury officials attributed the largest single change in the obligated balance to revisions made to ESF's budgetary reporting in fiscal year 2010. According to Treasury officials, prior to fiscal year 2010, the U.S. Standard General Ledger (USSGL) did not support the budgetary transactions of the ESF, thereby making it impossible to do ESF reporting using government-wide automated standards. The USSGL Board established a standard ledger account specifically for the ESF as a step toward correcting the budgetary reporting. Treasury's implementation of this reporting change resulted in significant adjustments to ESF account balances and caused the size of the obligated balance to grow as a share of the total carryover balance in fiscal years 2010 through 2012. The spike in the unobligated balance in 2009 is almost entirely attributed to approximately $50 billion of new SDR allocations and one transaction that occurred over a one-month period. Pursuant to decisions made by the IMF membership, the IMF provided general and special SDR allocations to its members, including approximately $47.3 billion to the United States. As noted earlier, SDRs are held in the ESF. Treasury officials said the general allocation was an important element of the response to the global economic crisis.
Further, they said the IMF general allocation provided an additional reserve buffer for IMF member countries that was critical to stopping the capital drain from emerging market countries and restoring global market confidence. In addition, Treasury monetized $3 billion of its ESF SDRs in a transaction authorized under the Special Drawing Rights Act. How Does the Agency Estimate and Manage Carryover Balances? Treasury officials said they do not estimate carryover or unobligated balances for the ESF as they do in other accounts (e.g., salaries and expenses). The balances represent investments and the unobligated balance is calculated from ESF’s balance sheet: assets minus liabilities. Account estimates reported in the President’s Budget each fiscal year are based on projected net interest earnings on ESF assets. The estimates are subject to considerable variance, depending on changes in the amount and composition of assets and the interest rates applied to investments. The estimates make no attempt to forecast gains or losses on SDR valuation or foreign currency valuation. As shown in figure 18, the nearly $50 billion of SDR-related transactions in fiscal year 2009 led to an actual unobligated balance in the ESF that was much larger than was estimated in fiscal year 2007 when the President’s fiscal year 2009 budget was developed. Moreover, these transactions had a direct impact on Treasury’s estimates of the unobligated balance in subsequent years. Treasury officials said the general timing of those transactions was concurrent with the agency’s development of its fiscal year 2010 mid-year estimate and fiscal year 2011 budget-year estimate for the account. Generally, in proportion to the size of the account, Treasury’s process for estimating the unobligated balance was relatively accurate. 
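Treasury's description of the ESF unobligated balance reduces to simple balance-sheet arithmetic. In the sketch below, the opening asset and liability figures are hypothetical; only the roughly $50 billion of fiscal year 2009 SDR-related transactions comes from the discussion above, and the accounting is deliberately simplified.

```python
# Simplified sketch of the calculation Treasury describes: the ESF's
# unobligated balance is its balance-sheet assets minus liabilities.
# Opening figures are hypothetical ($ in billions); the ~$50 billion
# SDR-related increase is the fiscal year 2009 amount discussed above.

assets = 70.0       # hypothetical ESF assets entering fiscal year 2009
liabilities = 50.0  # hypothetical ESF liabilities

unobligated_before = assets - liabilities

# FY2009: the ~$47.3B SDR allocation plus the $3B SDR monetization are
# treated here, very roughly, as a $50B increase in reported assets.
assets += 50.0
unobligated_after = assets - liabilities

print(unobligated_before, unobligated_after)
```

The spike in the unobligated balance thus falls directly out of the asset side of the balance sheet, which is why Treasury does not estimate it the way it estimates balances in conventional expenditure accounts.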
Government Sponsored Enterprise (GSE) Preferred Stock Purchase Agreements
Agency: Department of the Treasury
National Priority: Mortgage Credit (Budget Subfunction 371)
What Mission and Goals Is the Account or Program Supporting?
In 2008, the Housing and Economic Recovery Act (HERA) established the Federal Housing Finance Agency (FHFA). FHFA placed two Government Sponsored Enterprises (GSE)—the Federal National Mortgage Association (Fannie Mae) and the Federal Home Loan Mortgage Corporation (Freddie Mac)—into conservatorship. Treasury then entered into agreements with Fannie Mae and Freddie Mac to provide capital through investments in senior preferred stock to ensure that each company maintained a positive net worth. Treasury disburses funds to the GSEs if, at the end of any quarter, the liabilities of either GSE exceed its assets. Amendments to the agreements in 2009 changed the maximum allowable funding commitment, which is reflected in the unobligated balances. Initially in 2008, Treasury was authorized to purchase up to $100 billion of securities and investments in each GSE, for a total of $200 billion. The first amendment, in May 2009, increased the allowable investment level to $200 billion for each GSE. In December 2009, the second amendment changed the investment level so it was based on a formulaic cap that would automatically adjust upward quarterly by the cumulative amount of any losses realized by either GSE and downward by the cumulative amount of any gains, but not below $200 billion.
What Are the Sources and Fiscal Characteristics of the Funding?
HERA established temporary authority for the GSE Purchase Agreements account and granted the Treasury Secretary temporary authority to purchase obligations and securities.
The Secretary has complete discretion over the terms, conditions, and amounts of the purchases, provided that he or she (1) designates the actions as necessary, (2) takes specific considerations into account, and (3) reports on these matters to Congress. Any funds expended under this authority are deemed to be appropriated in such sums as needed.
What Factors Affect the Size or Composition of the Carryover Balance?
As shown in figure 19, the actual total carryover balance in the account grew significantly from fiscal years 2007 through 2009 and subsequently declined through fiscal year 2012. The spendout rate in this account is very quick, which results in the agency reporting no obligated balance in the account by year-end. In effect, the actual payments to the GSEs are equal to the amount of obligations incurred and outlays reported in the President's Budget Appendix. Treasury officials said the unobligated portion of the balance represents the cash balance in the account. The carryover balance is generally driven by the size of the apportionment to the account and the amount of purchases made by the account. For example, the account started with a $200 billion balance in budget authority when it was created under HERA in 2008. In 2009, the maximum allowable funding commitment was increased by an additional $200 billion for the account. Treasury also made its first payments to the GSEs, equal to $95.6 billion. As shown in figure 19, this translated into an unobligated balance of $304.4 billion at the end of fiscal year 2009. Prior to making payments to the GSEs, Treasury received an apportionment of its budget authority from OMB at the beginning of the fiscal year for the unobligated balance brought forward in the account. The gradual decline in the unobligated balance reflects Treasury's subsequent payments to the GSEs in fiscal years 2010 through 2012, bringing the balance down to $212.5 billion by the close of fiscal year 2012.
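The funding arithmetic above lends itself to a short sketch. The $200 billion floor, the post-May 2009 authority levels, and the $95.6 billion of fiscal year 2009 payments come from the discussion; the quarterly gain and loss figures passed to the cap function are hypothetical, and the function itself is an illustrative reading of the December 2009 amendment, not Treasury's actual model.

```python
# Sketch of the December 2009 formulaic cap described above: it adjusts
# upward quarterly by cumulative realized losses, downward by cumulative
# gains, but never falls below $200 billion per GSE ($ in billions).

FLOOR_PER_GSE = 200.0

def formulaic_cap(quarterly_results):
    """quarterly_results: realized losses as positive numbers, gains as
    negative numbers, in chronological order. Returns the cap per GSE."""
    cumulative_net_losses = sum(quarterly_results)
    return max(FLOOR_PER_GSE, FLOOR_PER_GSE + cumulative_net_losses)

assert formulaic_cap([10.0, 5.0]) == 215.0  # losses raise the cap
assert formulaic_cap([-30.0]) == 200.0      # gains cannot breach the floor

# Unobligated balance at the end of FY2009: $200B per GSE of authority
# after the May 2009 amendment, less the first payments of $95.6B.
authority = 200.0 * 2
unobligated_fy2009 = authority - 95.6
assert round(unobligated_fy2009, 1) == 304.4  # matches figure 19
```

The same arithmetic explains the later decline: each subsequent payment to the GSEs draws down the unobligated balance dollar for dollar.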
How Does the Agency Estimate and Manage Carryover Balances?
Budget-year estimates of the unobligated balance are derived from the estimated draws on the account from year to year. To do this, Treasury annually prepares a series of long-range forecasts to determine the estimated amount of contingent liability to the GSEs under the purchase agreements. These projections inform the agency's estimation of payments to the GSEs for a given year. Based on the size of the estimated payments, Treasury estimates the amount of budget authority that will be carried over into the next year. Agency officials said that by the time they develop current year estimates, they typically know the size of the payment request, thereby contributing to a more accurate estimate of the unobligated balance that is carried forward. This informs apportionment decisions to ensure there are sufficient funds to cover the payments to the GSEs. As shown in figure 20, there were no estimated unobligated balances in fiscal years 2007 through 2009. This is attributed to the timing of the President's budget preparation for each of those years, and to the creation of the Purchase Agreement account. The account was established after the enactment of HERA in 2008, which meant that the account's estimates were issued for the first time in the President's fiscal year 2010 budget. Similarly, there were no actual unobligated balances to report prior to 2008. During fiscal years 2010 through 2012, Treasury's estimates of the unobligated balance in the account were slightly lower than, but fairly close to, the actual balance at year-end. In proportion to the size of this account, this suggests that the agency's estimation process for this account was relatively accurate in those years.

Appropriation: Budget authority to incur obligations and to make payments from the Treasury for specified purposes.
An appropriation act is the most common means of providing appropriations; however, authorizing and other legislation itself may provide appropriations. Annual appropriation: An act appropriating funds enacted annually by Congress to provide budget authority to incur obligations and make payments from the Treasury for specified purposes. Supplemental appropriation: An act appropriating funds in addition to those already enacted in an annual appropriation act. Supplemental appropriations provide additional budget authority usually in cases where the need for funds is too urgent to be postponed until enactment of the regular appropriation bill. Supplemental appropriations may sometimes include items not appropriated in the regular bills due to a lack of timely authorizations. Availability: Budget authority that is available for incurring new obligations. Budget authority: Authority provided by federal law to enter into financial obligations that will result in immediate or future outlays involving federal government funds. Budget function: The functional classification system is a way of grouping budgetary resources so that all budget authority and outlays of on-budget and off-budget federal entities and tax expenditures can be presented according to the national needs being addressed. National needs are grouped in 17 broad areas to provide a coherent and comprehensive basis for analyzing and understanding the budget. Budget year: A term used in the budget formulation process to refer to the fiscal year for which the budget is being considered, that is, with respect to a session of Congress, the fiscal year of the government that starts on October 1 of the calendar year in which that session of Congress begins. Carryover balance (unexpended balance): The sum of the obligated and unobligated balances. Commitment: An administrative reservation of allotted funds, or of other funds, in anticipation of their obligation. 
Continuing resolution: An appropriation act that provides budget authority for federal agencies, specific activities (or both) to continue in operation when Congress and the President have not completed action on the regular appropriation acts by the beginning of the fiscal year. Current year: A term used in the budget formulation process to refer to the fiscal year immediately preceding the budget year under consideration. Deobligate: A cancellation or downward adjustment of previously incurred obligations made by an agency. Deobligated funds may be reobligated within the period of availability of the appropriation. Discretionary spending: Outlays from budget authority that is provided in, and controlled by, appropriations acts. Expended funds: Funds that have actually been disbursed or outlaid. Mandatory spending: Budget authority that is provided in laws other than appropriation acts and the outlays that result from such budget authority. Mandatory spending includes entitlement authority (for example, Food Stamp, Medicare, and veterans' pension programs), payment of interest on the public debt, and nonentitlements such as payments to states from Forest Service receipts. Obligated balance (obligated funds): The amount of obligations already incurred for which payment has not yet been made. Technically, the obligated balance is the unliquidated obligations. Budget authority that is available for a fixed period expires at the end of its period of availability, but the obligated balance of the budget authority remains available to liquidate obligations for five additional fiscal years. At the end of the fifth fiscal year, the account is closed and any remaining balance is canceled.
Budget authority available for an indefinite period may be canceled, and its account closed, if (1) it is specifically rescinded by law or (2) the head of the agency concerned (or the President) determines that the purposes for which the appropriation was made have been carried out and disbursements have not been made from the appropriation for 2 consecutive years. Obligation: An obligation is a definite commitment that creates a legal liability of the government for the payment of goods and services ordered or received, or a legal duty on the part of the United States that could mature into a legal liability by virtue of actions of another party. Spendout rate: The rate at which budget authority becomes outlays in a fiscal year. It is usually presented as an annual percentage. Unexpended balance: The sum of the obligated and unobligated balances. Unobligated balance (unobligated funds): The portion of obligational authority that has not yet been obligated. For an appropriation account that is available for a fixed period, the budget authority expires after the period of availability ends, but its unobligated balance remains available for 5 additional fiscal years for recording and adjusting obligations properly chargeable to the appropriation's period of availability. For example, an expired, unobligated balance remains available until the account is closed to record previously unrecorded obligations or to make upward adjustments in previously underrecorded obligations (such as contract modifications properly within scope of the original contract). At the end of the fifth fiscal year, the account is closed and any remaining balance is canceled.
For a no-year account, the unobligated balance is carried forward indefinitely until (1) it is specifically rescinded by law or (2) the head of the agency concerned (or the President) determines that the purposes for which the appropriation was made have been carried out and disbursements have not been made from the appropriation for 2 consecutive years.
Susan J. Irving, (202) 512-6806 or [email protected]. In addition to the contact named above, Carol M. Henn (Assistant Director), Leah Q. Nash, and Mary C. Diop made major contributions to this report. Also contributing to this report were Rob Gebhart, Tara Jayant, Kate Lenane, Felicia Lopez, John Mingus Jr., Robert Robinson, and Cindy Saunders. In addition, the following individuals provided programmatic expertise: Marcia Crosse, Shana R. Deitch, Cheryl Goodman, Marshall Hamlett, Vondalee R. Hunt, Thomas Melito, Paul Schmidt, Mathew J. Scire, William B. Shear, Andrea P. Smith, Bruce Thomas, and Steve Westley.
Given the fiscal pressures facing the nation, examination of balances carried forward into future fiscal years (carryover balances) provides an opportunity to identify areas where the federal government can improve and maximize the use of resources. GAO was asked to review issues related to federal carryover balances. GAO's objectives were to (1) identify key questions for congressional committees, managers, and other reviewers to consider when evaluating carryover balances, including whether to reduce them, and (2) describe how answering these key questions provides insight into why carryover balances may exist in selected accounts. GAO reviewed carryover balances from fiscal years 2007 through 2012 in eight selected accounts from the Departments of Defense (DOD), Health and Human Services (HHS), Housing and Urban Development (HUD), and Treasury. Account selection was based on several characteristics, including the average size of the balance, budget function, type of account, agency, and whether the account was composed of mandatory or discretionary funds. GAO is not making any recommendations. DOD, HHS, HUD, and Treasury generally agreed with GAO's findings and provided technical comments that were incorporated as appropriate. HHS provided comments stating that conclusions drawn from the report may not apply across the board to all accounts. Carryover balances in fiscal year 2012 were $2.2 trillion, of which about $800 billion had not yet been obligated. Answering key questions during review of carryover balances provides insights into why a balance exists, what size balance is appropriate, and what opportunities (if any) for savings exist. Given that a single account may support a single program or multiple programs--or that multiple accounts may support a single program--these questions can be applied when evaluating balances at either the account or program level.
Examination of balances may assist decision makers in identifying opportunities to achieve budgetary savings or redirect resources to other priorities. However, the complexity of the federal budget is such that a case-by-case analysis is needed to understand how best to achieve these financial benefits. What mission and goals is the account or program supporting? Understanding the mission activities, goals, and programs the account supports provides information about whether a program needs to maintain a balance to operate smoothly, what size balance is appropriate, and whether opportunities for savings exist. Accounts GAO reviewed maintained balances to support activities such as long-term acquisition of military aircraft and public health emergency preparedness. What are the sources and fiscal characteristics of the funding? The sources and fiscal characteristics of the funding present different issues in changing the size of carryover balances. Accounts such as Treasury's Exchange Stabilization Fund receive "such sums as may be necessary" and may require programmatic changes to effectively reduce balances. In such cases, simply reducing balances may have no economic benefit and could impose unnecessary administrative costs. If funds are discretionary, such as with HUD's Homeless Assistance Grants, balances can be controlled through appropriations acts. What factors affect the size or composition of the carryover balances? Understanding factors within and outside an agency's control that affect its "spendout rate" provides insight into the composition of the carryover balance as a whole. Funds in accounts that support activities such as certain procurement or disaster relief may be obligated fairly quickly, but are expended over a longer period as milestones are met or as grantees draw down funds. Accounts with quick spendout rates, such as those that provide cash payments to government-sponsored enterprises, disburse funds soon after obligation.
How does the agency estimate and manage carryover balances? Understanding an agency's processes for estimating and managing balances provides information to assess how effectively agencies anticipate program needs and ensure the most efficient use of resources. For the mandatory accounts GAO reviewed, such as the Federal Housing Administration's Mutual Mortgage Insurance Capital Reserve account, agencies focused on future needs of the account and relied on economic indicators and historical trends to estimate future balances. For discretionary accounts GAO reviewed, such as the U.S. Army Corps of Engineers' Construction account, agencies used historical data combined with current variables to estimate carryover balances.
USPS is an independent establishment of the executive branch with a mission to bind the nation together through the personal, educational, literary, and business correspondence of the people. The Postal Reorganization Act of 1970 reorganized the former U.S. Post Office Department into the United States Postal Service. USPS's current legal framework requires it to break even over time and intends it to be self-supporting; requires it to provide a maximum degree of effective and regular postal services to rural areas, communities, and small towns where post offices are not self-sustaining; and prohibits it from closing small post offices solely because they are operating at a deficit, it being the specific intent of Congress that effective postal services be insured to residents of both urban and rural communities. USPS's mission and role, and the processes used to carry out mail delivery and retail services, have evolved over time with changes in technology, transportation, and communications. Key events in postal history are listed in table 1. In the early development of the national post office, most customers had to pick up their mail from a post office. The first step toward universal delivery service was taken in 1863 when Congress declared that free city delivery would be established at post offices where income from postage was sufficient to pay all expenses of delivery. Mail delivery service was gradually extended to smaller cities and was later extended to rural areas in 1902. Advances in the delivery of mail coincided with transportation improvements. Various transportation modes developed throughout history have been used to transport mail, ranging from stagecoaches in the 1700s, to steamboats, trains, and the Pony Express in the 1800s, and finally to airplanes, automobiles, and trucks in the 1900s.
Furthermore, advances in transportation were particularly important in rural areas; rural delivery helped stimulate road improvements to these areas because passable roads were a prerequisite for establishing new delivery routes. Throughout the nation's history, the post office has been a key component in the provision of postal services. Post offices proliferated throughout the 1800s as the United States' territory grew and new postal routes were established. At the turn of the 20th century, the number of post offices reached its peak with nearly 77,000 offices, which was an average of 1 post office for every 1,000 residents in the country (see fig. 1). The number of post offices per capita declined throughout the 20th century, to an average of 1 post office for every 10,000 people in 2000. As transportation improved, it became easier for rural carriers to deliver mail to a wider area, which decreased reliance on post offices as the primary delivery and collection point. Furthermore, rural carriers provided retail services as part of their routes, so customers did not have to travel to a post office for these services. Providing mail delivery and access to retail postal services is central to USPS's mission and role. According to USPS officials, all USPS customers are eligible for free mail service and most receive delivery 6 days a week. Furthermore, all customers have access to retail services provided through the postal network, including the ability to purchase stamps from post offices or other retail facilities. Differences exist, however, in how USPS provides these services, particularly in where, when, and how customers receive the mail and have access to the postal network. These differences have always existed and have evolved with changes in technology, transportation, and communications.
Delivery and retail decisions are made primarily by local staff (i.e., district employees and local postmasters), with overarching guidance provided by national USPS policies and procedures. These local field staff consider such factors as the number and location of delivery points in the area, the quality of the roads and transportation, employee safety, mail volume, projected costs, and the type of service offered in nearby areas. This section provides an overview of the current USPS delivery and retail networks, recent trends, and how decisions about these networks are made. USPS’s statutory framework provides the basis for its delivery services. For example, USPS is to “provide prompt, reliable, and efficient services to patrons in all areas and shall render postal services to all communities”; “provide a maximum degree of effective and regular postal services to rural areas, communities, and small towns where post offices are not self-sustaining”; “receive, transmit, and deliver throughout the United States…written and printed matter, parcels, and like materials”; and “serve as nearly as practicable the entire population of the United States.” These provisions are considered key parts of universal service and provide general operational guidance for USPS. USPS has the ability to establish delivery service within these broad provisions but by law must operate in a break-even manner. A long-standing provision in the appropriations acts for USPS requires the continuation of 6-day delivery and rural delivery service. According to USPS officials, delivery decisions are made at the local level. National policies outline overall operational guidance, but discretion is provided to local officials, including area and district managers and postmasters, to make delivery decisions in their respective areas. 
These local officials—who, according to national USPS officials, are most familiar with the area to be served—make decisions related to the type, frequency, and location of delivery service that will be provided to a given address. A summary of key decisions is included in table 2, and additional information on each of these decisions is provided in the following sections. According to USPS, local officials select the delivery method that provides service in the most efficient and cost-effective manner, weighing factors such as the number and location of delivery points in the area, the quality of the roads and transportation, employee safety, mail volume, projected costs, and the type of service offered in nearby areas. These local officials also are provided with specific manuals that contain national guidance on establishing delivery service, as well as on how to carry out these operations on a day-to-day basis. USPS customers are entitled to receive mail delivery service at no charge in one of two ways: via mail carrier or by retrieving their mail at a designated postal facility. The majority of USPS’s residential and business deliveries are made by mail carriers—about 86 percent in fiscal year 2003. For the remaining approximately 14 percent of deliveries, customers travel to a USPS facility, primarily a post office, to retrieve their mail. These customers receive mail service either via box service or general delivery pickup. Box service may be provided (1) at no charge to customers who are not eligible for carrier service—this represents their free mail delivery service—or (2) for a fee to customers who wish to supplement their existing delivery service. Customers who are not eligible for carrier delivery and whose retail facility does not provide box service receive general delivery service, under which they retrieve their mail from a post office counter at no charge. 
While current mail recipients have access to mail delivery service at no charge—the cost of delivery is borne by postal ratepayers—the process by which customers become eligible for free delivery has recently been clarified. In the 1996 Mail Classification Case before the PRC, USPS proposed eliminating the fee for box service that it charged customers who were not eligible for carrier delivery. At the time of this case, USPS estimated that about 940,000 boxes would be offered free of charge as a result of this policy. This proposal, however, did not require USPS to offer a free box to all customers ineligible for carrier delivery, such as those ineligible due to their proximity to a postal facility (i.e., the quarter-mile rule). These customers would only be allowed to receive general delivery service at no charge. The PRC, in its ensuing recommendation, raised issues about inequities regarding customer eligibility for free box delivery and urged USPS to rectify them. USPS dropped the quarter-mile-rule provision during the following 1997 rate case, and, as a result, customers ineligible for carrier service became eligible for free box service. According to USPS officials, most customers receive mail delivery 6 days a week, but some do not. These include businesses that are not open 6 days a week; resort or seasonal areas that are not open year-round; and areas that are not easily accessible due to transportation constraints, such as remote areas that may require the use of boats or airplanes to deliver the mail. For example, mail is transported by mules for delivery in the Grand Canyon, by snowmobiles in some areas of Alaska, and by boats to islands in Maine and other states. As previously stated, the majority of USPS customers receive carrier service. Once it is determined that a customer is eligible for carrier service, USPS determines the type of carrier route service that will be provided. 
USPS has three primary carrier route categories—city, rural, and highway contract routes—and has national policies and procedures that contain the criteria used to establish, manage, and operate the three types of routes. Excerpts from these policy and procedure manuals are provided in table 3. USPS’s “rural” designation does not necessarily reflect geographically defined rural areas, and there is no population threshold for a USPS-designated rural or city route. Rural carrier routes encompass a wide range of geographic areas and may cover both less-densely populated areas generally considered to be rural and suburban areas generally considered to be urban. A USPS rural route, such as one in Charlotte, North Carolina, or Jacksonville, Florida, may cover a geographically defined suburban area and may contain a similar number of delivery points as a city route. USPS officials explained that many suburban areas met rural route criteria when the routes were originally established. They also stated that although the population may have grown in an area that now may be considered suburban, USPS maintains existing operations in this situation; thus, it retains the rural route classification. A brief overview of these route types is provided in table 4. City routes (67 percent of all routes) tend to be located in densely populated areas with high concentrations of delivery points. As figure 2 shows, the number of city routes has stagnated since 1994 and has been declining since 2000, while the number of rural routes continues to grow. Rural routes, accounting for only about 29 percent of all routes, are the fastest growing type of route. Of the 1.8 million delivery points added in fiscal year 2003, 1.2 million are located on rural routes. Rural routes encompass a wide range of areas, with some of the larger routes serving hundreds of delivery points and some smaller routes having just 1 delivery per mile. 
Not only are rural routes the fastest growing route type, but the number of deliveries per route (route density) is also increasing. Figure 3 shows that rural routes with 12 or more deliveries per mile have been increasing at a much faster rate than rural routes with fewer than 12 deliveries per mile. Furthermore, as shown in table 4, the average deliveries per route for city routes and rural routes are relatively similar, 513 and 474, respectively. The remaining 4 percent of routes are highway contract routes (10,065 in fiscal year 2003), which serve areas that are not served by city or rural routes. On some of these routes, deliveries are made along the line of travel to individual addresses as mail is being transported from one facility to another. USPS has guidance to help determine the physical location where the mail will be delivered. USPS works with local real estate developers when determining the locations of delivery for new addresses and has three general modes of delivery that specify the physical location of the delivery: door, curbline, or a centralized unit that contains mail receptacles for multiple customers. Figure 4 provides the number of these modes of delivery as of the end of fiscal year 2003. Door delivery was once the norm in urban settings; however, USPS changed its policy in 1978 to limit additional door deliveries in order to enhance delivery efficiency (door delivery remains the most expensive mode of delivery). As a result, curbline delivery and centralized delivery are the fastest growing modes of delivery. According to USPS delivery officials, the only instance in which new delivery points would receive door delivery is when the new delivery point is established on a block that currently receives door delivery. Centralized units include cluster boxes, Neighborhood Delivery Collection Box Units (NDCBU), and apartment-style boxes. 
Cluster box units and NDCBUs are both centralized units of individually locked compartments; NDCBUs contain more than eight compartments. Between fiscal years 2001 and 2003, the number of curbline boxes increased by more than 2 million; the number of centralized boxes grew by about 1.8 million; and the number of other deliveries (primarily door), which are not available for most new deliveries, decreased by almost 400,000. According to USPS, it must balance the legal requirement to operate as a break-even entity with the need to serve its customers in a competitive environment. As such, the following cost and customer convenience trade-offs are associated with each of the previously discussed delivery decisions. Carrier service v. the customer collecting mail from a USPS facility. If carrier service is provided, USPS incurs the cost of providing the personnel and transportation to support these services, but most customers receive their mail closer to their residences or businesses. On the other hand, requiring customers to travel to their respective post office to collect mail may be more inconvenient for the customer, but USPS does not incur the personnel and transportation costs associated with carrier delivery. 6-day-a-week delivery v. less than 6-day-a-week delivery. According to USPS, the more days that delivery is provided, the higher the cost of providing this service. On the other hand, customers, Congress, and the President’s Commission have noted the importance of 6-day-a-week delivery. USPS studied the impact of eliminating Saturday delivery and found that the potential savings were not large enough to offset the risk that any reduction in delivery days would have a negative impact on USPS’s competitive position. City v. rural v. highway contract box delivery routes. 
Although the delivery service provided on each of these routes is generally similar, carriers on rural and highway contract routes provide retail services, such as stamp sales, while city carriers do not. A USPS official stated that there are significant cost differences among the different types of routes. USPS estimated that the additional annual cost in fiscal year 2003 for each city door delivery ($295) was more than twice the cost of each rural delivery ($143) and over three times the cost of each highway contract delivery ($90). The USPS official stated that a key factor in determining the total cost of a route is the carriers’ compensation systems, which differ for each group of carriers. The systems for city and rural carriers are collectively bargained between USPS and their associated unions—the National Association of Letter Carriers (NALC) represents city carriers, and the National Rural Letter Carriers Association (NRLCA) represents rural carriers. Generally speaking, city carriers are compensated on an hourly basis, while rural carriers are compensated on a salary basis. Agreements entered into by these groups also establish duties and responsibilities for the carriers and USPS management. Compensation for contract carriers is established via the contract posted by USPS. Door v. curbline v. centralized modes of delivery. According to USPS, the cost per delivery generally increases as the delivery is made closer to the customer’s door. Delivery to a customer’s door is the least efficient mode of delivery because the carrier has to dismount from the vehicle. Deliveries to centralized units, such as cluster boxes and NDCBUs, are the most efficient form of carrier delivery because carriers can make multiple deliveries in one stop. The mode of delivery to be provided is considered by USPS when determining the type of service that will be used. 
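The per-delivery cost comparison above reduces to simple ratios. The short sketch below is illustrative only: the dollar figures are USPS's fiscal year 2003 estimates quoted in the text, while the dictionary keys and helper function are our own construction, not a USPS tool:

```python
# USPS-estimated additional annual cost per delivery, fiscal year 2003.
COST_PER_DELIVERY = {
    "city_door": 295,
    "rural": 143,
    "highway_contract": 90,
}

def cost_ratio(route_a: str, route_b: str) -> float:
    """How many times more expensive a delivery on route_a is than on route_b."""
    return COST_PER_DELIVERY[route_a] / COST_PER_DELIVERY[route_b]

print(round(cost_ratio("city_door", "rural"), 2))             # 2.06 ("more than twice")
print(round(cost_ratio("city_door", "highway_contract"), 2))  # 3.28 ("over three times")
```

The computed ratios (about 2.1 and 3.3) match the report's characterizations of city door delivery as more than twice as costly as rural delivery and over three times as costly as highway contract delivery.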
For example, most deliveries on rural and highway contract routes are farther away from the customer’s front door than deliveries on city routes. Figure 5 shows that most rural and highway contract deliveries in fiscal year 2003 were to the curb rather than to the door, with relatively few in the other delivery category (primarily door). USPS local officials select the delivery option that provides service in the most efficient and cost-effective manner, and they consider numerous cost and customer service factors when making these decisions. This balance between cost and customer convenience is further illustrated in our discussion of USPS’s retail network. As part of meeting its universal service obligation, USPS is required to do the following: serve as nearly as practicable the entire U.S. population and provide postal facilities in locations that give postal patrons ready access to essential postal services, consistent with reasonable economies of postal operation; provide a maximum degree of effective and regular postal services to rural areas, communities, and small towns where post offices are not self-sustaining; and close no small post office solely for operating at a deficit, it being the specific intent of Congress that effective postal services be insured to residents of both urban and rural communities. Historically, post offices, stations, and branches served as the primary access points for providing postal services to most customers. These facilities were located in towns and communities across the country and provided key locations where mail could be collected and delivered. Figure 6 illustrates the current network of these retail facilities. In addition to traditional brick-and-mortar retail facilities, USPS currently offers retail services through other alternatives, such as self-service vending machines, ATMs, grocery and drug stores, and the Internet. Figure 7 illustrates many of these retail alternatives. 
Postal services available through these access points can include purchasing stamps and postage, mailing packages, and sending money orders. Differences exist, however, in how access to retail service is provided to customers across the country. These differences (1) exist in terms of what types of retail options customers have access to and where these retail options are located and (2) are based on cost and customer service determinations made by local USPS officials. This section identifies access points currently provided by USPS, describes differences in the network, and explains why these differences exist. The wide variety of retail options currently offered by USPS differs significantly from its original retail network. Changes in technology, transportation, and geography diminished the need for a large network of post offices, and the number of post offices per capita has declined consistently since the early 1900s (see fig. 1). Table 5 shows that over the last 20 years, the number of post offices, stations, and branches has decreased by over 1,900 units. This decrease reflects USPS’s movement toward fewer brick-and-mortar facilities. USPS still has almost 28,000 post offices nationwide, and these post offices remain a key access point for USPS’s nationwide retail network. USPS does not have specific standards for establishing post offices on the basis of population density or distance between post offices. The number of post offices and retail facilities compared with the population of the area served differs throughout the country. Appendix II provides information on the number of retail postal facilities in each state, along with each state’s population. For example, states such as North Dakota and South Dakota that have relatively low population density tend to have a lower ratio of people per USPS retail facility (i.e., fewer than 2,000 residents for every USPS retail facility). 
On the other hand, states such as Florida and California that have relatively high population density tend to have a higher ratio of people per USPS retail facility (i.e., about 15,000 residents for every USPS retail facility). Postal officials told us that customers of smaller post offices tend to be more dependent on their post office for access to the postal network. Survey data collected for the President’s Commission showed that rural customers reported visiting their post offices more often than customers in urban areas. This issue of dependency is important to note when considering retail access because USPS recognizes that customer use of post offices versus other retail alternatives varies. For example, according to USPS, many of the new retail alternatives, such as ATMs and consignment arrangements with private retailers such as grocery stores, have been deployed primarily in high-growth, high-population areas (these areas are also where many of USPS’s retail competitors are located). Customers in these high-growth, high-population areas may not be as dependent on a post office for meeting their daily postal needs, and therefore they use these alternative methods of accessing USPS’s retail network. Differences exist throughout the postal network in terms of how and where customers have access to USPS’s retail network. USPS officials stated that USPS’s approach to the retail network requires a balance of cost and service considerations and incorporates such factors as customer demand, the population of the surrounding area, the post office’s physical location, mail volumes, costs, and revenues. Many of USPS’s retail alternatives are aimed at offering more efficient, accessible ways of providing retail service, particularly in high-growth, high-revenue areas. 
When deciding where to deploy these alternatives, USPS officials told us that they consider both (1) the location where a retail option is needed and (2) the type of retail option that should be deployed. They also consider customer access needs while balancing economy and efficiency concerns. For example, it is more costly for USPS to provide retail service at a post office counter than via its Web site—www.usps.com. Some customers, however, may not have Internet service or may prefer going to their local post office to conduct their postal transactions. Moreover, USPS has stated that opening new post offices is considered only when area service needs cannot be met through its current facilities or by less costly alternatives. USPS has stated that whenever possible it establishes contract postal units, which can provide equal service without the costs associated with building and operating new post offices. These units are privately owned and operated and, as such, are less expensive. USPS opened 666 contract stations and branches in fiscal year 2003. USPS faces the continuing challenge of providing high-quality postal services while absorbing the costs associated with an ever-increasing delivery network. USPS estimated that serving its new delivery points in fiscal year 2003 would add roughly $270 million in annually recurring delivery costs. At the same time, USPS’s revenue per delivery has declined each year since fiscal year 2000. USPS and other stakeholders have recognized these challenges, which have been highlighted as part of USPS’s Transformation Plan and the President’s Commission’s report. USPS has taken actions, and is planning future actions, to deal with these challenges and improve the efficiency and effectiveness of its delivery and retail networks. 
These actions include the following: on the delivery side, emphasizing cost-effective routes and delivery locations (i.e., curbline boxes); and on the retail side, providing low-cost alternatives and optimizing its retail network. The actions planned in the delivery area may not result in a noticeable change of service for people in rural areas. Customers in rural areas may experience greater access to USPS’s retail network via improvements to www.usps.com, but actions to promote other low-cost alternatives are primarily targeted toward customers in high-growth, high-density areas. Furthermore, it is not clear how rural customers may be affected by USPS’s efforts to increase efficiencies by optimizing its retail network. This section provides an overview of the actions that USPS is planning to take, in both the delivery and retail areas, and what these actions are intended to achieve. USPS has high costs related to its nationwide infrastructure and transportation network, which includes delivering mail 6 days a week to most of the 141 million addresses nationwide. Achieving efficiencies in this area is difficult because the network grows by approximately 1.7 million new addresses each year. Mail volumes have recently been decreasing, and USPS faces increasing per-piece delivery costs because carriers must make deliveries even if they have fewer letters to deliver. As previously discussed, USPS has already taken actions to improve delivery efficiency, including promoting rural routes and emphasizing curbline and centralized modes of delivery. These actions are likely to continue in both suburban and rural areas. USPS’s initiatives for increasing rural delivery efficiency may not be noticeable to rural customers because they relate to improving internal USPS operations, rather than changing residential delivery. 
These initiatives include sending managers through a training program to ensure that they understand the basic concepts of managing rural delivery; distributing electronic operations newsletters that provide specific strategies for reducing rural workhours and raising awareness of the need to focus on rural management; and implementing a rural time review, which is a process to examine and analyze the timekeeping, recording, and reporting process for rural delivery. On a more comprehensive, nationwide basis, USPS has implemented initiatives aimed at increasing the efficiency of the overall delivery network. USPS has established a route optimization effort meant to help determine the best way to route carriers. USPS hopes this effort will lead to a reduction in workhours, vehicle mileage, and costs, while at the same time improving safety. According to USPS, automation improvements, such as the Delivery Point Sequencing of mail, will increase efficiency by automating some of the mail sorting activities that are currently done manually by mail carriers. This automation would decrease the amount of time that a carrier would spend sorting the mail and increase the amount of time that a carrier could be out making more deliveries. Both USPS and the President’s Commission have recognized that USPS needs to adjust its retail network so that it provides the optimal level of retail access at the lowest possible cost. The retail service options available to rural customers will largely remain the same, with the exception of rural customers who have access to the Internet (www.usps.com). USPS officials stated that USPS plans to deploy most other new retail alternatives in high-growth, high-density areas, such as fast-growing suburbs. However, it is not clear why some retail alternatives that offer greater customer convenience, such as stamp purchases at grocery or other retail stores, may not be provided to those in rural areas. 
Further, it is not clear how rural customers may be affected by USPS’s retail optimization efforts to close and/or consolidate retail facilities. In its Transformation Plan, USPS stated that its planned efforts to improve access to retail services for all customers while becoming more cost-effective include three key initiatives: (1) Create new, low-cost retail alternatives. USPS identified ways to provide cost-effective services that improve customer convenience and access by utilizing low-cost retail alternatives, such as the Internet, ATMs, and supermarkets. According to USPS, most of the alternatives are aimed at providing additional access in high-growth, high-revenue areas where demand for services is more concentrated, and they will not be available in less-populated areas. For example, many of the 666 contract postal units opened in fiscal year 2003 were in urban areas such as Los Angeles, California, and Orlando, Florida. However, USPS noted two alternatives that would be available to most customers, including those in rural areas—the Internet, for customers with access, and the recently implemented “Click-N-Ship” program. USPS’s Internet Web site is available to customers 24 hours a day, 7 days a week, and was designed to handle most retail transactions that take place in local post offices, such as printing shipping labels and postage for packages, buying stamps, sending money orders, and filing address changes. USPS’s Click-N-Ship program allows customers to print shipping labels for packages and pay for postage using their computers. Customers can arrange to have their mail carrier pick up the package, or they can leave it in a mail collection box or at their local post office. This carrier pickup service is currently available in urban and suburban areas, and USPS and the NRLCA have recently agreed to conduct a nationwide pilot that would test this program on rural routes. (2) Move stamp-only transactions away from the post office window. 
The new, low-cost retail alternatives provide USPS with an opportunity to increase the efficiency of postal transactions. In fiscal year 2003, about one-third of the visits to USPS retail facilities included stamp purchases, and over 130 million visits were for stamp-only purchases. Smaller post offices tend to conduct a higher percentage of stamp-only transactions. As indicated in its Transformation Plan, window service at a post office is a relatively expensive way to sell stamps when compared with low-cost alternatives, such as stamp purchases from ATMs, through the mail, from the Internet, or from a grocery store. In addition, residents on rural and highway contract routes can purchase stamps and other retail services from their mail carriers. USPS has begun to promote the use of these alternatives for postal transactions; in November 2002, USPS launched a national campaign promoting alternative access to postal products to create customer awareness of stamp-purchasing alternatives. Between fiscal years 2002 and 2003, the number of stamp-only visits at postal facilities decreased by about 25 million (a 16 percent reduction), and the number of stamp transactions decreased by about 60 million (a 10 percent reduction). (3) Optimize the retail network. As simple transactions such as selling stamps and printing shipping labels are redirected to lower cost alternatives, USPS plans to take actions to tailor retail services to individual community needs and provide the optimal level of retail access at the least possible cost. USPS has established a nationwide database of its retail network that includes about 150 data points for each of its retail postal facilities, such as operating costs, revenues, proximity to other retail points, number of deliveries, and customer demographics. This database provides USPS with a baseline for evaluating its network, from which it plans to first focus its retail strategy on “underserved” locations. 
USPS then plans to focus on high-revenue locations, most of which are located in urban and suburban areas. Lastly, USPS will focus on “overrepresented” areas. The Transformation Plan stated that USPS would replace “redundant, low-value access points” with alternative access methods, but it did not provide information on the specific criteria that USPS would use to make this determination. It is unclear how post offices in rural areas may be affected by this initiative, because, as USPS stated in its Infrastructure and Workforce Rationalization Plan to Congress, “the savings from closing small post offices are minimal, since the potential savings in personnel and office rent are often more than offset by the additional cost of rural delivery service needed in lieu of post office box delivery.” Another approach, recommended by the President’s Commission, would be for USPS to optimize its retail network by assessing its “low-activity” post offices to determine if they are needed to ensure the fulfillment of universal service. If USPS determines that these post offices are needed, they should be retained, even if they are not economical. If not, the President’s Commission stated that USPS should work with the affected community to consider how to dispose of excess facilities. USPS has begun taking actions to optimize its retail network by lifting the self-imposed moratorium established in 1998 on closing post offices and by adjusting post office hours. During fiscal year 2003, USPS formally closed about 440 post offices and other retail facilities, more than half of which USPS had placed on emergency suspension. A post office can be placed on emergency suspension due to circumstances such as a natural disaster, sudden loss of the post office building lease when no suitable alternative quarters are available, or severe damage to or destruction of the post office building. 
An emergency suspension is one of three circumstances that may prompt USPS to initiate a feasibility study to determine whether to close a post office. The other two are (1) a postmaster vacancy and (2) special circumstances, such as the incorporation of two communities into one. USPS plans to close 311 post offices in fiscal year 2004 that were placed on emergency suspension between February 1983 and June 2003. An additional 65 post offices that were placed on emergency suspension between August 2002 and November 2003 are not scheduled to close. USPS has reported that post office closures will continue, and that in a normal year about 100 to 200 small, rural post offices are closed when the communities in which these offices are located essentially disappear. According to USPS, it has also adjusted hours at existing post offices from time to time to reflect customer demand. Although USPS could not provide information on the number of post offices where changes in hours occurred in fiscal year 2003, it did provide a description of how hour adjustments are made. According to USPS officials, postmasters are responsible for establishing window service hours based on the needs of the community within available funding resources. Officials noted that they periodically assess the number of transactions and customer visits throughout the day to determine the appropriate hours, and that hours may be extended or shortened in response to customer demand. USPS reported that its efforts to increase efficiencies in its retail area resulted in a decrease of almost 5 million workhours from fiscal year 2002 to fiscal year 2003. USPS and the President’s Commission both have recognized the need for establishing a postal network that is capable of providing universal service in an efficient and cost-effective manner. 
The actions identified by USPS that were discussed in the previous section illustrate that future service decisions are being planned with a focus on increasing efficiency and customer service. According to surveys conducted both by USPS and for the President’s Commission, customers are generally satisfied with the services provided by USPS. However, when issues are raised by postal stakeholders, including Members of Congress, customers, and USPS employees, they generally relate to inconsistent delivery services and limited communication about planned changes to the retail network. USPS has also raised issues about legal requirements and practical constraints that limit its flexibility to make changes to the postal network. Progress toward optimizing the postal retail network will require USPS to collaborate and communicate more effectively with stakeholders in order to raise their confidence that USPS’s actions will result in improved customer service and more cost-efficient operations. Data reflect that customers are generally satisfied with the services provided by USPS. USPS customer satisfaction data showed that 93 percent of households nationwide continue to have a positive view of USPS. USPS’s Customer Satisfaction Measurement survey gathers information from households and businesses throughout the country, and the residential survey includes questions on such topics as mail delivery service, retail options, time waiting in line at post offices, and USPS advertising. A survey was also conducted as part of the President’s Commission’s work to determine the public perception of USPS. This survey reported that customers throughout the country, including those in cities and rural areas, have a favorable view of USPS. Although it is reported that overall customer satisfaction is high, when customers do raise concerns, many relate to inconsistencies in delivery services and changes in access to retail services. 
For example, in the first 2 quarters of fiscal year 2004, USPS’s customer telephone system—Corporate Customer Contact—documented over 1.3 million calls that raised customer issues. As table 6 shows, these issues fell into five general categories. The delivery/mail pickup category contained the most customer complaints with over 88 percent of the total customer issues. These calls included issues about late deliveries, changes in the location of the customers’ deliveries, and misdeliveries. We reviewed a sample of letters received by Members of Congress involved in the oversight of USPS in 2002 and 2003. Of the 134 letters that we reviewed, the most common delivery-related concerns pertained to the mode of delivery that was used and mail arriving late or at inconsistent times. On the retail side, the issues raised most frequently were concerns about potential post office closings or relocations. Several customers wrote that closing or relocating post offices would make it difficult or inconvenient for them to access retail postal services. In addition to constituent letters containing specific questions about USPS operations, Congress has raised long-standing issues about the basic provisions of universal service and retail access, particularly to customers in rural areas. The Postal Reorganization Act contained specific provisions requiring that effective postal services would be ensured to residents of both urban and rural communities. Congress had additional concerns about community involvement in decisions to close or consolidate post offices. In 1976, it amended the Postal Reorganization Act and established specific requirements for USPS when attempting to close a post office, including that USPS must consider the effects on the community served, the employees of the facility, and economic savings to USPS that would result from the closure, as well as provide notice to customers. 
This amendment sought to involve communities in decisions, which would help to ensure that these decisions were made in a fair, consistent manner. The amendment also established an appeals process to the PRC to allow for independent review of decisions to close or consolidate post offices. Congress has long included language in USPS annual appropriations legislation forbidding the closure or consolidation of small, rural post offices. The closure requirements added by this amendment, however, did not apply to postal facilities that were to be expanded, relocated, or newly constructed, and Congress remained concerned that communities were not sufficiently involved in decisions regarding their post offices. In 1998, USPS responded to these concerns by establishing regulations relating to the expansion, relocation, or new construction of post offices that required that local officials and citizens be notified, that affected customers be given a chance to comment, and that USPS officials consider this community input. However, postal facilities placed in emergency suspension were not subject to the post office closure or consolidation requirements. A 1999 congressional hearing focused on USPS’s closure process when some stakeholders raised concerns that USPS might be using its emergency suspension procedures to avoid post office closure requirements. We issued a report on emergency suspensions in 1997 and found that from the beginning of fiscal year 1992 through March 31, 1997, USPS had suspended the operations of 651 post offices, some of which had been in suspension for over 10 years. After lifting its 1998 moratorium on closures in 2003, USPS began to close most of its suspended post offices. 
Concerns remain about the extent to which customers are included in retail decisions, as evidenced by the fact that current Members of Congress continue to introduce legislation related to USPS’s process for closing post offices and ensuring that communities are involved in the decision-making process. Employee groups are concerned about USPS’s attempts to make changes to the postal network. For example, these groups have raised issues about the perceived lack of communication from USPS about how it makes these decisions. Carrier unions have also raised issues related to actions taken by USPS to establish and categorize carrier routes. Carrier compensation represents a significant portion of the total delivery costs, which is a key consideration in USPS’s delivery route decisions. For a number of years, USPS, the NALC, and the NRLCA have had a continuing dispute over the assignment of work jurisdictions for mail delivery. These disputes pertain to the conversion of city delivery to rural delivery, or vice versa, and the assignment of new deliveries (whether a new route will be a city route or a rural route). There were an estimated 1,300 disputes at the national and local levels related to this issue at the end of 2003. USPS and the two unions established a joint task force in May 2003 to expedite resolution of outstanding city/rural jurisdictional disputes. Furthermore, additional disputes have been raised regarding the process for conducting mail counts and route inspections. Mail counts and route inspections are key factors in determining carrier duties and compensation, and thus total delivery costs. They are used to identify the amount of mail sorted and handled by carriers during an average workday and to determine the efficiency of the current route structure. USPS has raised issues about its lack of flexibility to make necessary changes to its delivery and retail networks. 
Changes to USPS’s retail infrastructure are limited by both legal requirements and practical constraints. As previously mentioned, USPS by law cannot close a small post office solely because it is operating at a deficit. Furthermore, Members of Congress and other stakeholders have often intervened in the past when USPS has attempted to close post offices or consolidate postal facilities. Proposed post office closures have provoked intense opposition because local post offices are sometimes viewed as (1) a critical means of obtaining ready access to postal retail services, (2) a part of American culture and business, and (3) critical to the viability of certain towns or central business districts. With regard to its delivery network, USPS appropriations acts have included provisions on 6-day-a-week delivery and rural mail service, and there is strong stakeholder opposition to cuts in the frequency or quality of postal services. The President’s Commission agreed that USPS might need additional flexibility as part of establishing the proper configuration of a 21st century postal network; however, the commission stated that mechanisms are needed to ensure accountability and oversight. The Postmaster General has stated that without greater flexibility, it may become increasingly difficult for USPS to continue achieving cost savings, and that if USPS is unable to significantly restrain its costs, it may have to reconsider universal service as it is provided today. Although USPS faces some constraints on making changes, the previous section of this report illustrates that there are actions USPS could take to improve efficiency in the delivery and retail areas while improving customer service. For example, low-cost retail alternatives, such as the Internet, provide USPS with an opportunity to enhance customer access nationwide, while at the same time offering cost-effective and convenient ways to provide service. 
However, without more information about how USPS will make decisions related to changing its postal network, including closures or consolidations of existing facilities, it is difficult for customers to understand how they may be affected by these decisions. It is particularly important that customers in rural areas, who may be more dependent on their local post offices, be informed about how they may be affected by these decisions. We agree that actions are needed to restrain costs and that some legal and practical restraints limit USPS’s flexibility to make changes to its network. However, USPS’s communication with Congress and stakeholders about what it intends to do and how it intends to optimize its retail network is important so that stakeholders will have more confidence in USPS’s decisions. Stakeholders, who are a critical component of implementing successful changes, have raised concerns about potential changes to USPS’s network. Specifically, as previously mentioned, stakeholders have been concerned about a perceived lack of communication throughout USPS’s decision-making process. Examples include insufficient information regarding potential changes such as closing post offices or making adjustments to post office hours. Furthermore, recent postal reform legislation reflects concerns about the future provision of delivery and retail services. Both the House and Senate postal reform legislation introduced in May 2004—the Postal Accountability and Enhancement Act, H.R. 4341 and S. 2468—included provisions requiring a study of universal postal service and what the future of universal service may entail. The Senate bill required USPS to provide Congress with a discussion of potential changes to its infrastructure, including its delivery and retail networks. 
This proposed plan provides an opportunity for USPS to provide Congress with additional information that will facilitate better understanding of what USPS hopes to accomplish through its optimization efforts and how it plans to make its decisions in this area. We have previously reported on the importance of keeping Congress and stakeholders informed throughout the decision-making process to successfully transform the Postal Service. Last November, we recommended that USPS develop an integrated plan to optimize its infrastructure and workforce, in collaboration with its key stakeholders, and make the plan available to Congress and the public. USPS agreed with this recommendation and in January 2004 presented its Infrastructure and Workforce Rationalization Plan to the House and Senate oversight committees. The plan included a section on improving its retail network by increasing access and customer convenience in a cost-efficient manner. Although this plan included a general discussion of initiatives that USPS is planning for its retail and delivery network, it did not explain how USPS planned to make decisions—that is, what specific criteria would be used as the basis for USPS decisions. For example, USPS has discussed general principles that it has established as a basis for its retail optimization strategy as outlined in its Transformation Plan and Transformation Plan Update. We previously discussed these principles, and they included targeting underserved areas, particularly in high-growth areas, and replacing redundant, low-value access points with alternative access methods. However, the plan did not discuss how USPS would define “underserved” areas for determining where new self-service options are to be located, or “redundant, low-value access points” that are to be replaced. 
It is not clear if USPS has consulted its customers, including those in rural areas, in developing its network optimization plans to determine their needs, their preferences on retail alternatives, and which postal facilities may be needed to provide postal services. Further, it is not clear if USPS’s optimization strategy related to removing redundant or excess postal facilities would follow the existing process for closing post offices, which essentially is a local decision in response to local circumstances, such as a postmaster vacancy, lease expiration, building damage, or an emergency. If such an incremental approach based on local decisions is used to implement USPS’s retail optimization strategy, it is not clear that the implementation would lead to the desired result of an optimal systemwide network. USPS’s retail optimization strategy could be an opportunity for a “win-win” outcome for both USPS and its customers, including those in rural areas, in that USPS could reduce its costs while at the same time improving access for its customers. According to USPS, it is already in the process of providing its customers with greater access to its services through a variety of new, more convenient alternatives. USPS has also initiated efforts that have increased efficiencies and cut costs and plans further actions in the future. However, many stakeholders, including Members of Congress, are concerned about the limited information and communication USPS has provided regarding its network optimization plans and how customers will be affected by its proposed changes. Without more information about how USPS will make decisions related to changing the current postal network, including closures or consolidations of existing facilities, it will be difficult for customers to understand how they may be affected—particularly those in rural areas who may be more dependent on their local post offices. 
Effective communication is needed to demonstrate that USPS wants to partner with its customers in communities nationwide to provide more convenient and cost-effective delivery and retail services and to preserve post offices needed to support universal service. Improved transparency and accountability mechanisms are also needed to raise stakeholder confidence that decisions will be made in a fair, rational, and fact-based manner. Such mechanisms could include a clear process to ensure that key stakeholders are consulted and properly informed of decisions that may affect them. Increasing communication and collaboration with key stakeholders may also help facilitate better understanding of the different challenges and needs facing USPS and its customers in urban and rural areas, the rationale for decisions, the cost implications related to budget and rate decisions, and the trade-offs involved with actions to achieve a more efficient and effective network. To facilitate USPS’s progress in implementing its planned actions aimed at improving efficiency in its postal network while increasing customer service, we recommend that the Postmaster General provide improved transparency and communication to inform Congress and other stakeholders of the actions USPS plans to take regarding its retail optimization strategy, including (1) the criteria USPS will use to make decisions related to changing its retail network; (2) the process it will use to communicate with postal stakeholders throughout the decision-making process; (3) the impact on customers, including those in rural areas; and (4) the time frames for implementing all phases of its retail optimization initiative. We received written comments on a draft of this report from the Acting Vice President of Delivery and Retail for USPS in a letter dated June 30, 2004. USPS’s comments are summarized below and reprinted in appendix III. 
USPS officials also provided technical and clarifying comments, which were incorporated into the report where appropriate. USPS’s letter concurred with “the spirit of the report’s findings” and acknowledged that USPS must continue to take steps to improve the efficiency and effectiveness of its delivery and retail networks. In response to the four specific provisions included in our recommendation, USPS stated the following: The criteria used to make retail decisions vary because retail optimization is a dynamic and evolving process, and the key to making any postal decision is quality service to customers. There are, however, specific criteria for certain elements of the retail network (e.g., 300 books of stamps must be sold by the retailer each month in order to participate in the consignment program). USPS will continue to advise postal stakeholders (e.g., congressional staff, management associations, labor unions, and employees) of changes that affect the retail network. It is USPS’s policy to notify customers of changes that impact their services. USPS will review how best to communicate those types of changes to customers and develop a process for the field to notify headquarters of changes in operating hours that could potentially impact the community. The retail optimization initiative does not have a fixed time frame because it is an evolutionary process. As such, USPS said that “it would be impossible to provide a time frame for implementing all phases of the retail optimization initiative.” We agree that the retail optimization effort is a dynamic and evolving process, and that the actions described by USPS to improve communication with its local districts and other stakeholders about changes to the retail network are a step in the right direction. However, even though there are constant changes in the retail network, it is important for stakeholders to feel confident that USPS’s retail decisions are made in a fair, rational, and fact-based manner. 
Therefore, we continue to believe that establishing and communicating the criteria that provide the basis for USPS’s retail decisions would help to raise this level of confidence. Furthermore, although USPS says it has not identified a fixed time frame for its retail optimization efforts because it is an evolutionary process, this does not mean that time frames for specific projects or initiatives are not needed. Time frames can, and should, be established for the different phases of USPS’s retail initiatives to provide postal stakeholders with information on when these initiatives will be deployed so that interested parties, such as mailers, can determine the implications for their own business plans. In addition, time frames are needed so that USPS and stakeholders can evaluate the performance of these initiatives and how they fit into the network optimization plans as a whole, including the potential impact on costs and rates. We will send copies of this report to the Ranking Minority Member of the Senate Committee on Governmental Affairs, the Chairmen and Ranking Minority Members of the House Committee on Government Reform and the House Special Panel on Postal Reform and Oversight, Senator Thomas R. Carper, the Postmaster General, the Chairman of the Postal Rate Commission, and other interested parties. We will also make copies available to others on request. In addition, this report will be available at no charge on GAO's Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or at [email protected]. Key contributors to this assignment were Teresa Anderson, Joshua Bartzen, and Heather Halliwell. To meet our first objective, which was to provide information on the U.S. 
Postal Service’s (USPS) policies, procedures, and practices for providing rural delivery services, and how they compare with those in urban areas, we discussed USPS’s basis for providing delivery and retail services, the legal framework under which these decisions are made, and the process used to carry out these decisions with USPS officials. We supplemented this information with (1) USPS documents and manuals describing letter carrier duties and the roles of USPS officials in managing delivery services and (2) USPS operational guidance for providing service, including establishing delivery routes and locations of deliveries, retail alternatives, and services/locations of these alternatives. Also, because USPS is subject to legal and statutory considerations when making retail and delivery decisions, we reviewed the applicable statutes that establish USPS’s mission and role as a provider of universal postal service and the collective bargaining contracts established with its two sets of bargaining employees—the National Rural Letter Carriers Association and the National Association of Letter Carriers. We discussed USPS’s current policies and procedures with various USPS officials who were knowledgeable about the retail and delivery networks, as well as with letter carrier and postmaster representatives and Postal Rate Commission officials. We obtained, reviewed, and analyzed delivery and retail data pertaining to routes, delivery points, and the retail network from various sources, including USPS officials, the Annual Report, and the Comprehensive Statement of Operations. We assessed the reliability of data provided by USPS by reviewing the data for inconsistencies and checking for duplicate or missing values. In those cases where we found discrepancies, we worked with USPS to address the problems. We determined that these data were sufficiently reliable for the purposes of this report. 
To meet the second objective, which was to discuss changes USPS is making, and planning to make, related to providing postal services to rural areas and the potential impact of these proposed changes, we reviewed, analyzed, and discussed with USPS officials actions that were planned as part of its Transformation Plan and its related updates, growth plans, operational strategies, and Infrastructure and Workforce Rationalization Plan, as well as the recommendations to USPS in the report of the President’s Commission on the United States Postal Service (the President’s Commission). To meet our third objective of identifying issues that USPS may need to consider when making decisions related to providing postal services in rural areas, we interviewed various USPS officials, such as retail and delivery managers, customer contact representatives, and administrators of the Customer Satisfaction Measurement survey. To gather additional information on stakeholder issues and preferences, we interviewed representatives from the letter carrier and postmaster groups, analyzed stakeholder comments raised before the President’s Commission, reviewed the Hart Study that was conducted on behalf of the commission, reviewed USPS documentation related to its planned actions, and examined newspaper reports of customer concerns about changes in delivery and retail access. Because postal reform legislation was pending in both houses of Congress, we reviewed this proposed legislation as well as the pertinent legislative history of congressional concerns in the delivery and retail areas. To gain an additional understanding of customer issues with USPS, we met with USPS Congressional Relations staff to gather information on the types of written customer inquiries that are sent to Members of Congress. These staff provided us with a sample of letters related to retail and delivery issues that were sent to Members of Congress, who then forwarded these concerns to USPS for resolution. 
USPS established categories for documenting these issues (e.g., delivery service, delivery method, and retail service). We requested copies of letters in selected retail and delivery categories that in 2002 and 2003 were sent to Members of Congress who provided oversight of USPS. We analyzed these letters and established our own set of delivery and retail categories on the basis of information presented in the letters. We felt it was necessary to establish our own set of categories because some letters contained issues that were raised across categorization areas.
A key element of the ongoing postal reform deliberations before Congress is the U.S. Postal Service's (USPS) ability to carry out its mission of providing universal mail delivery and retail services at reasonable rates. Many are concerned that USPS's mission is at risk in the current operating environment of increasing competition and decreasing mail volumes. Preserving universal service, particularly in rural areas, is a goal of postal reform. GAO was asked to discuss (1) how USPS provides universal mail delivery services and access to postal services in both rural and urban areas; (2) what changes USPS is making or plans to make related to providing postal services, including changes that may affect rural areas; and (3) what are the major issues that have been raised related to how USPS provides postal services. USPS provides its customers, regardless of where they live, with services that include mail delivery at no charge and access to retail services. However, differences exist in how, when, and where USPS provides these services. These differences have always existed due to the nation's geographic diversity and changes in technology, transportation, and communications. Universal postal service is not defined by law, but appropriations legislation requires 6-day mail delivery and prohibits USPS from closing small, rural post offices. Delivery and retail decisions are made primarily by local USPS officials with overarching guidance provided by national policies and procedures. Local decisions are based on cost and service factors, including the number and location of deliveries, quality of roads, employee safety, and mail volume. USPS has taken actions, and is planning future actions, to improve the efficiency of its delivery and retail networks. Overall, customers in urban and rural areas will probably not see significant changes in delivery services since most changes are focused on operational improvements. 
On the retail side, USPS plans to provide more cost-effective and convenient service by developing new, low-cost alternatives; moving stamp-only transactions away from post office counters; and optimizing its retail network. USPS's retail optimization involves tailoring services to communities' needs and replacing "redundant, low-value access points with alternative access methods." It remains unclear how customers in rural areas will be affected by these retail initiatives since most are planned for high-growth, high-density areas. Generally, postal customers are satisfied with the services provided to them. The issues that have raised the greatest concerns from customers include inconsistent mail delivery and the threat of post office closings or reductions in post office hours. Also, concerns have been raised about USPS's limited communication regarding its planned changes to its networks. USPS's retail optimization could be an opportunity for USPS to reduce its costs while improving customer service. However, USPS needs to provide additional transparency and accountability mechanisms to better communicate its retail optimization plans and raise stakeholders' confidence that decisions will be made in a fair, rational, and fact-based manner.
U.S. exports as a share of U.S. gross domestic product have grown significantly, increasing from less than 6 percent in 1970 to a peak of more than 11 percent in 1997, as shown in figure 1. The rise in U.S. imports was even greater, increasing from about 5 percent in 1970 to nearly 15 percent of GDP in 2000, according to Commerce Department statistics. Although the shares of U.S. exports and imports have declined from those peak levels, they still represent a substantial part of U.S. GDP—9.3 percent and 13.3 percent, respectively, in 2002. The United States’ principal trading partners include Canada, Mexico, Japan, and China. At least 17 federal agencies, led by USTR, are involved in developing and implementing U.S. trade policy. USTR’s role includes developing and coordinating U.S. international trade policy and leading or directing negotiations with other countries on trade matters. It also has primary statutory responsibility for monitoring and enforcing U.S. trade agreements. The Department of Commerce has a relatively broad role with respect to trade agreement activities, with three units in the International Trade Administration performing the key trade functions: the Import Administration helps enforce U.S. trade laws; Market Access and Compliance is responsible for ensuring that other nations live up to their trade agreements; and Trade Development focuses on advocacy for U.S. companies, export promotion services, support for trade negotiations, and market analysis. Trade functions at CBP are primarily directed toward enforcing U.S. import and export laws and facilitating legitimate trade, as well as collecting duties, fees, and other assessments (more than $23 billion in fiscal year 2002). Other agencies also play important roles, such as the departments of Agriculture and State, which have relatively broad roles with respect to trade agreement activities. 
The departments of the Treasury and Labor have more specialized roles, such as advising on financial services or labor and workers’ rights issues. Federal trade policy development and monitoring and enforcement efforts are coordinated through an interagency mechanism comprising several management- and staff-level committees and subcommittees. The number of authorized full-time staff at USTR, Commerce’s Import Administration, and Commerce’s Market Access and Compliance division has increased in recent years (see fig. 2). However, actual staff levels are still in the process of catching up with authorized levels in Commerce and USTR offices. USTR has requested additional staff resources for 2004. As of January 23, 2003, CBP had 3,269 positions dedicated to performing trade-specific functions: 2,263 specialists, auditors, and attorneys and 1,006 associated positions. These staff carry out trade activities such as auditing trade compliance; processing entry documents; collecting duties, taxes, and fees; assessing and collecting fines and penalties for noncompliance; and advising on tariff classification issues. CBP is expected to maintain these staff levels, as the Homeland Security Act of 2002 stipulates that the Secretary of Homeland Security may not reduce the staffing levels attributable to such functions on or after the effective date of the act. In addition, more than 18,000 CBP inspectors perform trade and nontrade functions, depending on the nature of their assignment. For example, inspectors may screen and inspect cargo for illegal transshipment of textiles, counterfeit cigarettes, illegal drugs, and other contraband and enforce compliance with U.S. trade and immigration laws. After September 11, 2001, combating terrorism became the priority mission for the U.S. Customs Service and remained so when the Customs Service was transferred to the Department of Homeland Security and incorporated into CBP. 
While it is too soon to tell how the increased importance of security will affect the implementation of CBP’s trade-related activities in the long run, some short-term shifts in human capital from trade to nontrade functions have occurred. As part of its focus on terrorism, CBP has implemented new programs to screen high-risk containers for weapons of mass destruction at overseas ports and to improve security in the private sector’s global supply chain. CBP has made progress in getting these programs up and running but has not devised systematic human capital plans to meet long-term staffing needs for both programs. The increased importance of security requires human capital strategies that link with the goals of combating terrorism and facilitating trade to establish accountability and ensure effective performance. The historical mission of the U.S. Customs Service has been to collect customs revenues and ensure compliance with trade laws, but this mission has shifted over time. For example, in the 1970s Customs expanded its functions to include the interdiction of narcotics entering the United States. Since September 11, 2001, combating terrorism has become Customs’ priority mission, culminating in the creation of CBP on March 1, 2003. On that date, the U.S. Customs Service was transferred from the Department of the Treasury to the Department of Homeland Security as part of the Homeland Security Act of 2002. Figure 3 illustrates the range of trade and nontrade activities that CBP performs. While two of the nine key mission-related offices within CBP are primarily dedicated to trade, most offices and most of CBP’s more than 40,000 employees perform a range of activities that support both trade and nontrade goals. Moreover, about a fifth of the 3,269 CBP positions dedicated to performing trade activities are located in the trade-specific offices, but most are located in offices that support both goals.
Within this kind of organization, activities performed by persons in the offices that support both goals could shift from trade to nontrade work when security threat levels are higher, without any reduction appearing in the number of staff dedicated to trade. Moreover, the activities of the 18,000-plus CBP inspectors who perform trade and nontrade functions could shift to focus on combating terrorism when security concerns are heightened. Several examples illustrate the types of shifts from trade to security activities that have occurred. After September 11, 2001, CBP temporarily detailed approximately 380 inspectors to international airports around the country to strengthen security measures—reducing the number of inspectors available to work on trade activities. During the first 2 quarters of fiscal year 2002, CBP audits on export compliance were not conducted so that 150 inspectors could be temporarily redeployed to land ports along the northern border to strengthen security measures. During fiscal year 2002, the Compliance Measurement program, which determines compliance with U.S. trade laws, regulations, and agreements, was temporarily discontinued for 11 months because import specialists and inspectors were redirected to border security activities. Due to the limited compliance sampling, CBP was unable to calculate an overall trade compliance rate for fiscal year 2002. Moreover, compliance measurement helps ensure the quality of trade data, and unreliable trade data increase the risk that critical threats will not be identified. In fiscal year 2003, 3 of 14 scheduled textile production verifications were canceled when the national security alert level increased, so that the verification teams could remain at their ports and field offices to focus on security-related activities.
The textile production verification teams, composed of CBP import specialists and special agents, examine production facilities in nations where there is potential for illegal transshipment of textiles. While the Homeland Security Act stipulates that the Secretary of the Department of Homeland Security may not reduce the staffing levels attributable to specific trade-related activities, our examination found that measuring inputs such as the number of staff assigned to trade-related positions does not adequately capture possible shifts away from trade activities—the number of people assigned to trade-related positions may remain the same, but the focus of their work may shift to nontrade duties. In addition, positions not covered by the legislation, such as inspectors, who conduct both trade and nontrade activities, may increasingly shift their focus away from trade and concentrate on homeland security activities. Measuring changes in CBP’s outputs and outcomes will be important in assessing how the increased emphasis on combating terrorism and Customs’ transfer to the Department of Homeland Security have affected trade activities and whether human capital strategies need to be readjusted accordingly. Responding to heightened concern about national security since 9/11, CBP assumed the lead role in improving ocean container security and reducing the vulnerabilities associated with the overseas supply chain. In November 2001, CBP initiated the Customs-Trade Partnership Against Terrorism program, under which companies agree to voluntarily improve the security of their supply chains in return for reducing the likelihood that their containers will be inspected for weapons of mass destruction. In January 2002, CBP also initiated the Container Security Initiative, whereby CBP officials are placed at strategic foreign seaports to screen cargo manifest data for ocean containers to identify those that may hold weapons of mass destruction.
We reported in July 2003 that CBP had not taken adequate steps to incorporate human capital planning, develop performance measures, or plan strategically—factors crucial to the programs’ long-term success and accountability. Initially, 10 officials were assigned to roll out the Customs-Trade Partnership Against Terrorism. Under the program, companies enter into partnership agreements with CBP and agree to self-assess their supply chain security practices and document them in a security profile. These 10 officials provide guidance to companies on how to prepare their security profiles as well as review the completed security profiles and prepare feedback letters. As of May 2003, more than 3,300 agreements had been signed, 1,837 security profiles reviewed, and 1,105 feedback letters prepared. However, early on CBP realized that it did not have a cadre of staff with the skills necessary to conduct site visits to observe supply chain practices and make substantive recommendations for improving security. In October 2002, CBP began the process of developing a new position, called “supply chain specialist,” to review company security profiles, visit companies to validate information contained in the security profiles, and develop action plans that identify supply chain vulnerabilities and the corrective steps companies need to take. CBP was authorized to hire more than 150 supply chain specialists and expected to hire 40 in fiscal year 2003. As of October 2003, CBP had visited more than 130 companies to verify their supply chain security practices. While CBP officials acknowledged the importance of human capital planning, they said they had not been able to devote resources to developing a human capital plan that outlines how the program will increase its staff 15-fold and implement program elements that require specialized training.
The Container Security Initiative (CSI) seeks to deploy 120-150 inspectors, intelligence research analysts, and agents to 30 overseas ports by the end of fiscal year 2004. CBP eventually plans to expand to 40 to 45 ports. Deploying four- to five-person CSI teams to foreign ports will be a complex, multiyear task. CBP seeks candidates with the specialized skills needed to review cargo manifest data and identify suspicious containers for inspection as well as the diplomatic and language skills to interact with their foreign counterparts. While CBP officials told us that they did not experience significant difficulties in finding qualified staff to fill their short-term human capital needs from among the pool of existing CBP employees, CBP had only 12 ports up and running under CSI at that time (May 2003). In addition, the teams were on 120-day temporary duty assignments; however, CBP plans to create 2- to 3-year assignments to replace the 120-day temporary duty assignments. In spite of the potential challenges CBP could face, CSI officials had not devised a systematic human capital plan. To help ensure that the Container Security Initiative and the Customs-Trade Partnership Against Terrorism achieve their objectives as they transition from smaller start-up programs to larger programs with an increasingly greater share of the Department of Homeland Security’s budget, we recommended in July 2003 that CBP develop human capital plans that clearly describe how these programs will recruit, train, and retain staff to meet their growing demands as they expand to other countries and implement new program elements. Human capital plans are particularly important given the unique operating environments and personnel requirements of the two programs.
According to CBP officials, the professional and personal relationships that supply chain specialists and the Container Security Initiative teams build with their clients over time will be critical to the long-term success of both programs. For example, the success of the Customs-Trade Partnership Against Terrorism will depend, in large part, on the supply chain specialists’ ability to persuade companies to voluntarily adopt their recommendations. Similarly, a key benefit of the Container Security Initiative is the ability of CBP officials to work with foreign counterparts to obtain sensitive information that enhances their targeting of high-risk containers at foreign ports. If CBP fails to establish these good working relationships, the added value of screening manifest data at foreign ports could be called into question. In recent years, the United States has been pursuing a broad trade policy agenda whose cumulative impact has tested the limits of the government’s negotiating capacity. This agenda includes undertaking significant negotiating efforts in multilateral, regional, and bilateral arenas. The administration has characterized this effort as a strategy of “competitive liberalization.” First, the United States is actively involved in the challenging WTO round of negotiations launched in Doha, Qatar, in 2001. Second, the United States is also a co-chair in ongoing negotiations to create a Free Trade Area of the Americas (FTAA). Finally, with the passage of trade promotion authority in 2002, the United States has also launched a series of bilateral and subregional free trade agreement negotiations. The increase in the number of these negotiations at the same time that major global and regional trade initiatives are under way has strained available resources. The United States is committed to completing a new round of WTO negotiations.
In November 2001, the WTO, with strong backing from the United States, launched a new set of multilateral negotiations at its ministerial conference in Doha. As we reported in September 2002, the ministerial conference laid out an ambitious agenda for a broad set of new multilateral trade negotiations as described in the Doha Ministerial Declaration. The Doha mandate calls for the continuation of negotiations to liberalize trade in agriculture and services. In addition, it provides for new talks on market access for nonagricultural products and negotiations on trade and the environment, trade-related aspects of intellectual property rights, and a number of other issues. The breadth of the negotiations means that USTR will need to call on staff from a number of trade agencies to assist USTR throughout the process. USTR has also asked for additional staff to address the increased workload. Despite recent problems, WTO negotiations are likely to continue to command staff attention. Doha Round WTO negotiations are currently on hold following a breakdown at the September 2003 Ministerial Meeting in Cancun, Mexico, throwing the 2005 deadline for completion of the negotiations into doubt. After the ministerial, WTO officials initially canceled all special negotiating sessions and later called for a senior officials’ meeting by December 15, 2003. Despite these developments, however, USTR officials do not anticipate any decrease in staff workload on WTO issues because of the breadth of their ongoing WTO responsibilities and their efforts to restart the negotiations. We reported in April 2003 that, as the co-chairman, with Brazil, of the FTAA negotiations, USTR has faced a substantial expansion of its workload. Demands on USTR resources increased significantly in fall 2003, when USTR’s responsibilities as co-chair of the negotiations and host of the ministerial intensified due to preparations for the November 2003 Miami FTAA ministerial.
The co-chair’s formal tasks include coordinating with Brazil on a daily basis; providing guidance and management coordination to the FTAA negotiating groups and committees; and co-chairing key FTAA committees. In terms of resources, the U.S. team negotiating the FTAA—though perceived as highly capable—is small and stretched thin. Like past chairs, USTR has dedicated some staff specifically to the co-chair function, while other USTR staff work on advancing the U.S. position in the negotiations. In addition, USTR made arrangements with other agencies for temporary assistance. For example, Commerce provided a detailee who worked full time in Miami beginning in July, and State provided both foreign service officers and conference specialists to help host and conduct the ministerial. Bilateral negotiations are also putting pressure on trade agencies’ human capital resources. In addition to the WTO and the FTAA negotiations, USTR has notified Congress of its intent to pursue free trade agreements (FTAs) with a number of countries and has started negotiations toward this end. The passage of trade promotion authority in 2002 gave U.S. negotiators the opportunity to pursue trade agreements with other countries under a streamlined approval process in Congress. The administration sees FTAs—some with a single country (i.e., bilateral) and others with groups of countries (i.e., subregional)—as opportunities to promote the broader U.S. trade agenda by serving as models and breaking new negotiating ground. The United States is now negotiating four FTAs and intends to pursue others soon. In late 2002, it began negotiating the Central American Free Trade Agreement with Costa Rica, El Salvador, Guatemala, Honduras, and Nicaragua; the Southern Africa Customs Union Free Trade Agreement with South Africa, Botswana, Lesotho, Namibia, and Swaziland; and FTAs with Morocco and Australia.
In mid-2003, the administration also announced that it plans to negotiate FTAs with the Dominican Republic and Bahrain and in mid-November announced plans for an FTA with Panama. Thailand and Sri Lanka are also being considered as FTA partners. With the breakdown of WTO negotiations, the U.S. Trade Representative has stated that the administration will focus on FTAs with willing partners to continue making progress in trade liberalization. USTR officials acknowledge that human capital impacts are associated with conducting these FTAs. Each agreement involves a variety of different subjects, and negotiations on most of these agreements are complex. In particular, staffing constraints affect the timing of new negotiations, because staff with regional responsibilities are limited by the extent to which they can support additional negotiations. In addition, completed FTAs will require additional work to monitor compliance with the terms of the agreement. Pursuing an ambitious set of negotiations on an international, regional, and bilateral basis is having a cumulative impact on the human capital capacity of agencies that conduct trade negotiations. Since USTR’s staff size of 199 is relatively small—having been set up to coordinate policy among and draw expertise from executive branch agencies—it relies on the departments of State, Commerce, Agriculture, the Treasury, and others to provide assistance and additional issue area expertise. However, USTR officials told us that their staff are already responsible for supporting multiple negotiations. Although these officials stated that USTR has taken steps to work more efficiently with other agencies, they have nevertheless requested additional resources, as shown in figure 2, in order to face the anticipated negotiations workload. 
For example, a recent USTR budget request noted that current staff would not be able to handle the combination of WTO, FTAA, and FTA responsibilities required in the areas of services and investment. Shifting global forces have complicated trade agreement monitoring and enforcement efforts, thus posing human capital challenges for U.S. trade agencies. For example, we recently reported that the United States has become the most frequent defendant in WTO trade dispute resolution proceedings, particularly in the trade remedy area. As a result, U.S. agencies have had to devote substantial staff resources to handle these cases, and USTR has requested additional staff to address the upward trend in dispute settlement cases. We also reported that the U.S. economy has shifted toward services and high-technology industries, while the industry committee structure that provides advice to U.S. trade agencies has been heavily weighted toward the agriculture and manufacturing sectors. Changing the committee structure to reflect the current economy and keeping its membership current have required U.S. trade agencies to devote staff resources to this effort. Finally, we reported that China’s rapid expansion in the world economy presents U.S. trade agencies with significant human capital challenges as they strive to monitor and enforce compliance with trade agreements with China. Although the U.S. government has taken steps to address some of these new challenges, questions remain about the alignment of human capital with the rapidly growing set of responsibilities we discussed in our reports. These three examples demonstrate the kinds of shifts that occur in the trade arena and indicate the impacts that these changes can have on human capital.
In each of these cases, the shifting global forces require the United States to respond, and an effective response requires a clear link between the trade agencies’ human capital strategies and the goals of the agencies in that changing environment. Shifting global forces in the trade arena can be seen in recent trends in the WTO, the principal organization that regulates international trade, as members act to monitor and enforce trade agreements. For example, the United States has become by far the most frequent defendant in WTO dispute settlement cases. Many WTO disputes in recent years have concerned its members’ use of trade remedy measures whereby members impose duties or import restrictions after determining that a domestic industry has been injured or threatened with injury by imports. As shown in figure 4, the United States was a defendant in 30 of the 64 trade remedy cases brought from 1995 through 2002, with more than half of those cases filed since January 2000. The next most frequent defendants were Argentina, which had six cases, and the European Union, a defendant in five cases. On the other hand, the United States was less active than other WTO members in filing trade remedy cases. As figure 4 shows, the European Union was the most frequent complainant in the 64 trade remedy cases, and six WTO members filed more complaints than the United States did between 1995 and 2002. U.S. officials stated that some WTO trade remedy rulings have been extremely difficult to implement. For instance, some rulings have placed a greater burden on domestic agencies to establish a clearer link between increased imports and serious injury to domestic industry. As a result, officials said they would now have to expend more resources in conducting such investigations. In addition, U.S. officials said that the rulings have required U.S. 
agencies to provide more detailed explanations of their analyses and procedures for applying several methodologies used in trade remedy investigations. As a result of the increased WTO dispute settlement activity, U.S. trade agencies have had to devote substantial staff resources to handle these cases. According to Commerce officials, about one-half of the Import Administration’s 36 attorneys are significantly engaged in handling WTO litigation. They said Commerce has sufficient staff to handle the current workload unless the number of dispute settlement cases increases. According to USTR, the number of WTO cases its lawyers have handled has increased dramatically—from 11 in 1995, to 53 in 1997, to 69 in 1999, and to 91 in 2002. USTR expects this trend to continue, both because more WTO members are making active use of the dispute settlement system and because WTO membership itself has grown. Although the number of USTR General Counsel staff attorneys has roughly doubled since 1995 (with 13 new positions added in fiscal year 2001), the lawyers who were added are more than fully occupied with the current workload, USTR said. As a result, USTR has requested another monitoring and enforcement attorney for fiscal year 2004 to handle the increasing dispute settlement work. WTO trade remedy rulings and the broader set of proceedings within the WTO are an important component of the international set of obligations and agreements to which the United States is a party. Our review found that the United States has become a focus of complaints in trade remedy cases, and U.S. agencies stated that some of the rulings on these cases have important implications for the future, including potential workforce implications. This situation requires trade agencies to maintain human capital strategies that anticipate and respond quickly to any changes. Doing so would allow them to allocate staff accordingly to keep the trade functions current and relevant.
The changing structure of the U.S. economy has required a strategic realignment of some trade functions. For example, the trade policy advisory committee system performs an important function through which private sector committee members are able to provide input to trade agencies to help them negotiate, monitor, and enforce trade agreements; however, our September 2002 report found that the structure and composition of the trade advisory committee system had not been fully updated to reflect changes in the U.S. economy and U.S. trade policy. Although the U.S. economy had shifted toward services and high-technology industries since the 1970s, their representation on the trade advisory committees had not kept pace with their growing importance to U.S. output and trade. For example, certain manufacturing sectors, such as electronics, had fewer members than their sizable trade would have indicated. In other cases, U.S. negotiators reported that some key issues in negotiations, such as investment, were not adequately covered within the trade advisory committee system. In addition, committee rosters were only about 50 percent of their authorized levels, and some large companies did not participate, limiting the availability of advice for negotiators from certain committees. Our 2002 report also found that the resources USTR and the other trade agencies devoted to managing the trade advisory committee system did not match the tasks that needed to be accomplished to keep the system running reliably and well. For example, USTR officials told us that the current staffing levels in its responsible office—three positions with multiple responsibilities—did not allow them time to proactively manage committee operations. The head of the office said that simply restarting all the lapsed committees and keeping the rest of the system operating were occupying much of the time she could devote to the system.
Commerce, which co-administers many of the trade advisory committees, faced similar challenges. As discussed in our September 2002 report, Commerce officials said they had to focus their limited staff—an office of three persons—on rechartering the committees and appointment processes, which did not allow them to meet their responsibilities to attend all the committee meetings. We recommended that USTR work with Commerce and several other agencies to update the trade advisory system to make it more relevant to the U.S. economy and trade policy needs as well as to better match agency resources to the tasks associated with managing the system. According to recent information that agencies provided, their staff have planned and, in some cases, already taken a number of actions in response to our 2002 recommendations that they expect will increase efficiencies and reduce the workload. For example, Commerce and USTR have developed a plan for restructuring the industry advisory committees that officials believe better reflects the U.S. economy. Under the plan, some new committees are to be established, while the overall number of committees is to be reduced. The latter action is expected to reduce the administrative workload for Commerce’s staff, enabling them to focus more on substantive matters. The plan also calls for quarterly plenary meetings that will be open to all trade advisors. According to Commerce officials, bringing all advisors together at the same time will facilitate a higher level of representation by U.S. trade negotiators at the meetings; as a result, trade advisors will be better informed about ongoing negotiations. In turn, the officials said, trade advisors should be better prepared to deliberate on issues of interest to them and thus better able to provide advice to U.S. trade negotiators. In addition, the agencies revised their process for clearing proposed new members, thus reducing the amount of time it takes for clearance.
Moreover, a secure Web site has been established that allows members to review the texts of draft trade negotiating documents. In addition, the Assistant U.S. Trade Representative for Public Liaison now holds a monthly teleconference with the chairmen of all committees. During this call, USTR provides feedback to committees on previously raised areas of concern or recommendations, discusses USTR’s long-term negotiating calendar to highlight upcoming issues, and is open to discussion of general issues or concerns. According to Commerce and USTR officials, they have taken these actions without increasing the size of their authorized staffs. However, officials noted that Commerce staff, who did much of the implementing work on this issue, sometimes put in long hours to complete their tasks. In addition, in the case of Commerce, a position that had been vacant was filled, thus increasing the actual number of staff. While administering the trade advisory committee system is only one of many functions that trade agencies perform, the system does provide an important forum for candid discussion of trade negotiating topics with a wide range of private sector experts. Our review found that the system has not realized its potential, however, and that lack of administrative support was one of the reasons for this situation. While the agencies have taken actions to improve the trade advisory committee structure and its management, these kinds of improvements illustrate how U.S. trade agencies need to use human capital strategies that anticipate and respond to shifts in global market forces. Such an effort would allow the agencies to allocate staff accordingly to keep trade functions current and relevant. China’s rapid expansion in the world economy presents U.S. trade agencies with significant human capital challenges as they strive to monitor and enforce compliance with trade agreements. In 2002, China was the United States’ fourth largest trading partner.
The rapid growth of China’s exports to the United States and the continuing role of the government in China’s economy create a significant challenge for U.S. agencies and the Congress to ensure that U.S. businesses are treated fairly. Since China’s entry into the WTO on December 11, 2001, U.S. agencies have taken significant actions to monitor and enforce an extensive and complex set of WTO commitments. Among these actions are increasing staff resources, establishing an interagency group to focus on China trade issues, and considering organizational changes to better concentrate analytical staff resources. However, early experiences with monitoring China’s compliance with numerous and complex commitments and with WTO and U.S. government mechanisms for enforcing commitments illustrate just how difficult and resource intensive—particularly in terms of human capital—this task will be. U.S. trade with China has been characterized by a rapidly growing deficit, with a significant impact on a number of industries in the United States. As figure 5 shows, U.S. imports from China have grown rapidly since 1989, while U.S. exports to China have also expanded, but at a much slower rate. The growing trade deficit has been addressed at several congressional hearings and may require greater attention from Commerce, USTR, and other trade agencies. In 2002, imports from China totaled nearly $125 billion, accounting for nearly 11 percent of total U.S. imports and making China the third largest supplier of U.S. imports, after Canada and Mexico. The top five U.S. imports from China are shown in table 2 (see the app.). China was the seventh largest market for U.S. exports in 2002, and U.S. exports totaled about $21 billion, or 3.2 percent of total U.S. exports to the world (see table 3 in the appendix). China has made important progress during the past 25 years in opening its market to foreign goods and services as well as foreign investment, according to a USTR report.
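The import figures cited in the text imply a rough total: dividing a component value by its percentage share backs out the whole. A small sketch of that arithmetic follows; the result is only an approximation, since both cited inputs are rounded.

```python
def implied_total(component, share_pct):
    """Back out a total from a component value and its percentage share."""
    return component / (share_pct / 100.0)

# From the figures in the text: roughly $125 billion in imports from China
# at roughly 11 percent of total U.S. imports implies total U.S. imports
# on the order of $1.1 trillion in 2002.
print(round(implied_total(125, 11)))  # 1136 (in $ billions)
```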
Economic and financial reforms have introduced market forces into China, and privileges accorded state-owned firms are gradually being removed. However, the transition from a state-controlled economy to a market-driven one is far from complete. According to USTR, reforms have been particularly difficult in sectors that traditionally relied upon substantial state subsidies, as the central government continues to protect noncompetitive or emerging sectors of the economy from foreign competition. Moreover, USTR said, provincial and lower-level governments have strongly resisted reforms that would eliminate sheltered markets for local enterprises or reduce jobs and revenues in their jurisdictions, inhibiting the central government’s ability to implement trade reforms. During 2003, the Commerce Department held more than 20 roundtable discussions with U.S. manufacturers, both large and small, across the United States and heard similar complaints. According to Commerce’s under secretary for the International Trade Administration, no foreign country drew more attention as a source of concern than China. Manufacturers complained about rampant piracy of intellectual property, forced transfer of technology from firms launching joint ventures in China, a broad range of trade barriers, and capital markets that are largely insulated from free-market pressures. Another issue concerns the Chinese government’s decade-long practice of pegging the Chinese yuan to the dollar as a means, according to Chinese officials, of fostering economic stability; the absence of such stability could hurt China’s export industries and political stability. In order to maintain this fixed exchange rate, the government has had to intervene in the foreign exchange market and, according to Treasury officials, recently intervened very heavily to prevent the yuan from appreciating against the dollar.
Considerable debate has occurred among experts and observers about whether China’s intervention to maintain a lower-valued yuan is having a negative effect on U.S. manufacturers. This issue has been the subject of numerous congressional hearings with administration witnesses and was also a topic of discussion between Presidents Bush and Hu Jintao at the October 2003 Asia Pacific Economic Cooperation Economic Leaders’ Meeting and during the Secretary of the Treasury’s September 2003 trip to China. Also in September, the Group of Seven finance ministers issued a statement favoring more flexibility in exchange rates for large economies. In an October 30, 2003, report to the Congress, the Treasury Department concluded that no major trading partner of the United States was manipulating the rate of exchange between its currency and the U.S. dollar for the purposes of preventing effective balance of payments adjustments or gaining unfair competitive advantage in international trade. However, the report also found that China’s fixed exchange rate was not appropriate for a major economy like China and should be changed. According to the Treasury, the Chinese government has indicated it will move to a flexible exchange rate regime but believes taking immediate action would harm its banking system and overall economy. The growing importance of the Chinese economy for the United States has been a particular focus of attention from U.S. officials due to the implications for U.S. firms and for compliance with trade agreements. However, these issues require increasing attention from U.S. agency personnel. Moreover, as in the case of the debate surrounding the Chinese currency, these issues require appropriate expertise from U.S. trade and economic agencies, and a resolution of these matters may ultimately require a significant investment of time from these officials. 
As we reported in October 2002, China’s WTO commitments span eight broad areas and require both general pledges and specific actions. We identified nearly 700 individual commitments on how China is expected to reform its trade regime, as well as commitments that liberalize market access for more than 7,000 goods and nine broad service sectors in industries important to the United States, such as automobiles and information technology. Owing to the breadth and complexity of China’s commitments, China’s accession to the WTO has led to increased monitoring and enforcement responsibilities for the U.S. government. An illustration of the human capital difficulties involved in monitoring and enforcing China’s commitments relates to U.S. government efforts to establish an interagency group—the China WTO Compliance Subcommittee—whose mandate is to monitor the extent to which China is complying with its WTO commitments. Almost 40 officials, representing 14 departments and executive offices, participate in this subcommittee. The subcommittee was very active in 2002, meeting 11 times. In these meetings, officials evaluated and prioritized current monitoring activities, reviewed the steps that China has taken to implement its commitments, and decided on appropriate responses. Also, the subcommittee held a public hearing on September 18, 2002, and USTR issued its first annual report to Congress on China’s WTO compliance on December 11, 2002, as required by law. Still, it took some time for the subcommittee to get up to full speed. For example, the various participants had to work out their respective roles and responsibilities. USTR officials sought to delineate tasks related to carrying out their monitoring action plan in China; Washington, D.C.; and Geneva (the WTO’s headquarters), including expectations for information gathering, reporting, and setting initial priorities. 
Finally, USTR officials undertook several activities at the beginning of the year to educate themselves on China’s WTO obligations. This was important, because monitoring these obligations entailed new or expanded responsibilities for officials in the field, and many of the Washington-based officials were relatively new to their current jobs. For example, many of the USTR officials who had actively participated in the U.S. negotiations with China that resulted in those obligations changed jobs and/or left the government soon after China became a WTO member in 2001. USTR, Commerce, and other agencies have requested and received additional resources to carry out the added responsibilities arising from China’s accession to the WTO. For example, full-time equivalent staff in key units that are involved in China monitoring and enforcement activities across four key agencies increased from about 28 to 53 from fiscal year 2000 to 2002, based on agency officials’ estimates (see table 1). Commerce had the largest overall increase in staff devoted to China WTO compliance during this period. Specifically, staffing levels in Commerce’s Market Access and Compliance division increased from 7 to 22 between fiscal years 2000 and 2002. Additionally, Commerce’s Import Administration, which takes the lead on monitoring China’s commitments concerning subsidies and unfair trade practices, also significantly increased staff dedicated to China compliance activities during the same time period. Commerce has also increased the number of staff involved in the agency’s compliance efforts on the ground in China by creating a Trade Facilitation Office within the U.S. embassy in Beijing. In addition, the Department of Agriculture has increased the number of overseas staff involved in the agency’s China WTO compliance activities. 
A Commerce official told us that the Import Administration is thinking of combining all of its China work under one deputy assistant secretary (the current practice is to distribute the work among three deputy assistant secretaries). Doing so might enhance the office’s expertise and provide a better basis for assessing whether additional China expertise is needed. As we have reported in numerous studies and testimonies before this Subcommittee and others, effective alignment between federal agencies’ human capital approaches and their current and emerging strategic and programmatic goals is critical to the ability of agencies to economically, efficiently, and effectively perform their missions. The importance of such a close alignment is demonstrated in the area of the U.S. government’s trade activities, where heightened security concerns, an ambitious trade negotiating agenda, and an array of global economic forces all have implications for sound human capital management. Our testimony has cited illustrations in these three areas based on our recent work for Congress. In some areas, such as the CBP programs to secure the global supply chain, a failure to sufficiently plan the human capital approach means that the success of the programs is not assured. In other cases, such as the ambitious U.S. trade negotiating agenda, human capital resources may constrain the ability of the trade agencies to carry out their negotiations at the multilateral, regional, bilateral, and subregional levels. Finally, the array of shifting global forces described in some of our recent studies also demonstrates the implications for U.S. trade agency activities and, in many cases, the agencies’ human capital activities. 
For example, in the case of the rapid growth of China in the world economy and its WTO accession agreement, the demand for specialized expertise and focus on issues related to China’s economy have led to growth in personnel and efforts to reorganize to meet these new monitoring and compliance challenges. As your Subcommittee has stressed in its guidance and hearings regarding other parts of the federal government, agencies must constantly reevaluate their human capital strategies to adapt to and even anticipate major shifts in their environment. We believe that a number of studies we have performed for Congress in recent months are good illustrations—and further evidence—of the validity of that approach. Mr. Chairman and members of the Subcommittee, this concludes my prepared statement. I will be pleased to answer any questions you or other members of the Subcommittee may have at this time. For further contacts regarding this testimony, please call Loren Yager at (202) 512-4347 or Christine Broderick at (415) 904-2240. Individuals making key contributions to this testimony included Adam Cowles, Etana Finkler, Kim Frankena, Wayne Ferris, Rona Mendelsohn, Anthony Moran, and Richard Seldin. This appendix provides information on U.S. imports from and exports to China during the past 14 years. Table 2 provides data on the top five U.S. imports from China between 1989 and 2002. Together, imports of these five commodities accounted for about 59 percent of total imports from China in 2002, according to the Department of Commerce. Table 3 provides figures on the top five U.S. exports to China between 1989 and 2002. Together, these five commodities accounted for about 42 percent of total U.S. exports to China in 2002.
Recent developments in global trade have created human capital challenges for U.S. trade agencies. At least 17 federal agencies, with the Office of the U.S. Trade Representative (USTR) as the lead, negotiate, monitor, or enforce trade agreements and laws. These agencies' strategies for effectively aligning their current and emerging needs in handling international trade functions and their human capital resources are critical to improving agency performance. GAO was asked to summarize its recent studies to illustrate important human capital challenges arising from current trade developments as U.S. trade agencies strive to negotiate, monitor, and enforce existing trade agreements and laws. For this testimony, GAO discussed the challenges that USTR, the Commerce Department, and the Bureau of Customs and Border Protection are facing in light of three recent developments in international trade: (1) the increased importance of security, (2) the ambitious U.S. negotiating agenda, and (3) the shifting global trade environment. The importance of international trade to the U.S. economy has grown in the last decade, as have the responsibilities of federal agencies involved in implementing international trade functions. For example, the September 11, 2001, terrorist attacks have heightened the need for increased focus on security within the global trade environment. In response, the Bureau of Customs and Border Protection has implemented new programs to improve the security of the global supply chain. These new programs require greater attention to human capital strategies to ensure that they achieve their goals of facilitating trade while preventing terrorist acts. In addition, the administration has continued to pursue multilateral negotiations within the World Trade Organization and with the Free Trade Area of the Americas countries as well as a series of new, bilateral and subregional trade negotiations. 
The increase in the number of initiatives has strained available human capital, leading to a USTR request for additional staff. Finally, the shifting global trade environment has complicated efforts to monitor and enforce trade agreements. For example, the United States has become the most frequent defendant in World Trade Organization trade dispute proceedings. Furthermore, as the U.S. economy has shifted toward services and high-tech industries, the industry advisory committees that provide trade advice to the U.S. government have required structural realignment to reflect these changes. Also, China's growing influence in international trade has resulted in new challenges to its trading partners. These changing global forces require U.S. trade agencies to continuously ensure that their human capital strategies closely link to the nation's strategic trade functions.
The original U.N. Headquarters complex, located in New York City, was considered among the most modern facilities when it was constructed between 1949 and 1952. The United States financed construction of the original complex—the General Assembly, Secretariat, and Conference Building—by providing the United Nations with a no-interest loan equivalent to about $420 million in 2003 dollars. The rest of the complex—the Dag Hammarskjöld Library, the underground North Lawn Extension, South Annex, and Unitar Building—was built between 1960 and 1982 and was funded through the U.N. regular budget or private donations (see fig. 1). Currently, the complex accommodates the needs of 191 U.N. member countries and approximately 4,700 U.N. staff. However, the U.N. buildings do not conform to current safety, fire, and building codes and do not meet U.N. technology or security requirements. The United Nations estimates it would cost more than $2 billion over 25 years for repairs and system replacements in the absence of a major renovation. In June 2001, we reported that the Secretary-General’s first Capital Master Plan had defined the need for renovation, established the Secretary-General’s expectations for the project, and provided options for a multiyear effort to renovate the headquarters. The General Assembly reviewed the plan and approved $8 million to further develop the conceptual designs and associated cost estimates for the renovation. The General Assembly agreed with the Secretary-General’s assumptions, which provided the framework for the renovation planning. These assumptions included the following:

- The headquarters complex would remain at its current location in New York.
- The complex should be energy efficient, free of hazardous materials, and compliant with host city building, fire, and safety codes.
- The complex should meet all reasonable security requirements.
- Disruption to the work of the United Nations should be kept to a minimum. 
In August 2002, the Secretary-General presented the General Assembly with a more detailed Capital Master Plan and endorsed a renovation approach that included the temporary relocation of most U.N. staff and delegates to “swing space” in a proposed new building (see fig. 2 for the swing space location and app. II for more information on the renovation approach). In December 2002, the General Assembly adopted a resolution endorsing the renovation approach and approved $25.5 million for detailed designs and cost estimates to be developed in 2003. The General Assembly also approved $26 million to complete the design process in 2004–2005. The General Assembly does not plan to make a final decision on whether to proceed with the renovation until financing is secured. In developing the renovation conceptual plan and cost estimate, U.N. officials, their architect-engineering firm, and subconsultants followed a reasonable planning process that was consistent with leading practices. In addition, U.N. officials and their security subconsultant followed a process consistent with recognized guidelines to develop plans for improving security at the U.N. complex. The United Nations is still in the early planning stages of the project—the first phase of a five-phase process. For this reason, changes to the scope and cost of the proposed renovation are to be expected. The overall U.N. process to develop a conceptual plan followed leading facility acquisition practices.

- Competitively procured an architect-engineering firm. U.N. renovation officials used a competitive process to procure the services of an architect-engineering firm (see app. III for the names of the firms involved). The United Nations received 15 responses to its request for proposals from firms representing six different countries. In 2001, the United Nations selected and hired an architect-engineering firm to prepare a comprehensive renovation design concept and cost analysis. 
The architect-engineering firm used subconsultants with recognized expertise in construction disciplines such as cost estimation, security, and structural engineering.

- Obtained assessments of the complex’s condition. The architect-engineering firm and subconsultants reviewed condition assessments conducted in 1998 and performed additional inspections and assessments of the complex’s condition as needed. For example, they completed a new assessment of the Secretariat Building’s deteriorating window structure. U.N. officials subsequently concluded that it was more cost effective to replace the window structure than to renovate it, as had been previously planned.

- Retained firm to review the renovation conceptual plan. U.N. officials retained the services of a consulting engineer to assist them in reviewing the conceptual planning reports and recommendations.

- Involved U.N. managers in the planning. U.N. officials involved facility managers, such as those responsible for building and program management, security, and information technology, in the planning process to ensure that the renovation would meet their needs. The managers were asked to verify the conditions and problems identified by the architect-engineering firm and subconsultants and comment on whether the proposals would address their needs.

To develop a preliminary cost estimate, U.N. officials and the cost estimating subconsultant followed industry best practices established by the Construction Industry Institute.

- Defined the scope of the project and work plan, including responsibilities, schedule, and project budget. U.N. officials identified the building improvements that were to be included in the project scope: replacing heating, air conditioning, and electrical systems; refurbishing the window structure on the Secretariat Building; enhancing security measures; and modernizing communication and technology capabilities. The U.N. 
contract with the architect-engineering firm established the schedule for the cost estimating subconsultant to submit three cost estimates for approval. To compare the renovation approach budgets, U.N. officials also instructed the cost estimating subconsultant to develop one renovation approach within the budget parameters of the 2000 Capital Master Plan.

- Standardized the cost estimate format. U.N. officials used a standardized cost estimate format, including elements such as professional fees, labor and material costs, design and construction contingencies, and escalation costs to account for inflation. The standardized format enables U.N. officials to compare current and future cost estimates as the project progresses through the design process.

- Reviewed and checked cost estimate. U.N. officials reviewed the cost estimate to ensure that the conceptual planning estimates were within acceptable cost parameters. For the final review, U.N. officials hired a cost estimating consultant to peer review the cost estimate. While the peer reviewer’s assumptions were more conservative than the subconsultant’s assumptions, the peer reviewer’s cost estimate was within 5 percent of the subconsultant’s cost estimate. Based on the peer review, U.N. officials adjusted the cost estimate.

- Documented and reported the final cost estimate and range of accuracy. The cost estimating subconsultant delivered the final cost estimate, including contingencies that are meant to reflect the accuracy of the estimate, to U.N. renovation officials for the 2002 Capital Master Plan in August 2002. Consistent with industry practices, the subconsultant added a design contingency to allow for changes that typically occur during the design process. The subconsultant also added a construction contingency to allow for unforeseen or unknown costs. For example, structural conditions hidden by current construction may conflict with planned renovations and require contract changes. 
After the terrorist attacks of September 11, 2001, the United Nations enhanced security at the U.N. complex and added security measures to the Capital Master Plan (see app. IV for further information). U.N. officials and the U.N. security subconsultant identified the additional security measures through a process consistent with recognized security risk management guidelines. We have previously reported on these guidelines, which members of the U.S. intelligence and defense community follow and which can provide a sound foundation for effective security.

- Identified the assets to be protected and the impact of their potential loss. The security subconsultant identified assets at the United Nations to be protected, such as the buildings and the perimeter. The United Nations also evaluated the importance of each asset, the potential impact of its loss, and the methods to maintain operations if the assets were lost or damaged.

- Identified threats to those assets. U.N. security officials consulted with relevant federal and local U.S. officials to assess changing threats to the United Nations. According to U.N. officials and the security subconsultant, they designed the security initiative in the Capital Master Plan to address these threat levels.

- Identified vulnerabilities. The security subconsultant reviewed five previous vulnerability assessments and conducted its own assessment of the entire complex to verify vulnerabilities and identify needed security upgrades.

- Assessed risks (potential for loss or damage) and determined priorities. Following security guidelines from the U.S. Interagency Security Committee, the security subconsultant developed a risk assessment that reflected its analysis of the threats to the U.N. complex and its vulnerabilities. Based on the risk assessment, U.N. officials then prioritized the security needs of the complex. 
According to an expert from the Interagency Security Committee, the risk assessment process used to develop the planned security upgrades was reasonable based on the consultant’s report to the United Nations.

- Identified countermeasures that mitigate risks. The security subconsultant used the risk assessment to identify and recommend more than 100 security measures for the Capital Master Plan. U.N. officials organized these security upgrades into two components—those in the baseline scope of the Capital Master Plan and those in a package of options.

The security risk management guidelines are not a rigid set of procedures, but rather recognized steps to ensure that critical issues are considered when designing a security program. Additionally, U.N. security officials sought peer review input from other U.N. departments and public and private sector security experts when designing the security program. According to security officials from the Departments of State, Defense, and Energy and the General Services Administration, the U.N. process for developing the security initiatives in the Capital Master Plan was reasonable. U.N. officials have completed only the first phase of the renovation process by developing a conceptual plan for the proposed renovation. While U.N. planning efforts for the renovation have been reasonable so far, many decisions that can affect the project scope, schedule, and cost have yet to be made. For example, the General Assembly must decide whether it wants to include certain options that were proposed in the 2002 Capital Master Plan, such as installing extra back-up generators beyond those required by current building codes. Events outside U.N. officials’ influence, such as the availability of construction materials and labor, may also change the scope, schedule, and cost. In addition, the preliminary cost estimate is likely to change as the design phase progresses and decisions affecting the project’s scope are made. 
Construction Industry Institute research suggests that the final cost of a project may vary by plus or minus 30 to 50 percent of the estimated cost at this early phase of a project. While the United Nations has completed the conceptual planning phase, there are four remaining phases that renovation projects undergo, based on typical best practices in the design and construction industry (see fig. 3).

- Conceptual Planning—Various feasibility studies are typically conducted to define the scope of work based on owner expectations for performance, quality, cost, and schedule. The need for temporary space and the options for meeting this need are identified. Several alternative design solutions are identified, and one approach is selected.

- Design—The design matures into final construction documents comprising the drawings and specifications from which bids can be solicited. Estimated cost and schedule issues receive increasingly intense oversight as this phase proceeds. The project scope defined at this phase will greatly determine the cost of the project. In addition, the cost of scope changes made after the design phase is higher.

- Procurement—This phase refers to owner procurement of long lead-time equipment, such as unique or large electrical or mechanical equipment. Delays in the delivery of this equipment could affect the phasing and sequence of construction work and potentially cause delays.

- Construction—To execute the design, the services of a competitively procured construction contractor and specialty contractors and consultants are employed. The biggest challenge is the management of changes from the owner, design problems, or unknown conditions on the site. Construction is considered complete when the owner accepts occupancy of the building; however, work may continue for some time to identify and correct deficiencies in the construction work. 
- Start-up—Start-up begins with occupancy of the building and entails the testing of individual components and systems to measure and compare their performance against the original design criteria.

The Secretary-General has indicated that the United Nations anticipates that the United States would provide a no-interest loan to finance the U.N. renovation. Should the United States agree to finance the renovation in this manner, we estimate that the financial impact of the renovation to the federal government would be over $700 million. This amount would vary depending on the terms and conditions of the financing arrangement. In addition, we estimate that over a 30-year period, the federal government would not realize tax receipts of as much as $108 million (2003 present value dollars) on the federally tax-exempt bonds that would finance construction of the proposed swing space. The U.N. Development Corporation is seeking federal tax-exempt status for the bonds it plans to issue to finance the swing space building. We estimate that the potential financial impact to the federal government, as both lender to the United Nations and member state, would be over $700 million for a $1.2 billion no-interest loan. As a lender of a subsidized loan to the United Nations, the federal government would forego future interest payments and assume the risk of a potential U.N. default on the loan. The estimated financial impact to the federal government of a no-interest loan for $1,193 million (repayable over 25 years) would be about $563 million for the interest subsidy to cover foregone interest payments and $28 million for the default subsidy that covers the risk of a potential U.N. default (see table 1). If the United States provided a subsidized loan with interest rates of 1 percent or 2 percent, the federal government would provide an interest subsidy of $443 million and $322 million, respectively. 
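The interest subsidy in estimates like these is, in essence, the loan's face value minus the present value of the repayment stream when discounted at a market rate. The following sketch is illustrative only: it assumes a single lump-sum disbursement and a flat 5.7 percent annual discount rate, whereas the report's actual estimate rests on Treasury credit reform discount rates and a 5-year disbursement schedule.

```python
# Illustrative interest-subsidy calculation for a no-interest loan.
# Assumptions (hypothetical, for illustration): one lump-sum
# disbursement and a flat 5.7% annual discount rate; the actual
# estimate uses Treasury credit reform rates and a 5-year
# disbursement schedule, which this sketch omits.

LOAN = 1_193_000_000        # face value of the no-interest loan
YEARS = 25                  # repayment period
PERIODS = YEARS * 2         # equal semiannual installments
RATE = 0.057 / 2            # assumed semiannual discount rate

payment = LOAN / PERIODS    # no interest, so equal principal payments

# Present value of the repayment stream at the assumed market rate.
pv_repayments = sum(payment / (1 + RATE) ** t for t in range(1, PERIODS + 1))

# The interest subsidy is the value the lender gives up by charging 0%.
interest_subsidy = LOAN - pv_repayments
print(f"PV of repayments: ${pv_repayments / 1e6:,.0f}M")
print(f"Interest subsidy: ${interest_subsidy / 1e6:,.0f}M")
```

With this assumed rate the subsidy comes out in the same neighborhood as the report's $563 million figure; the exact result depends entirely on the discount rate and disbursement schedule used.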
If the United States agrees to finance the renovation, Congress would be asked to appropriate the interest and default subsidies before the loan is made, as provided for under U.S. credit reform law. As a member of the United Nations, the United States may also be assessed an additional amount to repay the loan principal. We estimate that the net present value of the U.S. assessment for the principal repayments made over a 25-year period would be $126 million. These repayments would need to be appropriated yearly. In estimating the financial impact to the federal government, we made several assumptions. We assumed that the federal government would disburse funds to the United Nations as a line of credit rather than a lump-sum payment. The federal government would disburse the funds each year over a 5-year construction period. To model the size of the disbursements, we used the latest U.N. estimates of the funds it would need each year to pay its contractors during the renovation. We assumed that the United Nations would repay the loan over the subsequent 25 years in equal semiannual payments based on an additional assessment of member states. Since the United States is assessed 22 percent of U.N. operating costs, we assumed the federal government would repay 22 percent of the loan principal. However, because the United States does not currently allow its U.N. assessments to go toward interest payments on U.N. external borrowing, we assumed that the federal government would not repay any of the interest on a 1 percent or 2 percent loan. Finally, we used the U.N. preliminary cost estimate of $1,193 million from the 2002 Capital Master Plan for the renovation, which includes scope options that the United Nations has yet to decide on. The federal government would also not realize tax receipts if the U.N. Development Corporation is granted tax exempt status for its construction bonds. 
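The forgone tax receipts on tax-exempt bonds can be sketched the same way: annual interest on the bonds times the average marginal tax rate, discounted over the life of the bonds. The bond size ($350 million), interest rate (6.4 percent), and tax rate (31 percent) come from the report's stated assumptions; the 5 percent discount rate is an assumption added here purely for illustration and is not given in the report.

```python
# Illustrative estimate of forgone federal tax receipts on tax-exempt
# bonds. Bond size, interest rate, and tax rate are the report's
# stated assumptions; the 5% discount rate is assumed here.

BONDS = 350_000_000   # estimated swing space construction cost
INTEREST = 0.064      # rate the bonds would earn without the exemption
TAX_RATE = 0.31       # average marginal tax rate on bondholders
YEARS = 30
DISCOUNT = 0.05       # assumed annual discount rate (not in the report)

annual_tax = BONDS * INTEREST * TAX_RATE   # taxes forgone each year

# Present value of the 30-year stream of forgone receipts.
pv_forgone = sum(annual_tax / (1 + DISCOUNT) ** t for t in range(1, YEARS + 1))
print(f"Annual forgone receipts: ${annual_tax / 1e6:,.1f}M")
print(f"Present value over {YEARS} years: ${pv_forgone / 1e6:,.0f}M")
```

With these assumptions the present value lands near the report's "as high as $108 million" upper bound; a lower discount rate pushes the figure higher, a higher rate lower.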
We estimate that the unrealized tax receipts over 30 years could be as high as $108 million in 2003 present value dollars. This estimate assumes that the U.N. Development Corporation would issue bonds for $350 million—the estimated construction cost for the swing space. We also assumed that without the tax exemption, the bonds would earn 6.4 percent interest and the average marginal tax rate would be 31 percent. According to corporation officials, the corporation would pay a higher interest rate on the bonds if it could not secure a tax exemption. The higher interest rate would raise the cost of financing the construction, which the corporation would then pass on to the United Nations in higher lease costs. Corporation officials stated that the United Nations could not afford the lease under its current operating budget without the tax exemption. To continue the planning process, key efforts must be pursued and critical milestones met. Given the General Assembly’s decision in December 2002 to proceed with design, the United Nations is seeking a financing commitment from the United States for the renovation. Neither the United States nor the United Nations has specified the nature of a financing commitment, according to U.S. and U.N. officials. Once an acceptable commitment is secured, the General Assembly will decide whether to proceed with the renovation, and the United Nations will be able to sign a lease with the U.N. Development Corporation. The corporation is also working to resolve a number of issues before it can begin construction on the swing space building in 2004. Figure 4 shows that securing a financing commitment is the next milestone in the renovation process. The Secretary-General anticipates that the United States will offer a no-interest loan to finance the renovation. For the United Nations to remain on its current renovation schedule, the United States would have to make a commitment to finance the renovation by October 2003. However, U.S. 
and U.N. officials stated that neither the United States nor the United Nations has specified the nature of a financing commitment. According to U.N. officials, the General Assembly will not make a decision to move forward with the renovation or sign a lease for the proposed swing space building without a financing commitment. According to U.N. Development Corporation officials, they will not begin construction on the proposed swing space building until the United Nations signs a lease. The corporation needs a signed lease before it can issue bonds to finance the construction of the swing space building. For the renovation project to stay on schedule, the proposed swing space building would have to be available for occupancy in early 2006. The U.N. Development Corporation must resolve two key issues in 2003 for the swing space to be available to the United Nations in 2006. First, the U.N. Development Corporation is seeking to obtain state and city approval to secure ownership of the proposed swing space site by the end of 2003 (see fig. 5). According to corporation officials, New York state approval is necessary because the site is currently part of a city park and lies outside of the corporation’s development zone. Corporation officials also said they are currently working to obtain support within the local community, which has expressed concerns about the loss of the park space. To compensate the community, the corporation proposes to build a bike path along the East River and the U.N. complex. However, according to corporation officials, as of April 2003, no agreement had been reached. Once the issues with the community group are resolved, the corporation must seek New York state legislation by June 2003 to add the proposed construction site to its development district, according to corporation officials. The corporation will then seek New York City approval of its plans for the site. City officials have expressed support for the swing space construction. 
Second, the corporation is seeking a federal tax exemption for the bonds it would issue to finance the swing space construction in early 2004. According to corporation officials, without a tax exemption, the annual lease cost to the United Nations could increase substantially, thereby making the project economically unfeasible. Under the 1986 Tax Reform Act, the U.N. Development Corporation and similar organizations lost the ability to issue bonds that are exempt from federal taxes. Corporation officials stated that they are working with members of Congress to introduce legislation that would restore a tax exemption for the swing space construction. As the project moves into the critical design phase, the United Nations has begun the process to hire a consultant who will manage and oversee the final design and eventual renovation of the U.N. complex. These initial efforts are important as they lay the foundation for the project management plan to ensure that the project’s scope, schedule, and costs are effectively controlled. In addition, U.N. oversight bodies anticipate needing additional resources and are developing audit plans to conduct oversight of the renovation project. The Department of State and the U.S. Mission to the United Nations have also initiated efforts to monitor the project. The United States has a substantial interest in monitoring the project, particularly if the United States agrees to finance it. A well-defined project management plan and adequate project management staff will be essential for the United Nations to successfully complete the renovation on time and within budget. U.N. officials recognize the need for and importance of a project management plan and adequate staff to implement the plan. In January 2002, the United Nations hired a project management consultant to help develop a broad framework for a project management plan. 
The consultant noted that once the United Nations establishes a project management team, it will need to develop its project management plan with detailed procedures. The consultant provided best practices recommendations for creating a project management plan to control costs and effectively implement the renovation. As of January 2003, the United Nations had started the process to hire a consultant to provide project management services, including developing the project management plan and then supporting the United Nations in managing the project through the design phase. According to the U.N. project management consultant’s report and Construction Industry Institute research, an effective project management plan will help the United Nations control costs and schedule. An effective project management plan includes three key elements. First, a clearly defined scope of work that remains relatively stable will provide the basis for project decisions. The scope should clearly define the project content and parameters, schedules, milestones for execution, budgets, and expected project outcomes. Second, policies and procedures that effectively manage scope and construction changes are important. These policies and procedures should provide a means for analyzing and documenting the reasons for changes and the implications of changes on cost, schedule, and quality. Third, timely and accurate progress reports on scope, cost, and schedule are important as a means of informing all relevant parties and coordinating changes. Regular reporting would identify key project issues that require discussion and impending issues that require resolution. While the United Nations recognizes the significance of developing a project management plan, it is important that the United Nations continue to incorporate best practices to ensure the plan’s effectiveness. 
Project management staff are essential to controlling schedule and cost changes because they will guide decision making and coordinate resources throughout the project. Project management staff would represent the United Nations as the owner of the project and facilitate coordination and communication between the design firms and construction contractors. The United Nations does not currently have sufficient staff to manage the project effectively but, according to U.N. officials, plans to hire additional staff and/or contractors. The United Nations added seven staff to its Capital Master Plan team during the conceptual planning phase, including two security officials, and plans to augment its management capability during design. In February 2003, the United Nations appointed an Assistant Secretary-General as the full-time executive director of the Capital Master Plan management project. As of March 2003, the United Nations had 12 people on the renovation project management staff. The United Nations is evaluating options for acquiring additional expertise and anticipates having a management team of 20 staff and contractors during the design phase and a team of about 40 staff at the peak of construction. In a February 2003 resolution, the General Assembly stressed the importance of oversight in implementing the Capital Master Plan and requested the Board of Auditors and all relevant oversight bodies, such as the Office of Internal Oversight Services, to initiate immediate oversight activities. In our last report, we noted that the Office of Internal Oversight Services did not have the expertise to perform an oversight role, but the office had agreed to assume such responsibility by hiring people with the necessary skills. Since then, the office has assigned one staff member to begin researching the Capital Master Plan on a part-time basis. However, the office has not developed a plan detailing the oversight functions it plans to pursue or hired additional staff. 
Officials from the oversight office stated that the office has requested funding so that it can hire contractors to help evaluate the Capital Master Plan, the project management plan, and the security upgrades. The officials anticipate that these contractors would have architectural and construction skills and knowledge of New York City building codes. The Board of Auditors has not yet conducted oversight of the Capital Master Plan but plans to complete an audit strategy by June 2003. The board has decided to review financial accounts, compliance with U.N. procurement regulations, and the effectiveness of Capital Master Plan management. After the board completes its audit strategy, it plans to determine the additional resources and expertise it needs to conduct oversight of the renovation. According to a board official, the United Nations approved initial funding of $35,000 in April 2003 to cover the audit of Capital Master Plan activities during 2003. However, the board will require additional resources for oversight as the renovation progresses. The United States has a substantial interest in the renovation project and its costs, particularly since the Secretary-General anticipates U.S. financing. As the project goes forward, the United States will decide whether to finance the renovation and will take part in other key decisions. The Department of State, the lead foreign affairs agency responsible for developing and implementing U.S. policy toward the United Nations, has assembled a task force to monitor U.N. implementation of the Capital Master Plan. While Department of State officials have consulted with other U.S. government officials concerning the renovation project, they have not yet created a formal framework that defines the task force’s mission and program goals. In addition, a department official stated that the department does not have the expertise to undertake effective monitoring of the project as it progresses. 
In our June 2001 report, we recommended that the Department of State develop a comprehensive U.S. position on matters pertaining to the renovation. We further recommended that if the United States were to take a position in support of the renovation, the department should consider obtaining expertise in construction management and financing. In June 2001, the department took a position in support of renovating the U.N. headquarters complex and, in August 2002, created an interagency task force to monitor the renovation project. The task force consists of six officials from the department and the Office of Management and Budget who work part-time on task force activities and a point person at the U.S. Mission to the United Nations who monitors U.N. renovation planning efforts. In addition, a senior official at the Ambassador level represents U.S. concerns on the renovation project to the United Nations and other member state representatives. To assist the task force, the department has also retained a part-time consultant with building construction and security experience. Although the Department of State has organized the task force, it has yet to develop an interagency framework that sets forth the task force’s mission or program goals. To monitor the project and coordinate the U.S. decision on whether to finance the renovation, the department will undertake diplomatic, federal financing and budgeting, and construction activities that will require participation from numerous government officials and organizations with the necessary expertise. A framework that describes the task force’s mission, program goals, and coordinating mechanisms will help ensure that each organization has a clear understanding of its role, responsibilities, and expectations. The development of this framework is important because the task force’s monitoring role is likely to continue through the four remaining phases of the renovation project. 
Furthermore, with established mission and program goals, the department could specify resource needs, including appropriate skills needed to achieve a successful outcome of the project. As the renovation proceeds and the management of the project increases in magnitude and complexity, the department can identify and obtain the critical skills that will be needed to monitor the project. Department officials have stated that they lack the needed expertise to monitor a renovation project of this magnitude. It is also important that the task force is staffed appropriately because the Department of State will have a number of key questions and issues to address over the life of the project, particularly if the United States agrees to finance the renovation. Some key questions and issues to be addressed include the following:
- If the United States offers to finance the renovation, how would it structure a loan to the United Nations? What should be the loan’s terms and conditions?
- Would the United States provide a loan that fully funds the renovation project? If there are cost increases during the renovation, would the United States provide additional financing and, if so, under what terms and conditions?
- Does the United Nations have internal controls in place to effectively manage changes in costs, scope, and schedule throughout the design and construction of the project?
- To what extent are U.N. officials coordinating the renovation design and construction with that of the proposed new visitors center?
- What types of incentives will the United Nations include in its contracts with design and construction firms to ensure that their work meets U.N. expectations?
- Since the design phase provides the greatest opportunity to make decisions that could minimize future building maintenance and operating costs, to what extent are these future costs being considered during the design phase?
- How will U.N. officials ensure that value-engineering principles—a formal technique used by contractors or independent teams to identify methods of constructing projects more economically—are applied during the design and construction phases?
The department’s position on each of these issues and the level of monitoring it will undertake will drive its resource and expertise needs. The United Nations has used a reasonable process thus far to develop its renovation plans, but it is still early in the project and changes in the schedule and cost estimates are to be expected. While the General Assembly has funded the project’s design, a commitment to finance the renovation will be needed by October 2003 for the United Nations to remain on its current schedule and sign a lease for the swing space. If the planned swing space is not available, the United Nations will have to reconsider its renovation approach, potentially leading to delays in the renovation process. The United States, however, has not yet taken a position on whether or how to finance the renovation. In addition, careful management and oversight of a project of this magnitude and complexity will be necessary to minimize schedule and scope changes. The renovation’s completion, final cost, and quality could be adversely affected if the United Nations does not provide adequate staff to manage the renovation and establish careful controls to limit scope changes. Continued monitoring by the Department of State will be critical as the project progresses and various issues arise, particularly if the United States finances the renovation. We recommend that the Secretary of State, in consultation with appropriate administration officials and other U.N. members, direct the U.S. 
representative to the United Nations to encourage the United Nations to complete and implement an effective project management plan that will guide decision making and coordination throughout the renovation project, and encourage the United Nations to provide the Office of Internal Oversight Services and the Board of Auditors with the resources needed to conduct effective oversight of the Capital Master Plan as the project progresses. In addition, to ensure that U.S. interests are effectively represented as the United Nations proceeds through the design phase, we recommend that the Secretary of State define the mission and program goals of the task force currently monitoring U.S. participation in the Capital Master Plan. We further recommend that the Secretary determine the expertise the task force needs to fulfill its role and ensure that it has the resources necessary to monitor the project over its duration. In commenting on a draft of this report, the United Nations and Department of State agreed with our findings and recommendations. However, the Board of Auditors disagreed with our recommendation calling for resources for the board to conduct oversight. A board official stated in April 2003 that the United Nations approved $35,000 for the board to conduct oversight of the $1.2 billion renovation project. We modified our recommendation to acknowledge the board’s initial funding, but we continue to recommend funding for the board’s oversight function over the course of the 6-year renovation project. Ensuring that the board has the necessary resources to conduct oversight will be important throughout the renovation. The board also provided technical comments to our report, which we incorporated as appropriate. Written comments from the United Nations, Department of State, Office of Internal Oversight Services, and Board of Auditors, along with our response, are in appendixes V through VIII. 
We provided the Office of Management and Budget with a draft of our report, but the office did not provide any comments. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of State; the U.S. Ambassador to the United Nations; the Director, U.S. Office of Management and Budget; the U.N. Secretary-General; and interested congressional committees. We also will make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8979. Additional GAO contacts and staff acknowledgments are listed in Appendix IX. To assess the reasonableness of the process used by the Secretariat for project planning and development, including the cost estimate and security plans, we reviewed U.N. records, including reports developed by the architect-engineering firm and security assessments prepared for the United Nations. We also researched industry practices related to construction project planning and development, cost estimating, and security plan development. We compared the Secretariat’s efforts in project planning and cost estimating with industry practices as identified by the Federal Facilities Council and the Construction Industry Institute. We also reviewed the assumptions supporting the cost estimates, including contingencies and swing space costs. In assessing the process the Secretariat used to develop its security plan, we used industry-recognized guidelines as criteria. We also obtained input from U.S. federal agency security experts on the process used to develop the United Nations’ security plan. 
In addition, we reviewed other recently implemented or planned security initiatives and their interface with the security components of the Capital Master Plan. We discussed various aspects of the project, including the process by which the Capital Master Plan was developed, with U.N. renovation project staff and consultants. To assess the potential financial impact to the federal government of the renovation, we modeled the financial impact of a no-interest, 1 percent, and 2 percent loan to the United Nations. We did not assess the financial impact to the federal government of the renovation if the United Nations sought other financing options, or if the United Nations did not undertake the renovation and repaired or replaced major building systems as they failed. We reviewed the most current renovation cost estimates, the renovation cash flow statement, the U.N. Development Corporation cost estimate for swing space, interest rates for corporate and tax-exempt bonds, interest rate assumptions in the President’s budget for fiscal year 2004 and the Economic Report of the President (1999), and the 1948 loan agreement between the United Nations and the United States. We used the Office of Management and Budget’s Credit Subsidy Calculator to estimate the interest and default subsidies for interest-subsidized loans under various terms. In doing so, we made several key assumptions including the interest rate, the U.S. disbursal of funds, and a repayment plan. We then discussed our assumptions with Department of State, Congressional Budget Office, and Office of Management and Budget officials. To analyze the critical milestones remaining in the renovation project, we reviewed the critical paths and the estimated schedules for the U.N. renovation and the U.N. Development Corporation’s proposed swing space building. We compared these critical paths and linked them to illustrate the necessary milestones and their sequence. 
We then clarified the sequence and duration of these milestones in interviews with Capital Master Plan staff at the United Nations, officials at the U.N. Development Corporation, and officials at the Department of State. In addition, we consulted with construction industry experts and legal counsel within GAO to evaluate and comment on the validity of the milestones’ sequence. To assess U.N. and Department of State efforts to monitor and oversee the renovation, we reviewed U.N. documents such as the Capital Master Plan, the U.N. renovation project management plan, the U.N. resolution pertaining to oversight of the Capital Master Plan, and the mission statements of the Office of Internal Oversight Services and the Board of Auditors. We subsequently spoke with U.N. and Department of State officials to determine their past and anticipated oversight roles and responsibilities in the U.N. renovation. In addition, we discussed the personnel required to adequately oversee the renovation, the funding received and requested for renovation monitoring, and the procedures in place for decision making and oversight. In conducting our review, we received the full cooperation of the United Nations, U.N. Development Corporation, U.S. Mission to the United Nations, and the Department of State. We conducted our review between June 2002 and April 2003, in accordance with generally accepted government auditing standards. In the August 2002 Capital Master Plan, the U.N. Secretary-General presented two approaches to renovating the U.N. headquarters complex. According to the Capital Master Plan, the unique conference room needs of the United Nations were a driving factor in the Secretary-General’s development of these approaches. The two approaches include temporarily relocating most U.N. activities during much of the construction work to swing space in a proposed new building near the U.N. complex (see fig. 6), or rotating U.N. 
staff through more limited swing space in a new four-story building constructed on the U.N. headquarters complex where the South Annex is currently located. The Secretary-General endorsed the first approach, and the General Assembly approved the development of renovation designs based on that approach. Under the first approach, most U.N. staff and activities would temporarily relocate to swing space in a proposed new office building near the U.N. complex during much of the renovation. U.N. consultants estimated that the renovation would take less than 5 years to complete and cost about $1.2 billion. As shown in table 2, the cost estimate includes a baseline scope—removing asbestos; replacing the electrical, plumbing, and climate control systems; and installing an upgraded fire suppression system. The cost estimate also includes leasing swing space for 4 years from the U.N. Development Corporation, a New York State nonprofit public benefit corporation tasked with constructing and leasing office space to the United Nations. Additional cost factors include the replacement of the Secretariat Building’s window structure and additional scope options that the General Assembly has not yet decided to include in the renovation. These options include additional safety and security measures, emergency backup provisions, and sustainability measures to address environmental goals. The cost estimate excludes construction of an additional conference room on the complex and security upgrades that the United Nations will complete before the renovation begins. The U.N. Development Corporation has offered to construct the swing space building. The new office building would be built on a park next to the U.N. Headquarters complex and connected to the existing complex by a tunnel. The United Nations currently plans to sign a long-term lease for the building with the U.N. Development Corporation. The building would be used as swing space during the 4 years of the renovation. 
Afterwards, the United Nations would relocate most of its New York City staff that currently work in office space outside the Headquarters complex to the swing space building. This would include relocating staff out of office space in two buildings currently leased from the U.N. Development Corporation. According to corporation officials, the corporation would then be able to sell these two buildings and provide the proceeds to New York City. Under the second approach, the United Nations would replace the South Annex, a two-story building on the U.N. Headquarters complex, with a four-story building to use as swing space. The United Nations would lease additional office space as needed for swing space. The renovation work would occur in stages with five to ten floors of the U.N. Headquarters renovated while staff rotate through the swing space. To avoid excessive disruption, the meeting rooms would be renovated at night and on weekends. Under this approach, the renovation would take 6 years and cost more than $1.3 billion (see table 3). The total cost estimate is higher under Approach 2 because the renovation work would occur in stages since the limited swing space could not house all U.N. headquarters staff. Also, the United Nations would construct an additional conference room on the Headquarters complex. Under this approach, the swing space cost—replacing the South Annex and additional commercial leasing—would be less than leasing a swing space building from the U.N. Development Corporation. As with Approach 1, this cost estimate includes replacement of the Secretariat Building’s window structure and various scope options that the General Assembly has not yet decided to include. This cost estimate also excludes the security upgrades that the United Nations will complete before renovation begins. 
Although Approach 2 would cause the least disruption because meeting chambers would be renovated when the rooms were not in use, the risk of cost overruns, delays, and disturbance, and the perceived risk of exposure to asbestos are higher. Table 4 presents the firms that were involved in the conceptual planning process and noted in this report as consultants to the United Nations or subconsultants to the U.N. architect-engineering firm. In response to the terrorist attacks of September 11, 2001, the United Nations implemented emergency security measures and also accelerated its plans to implement some of the security measures that had been originally planned for the renovation. The United Nations also worked with its consultants to enhance the security component of the revised Capital Master Plan. According to U.N. officials, to more effectively coordinate the interface between the upgrades made after September 11, 2001, and the security measures in the Capital Master Plan, the United Nations has hired the same consultant to work on both packages, assigned some internal staff to oversee both projects, and calculated the cost of any overlap in security upgrade initiatives. As shown in figure 7, the recent and planned security measures for the U.N. Headquarters complex comprise four initiatives. This emergency initiative was introduced in December 2001 in response to the September 11, 2001, terrorist attacks. In late 2001, the United Nations organized a Senior Emergency Management Group to deal with major emergency situations at the U.N. Headquarters. The Secretary-General identified the most immediate, short-term security needs and requested additional funding. These measures, estimated to cost $3.7 million, included enhancements to the perimeter security and upgrades to the emergency response system on the complex and were largely implemented by March 2002. 
The strengthening security initiative also came as a result of the assessments the United Nations conducted after September 11, 2001. This initiative includes more long-term upgrades relative to the urgent measures implemented under the previous initiative. Some of the upgrades in this initiative (worth approximately $17 million) were part of the Capital Master Plan, and their implementation was accelerated because of heightened security concerns. This initiative includes an upgraded access control system for the entire complex and renovations to the existing security control room. As of April 2003, the United Nations is designing these upgrades; U.N. officials expect them to be in place in 2004 at a total cost of $26 million. The U.N. security design consultant made 114 recommendations for the 2002 Capital Master Plan. The U.N. security staff, along with the consultant, prioritized those recommendations, creating a list of “highest priority” upgrades. These upgrades, totaling $77 million, were included as part of the baseline scope for the 2002 Capital Master Plan. The upgrades encompass additional visitor access control and blast resistance materials in certain areas on the complex. This initiative includes the U.N. security consultant’s remaining recommendations that either (1) were not the highest priority or (2) required coordination with New York City. Totaling $30 million, these upgrades include strengthening the building structure and installing vehicle barriers on some roads adjacent to the complex. The following are GAO’s comments on the letter from the U.N. Board of Auditors dated May 9, 2003.
1. The board stated that the United Nations had approved funds for the audit of the Capital Master Plan. A board official stated that approximately $35,000 was approved in April 2003 to conduct oversight of the renovation. We have included this information in the report. 
We modified our recommendation to acknowledge the board’s initial funding, but continue to recommend funding for the board’s oversight function over the course of the 6-year renovation project.
2. We modified the report to reflect the board’s comment.
3. We modified the report to reflect the board’s comment.
4. See comment 1.
5. We modified the report to reflect the board’s comment.
6. We modified the report to reflect the board’s comment.
7. No change made. The comment does not provide additional clarity to the report.
8. See comment 1. Since the renovation is likely to continue for a number of years, ensuring that the board has the necessary resources to conduct oversight throughout the project will be critical. Accordingly, we have modified our recommendation to clarify our position.
In addition to the individuals named above, Bruce Kutnick, Ronald King, Valérie L. Nowak, Maria Edelstein, Jeffrey T. Larson, Julia A. Roberts, Lynn Cothern, Jonathan Rose, and Barbara Shields made significant contributions to this report. The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. 
You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to GAO Mailing Lists” under the “Order GAO Products” heading.
The United Nations (U.N.) estimates that its planned renovation of the seven buildings on the Headquarters complex could cost almost $1.2 billion. As the host country and the largest contributor to the United Nations, the United States has a significant interest in this project. This report (1) assesses the reasonableness of the U.N. process to develop the renovation plans, (2) analyzes the potential cost to the United States, (3) identifies critical milestones before construction can begin, and (4) discusses efforts to monitor and oversee the project. U.N. officials followed a reasonable process consistent with leading industry practices and recognized guidelines in developing the headquarters renovation plan—the first phase of a five-phase renovation process. As the project advances, changes in scope, schedule, and cost are to be expected. To finance the renovation, the Secretary-General anticipates a no-interest loan from the United States. However, U.S. and U.N. officials stated that neither the United States nor the United Nations has specified the nature of any financing commitment. GAO estimates that the financial impact of the renovation to the federal government, including providing a $1.2 billion no-interest loan and repaying a share as a U.N. member, would be over $700 million, depending on the loan terms and conditions. Several critical milestones must be met for construction to begin as planned, including securing a financing commitment and signing a lease for a building where U.N. staff and delegates would relocate during the renovation. As the renovation project progresses, additional management, oversight, and monitoring are needed. The United Nations plans to complete a project management plan, which would help the United Nations control cost and schedule. 
While the United Nations has approved initial funding for the Board of Auditors to conduct oversight of the renovation and the board is preparing its audit strategy, the Office of Internal Oversight Services does not have the resources or audit strategies needed to effectively conduct oversight of the renovation. The Department of State has assembled a task force to monitor the renovation, but the department will need to define the task force's mission and program goals. Doing so would allow the department to develop strategies for employing the appropriate skill mix needed to achieve a successful outcome for the task force.
Medicaid is a health care program jointly funded by the federal government and states to provide care for certain low-income individuals. The federal government oversees states’ Medicaid programs, and typically pays from 50 to 83 percent of each state’s allowable Medicaid costs. Medicaid enrollees are entitled to receive a range of medical services, including hospital care, physician services, laboratory and other diagnostic tests, prescription drugs, dental care, and long-term care services. In addition, Medicaid provides assistance to low-income elderly individuals who are also eligible for Medicare, called “dual eligibles.” This assistance can include covering Medicare premiums and cost sharing. MSIS is a national Medicaid eligibility and claims data set, and is the federal source of Medicaid expenditure data that can be linked to a specific enrollee. State Medicaid agencies are required to provide CMS, through MSIS, quarterly electronic files approximately 45 days after a quarter has ended. These files contain (1) records of persons covered by Medicaid, known as the “eligible file,” and (2) adjudicated claims for medical services reimbursed by the Medicaid program, known as the “paid claims file.” Each state’s eligible file contains one record for each person covered by Medicaid for at least 1 day during the reporting quarter. Individual eligible files consist of demographic and monthly enrollment data. Paid claims files contain information on medical service-related claims and capitation payments. MSIS data include enrollees’ eligibility status for Medicaid and the Children’s Health Insurance Program (CHIP), types of services received by enrollees, and expenditure data. MSIS data are used for policy analysis, program utilization, and forecasting expenditures. However, MSIS data are not used to determine the federal share of Medicaid expenditures, and are not used by the states to manage the daily operations of their Medicaid programs. 
The CMS-64 data set contains program-benefit costs and administrative expenses that are not linked to individual enrollees. State Medicaid agencies submit this information 30 days after a quarter has ended by means of the Quarterly Medicaid Statement of Expenditures for the Medical Assistance Program—also known as the form CMS-64—within the Medicaid Budget and Expenditure System. CMS-64 data are reported at a state aggregate level, such as a state’s total expenditures for such categories as inpatient hospital services and prescription drugs. Therefore, unlike MSIS, these data do not include individual expenditure data on the state’s enrollees or the services they received under Medicaid. Also unlike MSIS, CMS-64 contains expenditures that are not linked to specific enrollees, such as supplemental payments including Disproportionate Share Hospital (DSH) payments. CMS-64 data are the most-reliable and most-comprehensive information on Medicaid spending. Agency officials review expenditures submitted through CMS-64, and use the data to compute the federal financial participation for each state’s Medicaid program costs. Our reports have demonstrated the need for CMS to improve its oversight of this growing, complex program. In particular, federal internal-control standards, as documented in GAO’s Standards for Internal Control in the Federal Government, state that program managers need both operational and financial data to determine whether they are meeting their goals for accountability and efficient use of resources in order to make operating decisions, monitor performance, and allocate resources. Pertinent information should be identified, captured, and distributed in a form and time frame that permits people to perform their duties efficiently. These reports have also identified shortcomings with the MSIS and CMS-64 data sets, particularly in two areas: Medicaid program integrity and supplemental payments. 
In a recent report, we found that CMS should improve its reporting of key information about the National Medicaid Audit Program, including its expenditures and audit outcomes, program improvements, and plans for effectively monitoring the program. (See GAO, National Medicaid Audit Programs: CMS Should Improve Reporting and Focus on Audit Collaboration with States, GAO-12-627 (Washington, D.C.: June 14, 2012).) We also recently reported that the accountability and transparency of supplemental payments have been lacking in CMS-64. Specifically, our work found that states reported $32 billion in DSH and non-DSH Medicaid supplemental payments during fiscal year 2010. However, the exact amount of supplemental payments cannot be determined from CMS-64 because not all states reported their non-DSH supplemental payments separately from their regular payments. In an earlier report, we also found that information on non-DSH supplemental payments was incomplete because states did not provide full information to CMS regarding these payments. We noted that until reliable and complete information on states’ supplemental payments is available, federal officials overseeing the program and others will lack the information they need to review payments and ensure that they are appropriately spent for Medicaid purposes. MSIS Medicaid expenditure amounts nationwide were generally less than CMS-64 amounts. For fiscal years 2007, 2008, and 2009, total expenditures based on MSIS data for the nation were 86, 87, and 88 percent, respectively, of the amounts shown in CMS-64. In fiscal years 2007 through 2009, the difference in Medicaid expenditures between the two data sets decreased from about $46 billion in fiscal year 2007 to $43 billion in fiscal year 2009. For fiscal year 2009, the most-recent and most-complete data available, MSIS showed $323 billion in total expenditures compared with the $366 billion in CMS-64, a difference of $43 billion. (See table 1.) MSIS Medicaid expenditures for individual states were generally less than CMS-64 amounts. 
In fiscal years 2007 through 2009, states’ MSIS Medicaid expenditures ranged from 59 percent to 120 percent of those in CMS-64. In fiscal year 2009 alone, states’ MSIS Medicaid expenditures ranged from 59 to 119 percent of those in CMS-64. (See fig. 1.) Specifically, MSIS Medicaid expenditures were less than CMS-64 amounts in 40 states. These expenditures were greater than CMS-64 expenditures in 6 states, and were similar to CMS-64 expenditures in 5 states. (See fig. 2, which shows the differences in expenditures reported in MSIS and CMS-64 by dollar amount in fiscal year 2009.) Some—but not all—factors could be quantified to narrow the difference between MSIS and CMS-64 expenditures. In particular, we adjusted for expenditures that could not be attributed to individual beneficiaries—one of the key differences in the design of the data sets. However, we could not quantify the effect of other factors, such as inconsistent CMS guidance across the two data sets. MSIS is designed to report claims data, and CMS-64 is designed to reimburse states for their federal share of Medicaid expenditures. As we have noted, some expenditures that are required to be reported in CMS-64 do not appear in MSIS, such as when the expenditure is not tied to an individual enrollee’s claim. After adjusting the MSIS data to include expenditures for factors not related to individual enrollees’ claims, Medicaid expenditures for the nation based on MSIS data were 92, 93, and 94 percent of amounts shown in CMS-64 data, respectively, for fiscal years 2007, 2008, and 2009. (See table 2.) For fiscal year 2009, we were able to adjust MSIS data for four factors: DSH payments, Medicare premiums, national and state rebates for prescription drugs, and Medicaid health insurance payments. None of these factors were reported in MSIS because CMS officials indicated they were not attributed to an individual enrollee. DSH payments were included under the hospital expenditure category in CMS-64, but not in MSIS. 
We adjusted MSIS expenditure data to account for DSH payments to hospitals. For example, in fiscal year 2009, states reported approximately $88 billion in total hospital expenditures on CMS-64, which included approximately $18 billion for DSH payments. Total hospital expenditures in MSIS were about $53 billion. Adding the $18 billion in DSH payments to total hospital expenditures in MSIS increased the percentage of MSIS expenditures from 60 percent of CMS-64 expenditures to 81 percent. Even after this adjustment, MSIS hospital expenditures are $17 billion lower than those on CMS-64, indicating that there are additional factors that account for the difference in hospital-related expenditures between the two data sets. Medicare premiums were included in CMS-64, but CMS did not require states to report them in MSIS. In addition, Medicaid payments for enrollees with Medicare coinsurance and deductibles are included in MSIS, but within the various service types, and cannot be distinguished from other expenditures. In fiscal year 2009, total expenditures for Medicare as reported on CMS-64 were approximately $11 billion, whereas MSIS expenditures for the Medicare category were $0. Adjusting for the approximately $11 billion reported in fiscal year 2009 for Medicare premiums in CMS-64 increased the percentage of total Medicare expenditures in MSIS from 0 percent of CMS-64 expenditures to 92 percent, or a difference of $908 million. Thus, the difference in Medicare expenditures between the two data sets can be attributed primarily to the absence of Medicare premium expenditures in MSIS. Prescription drug rebates were included in CMS-64, but CMS did not require states to report them in MSIS. Prescription drug rebates are made by drug manufacturers to states in a lump sum payment for Medicaid enrollees who use specific drugs, and therefore are not connected to individual claims. 
In fiscal year 2009, total expenditures for the prescription drugs category, as reported on CMS-64, were initially about $25 billion, but were reduced to $16 billion once $10 billion in national and state prescription drug rebates were applied. Total prescription drug expenditures in MSIS were approximately $25 billion. Adjusting for the $10 billion reported for prescription drug rebates in CMS-64 decreased the percentage of MSIS expenditures from 161 percent of CMS-64 expenditures to 99 percent of CMS-64 expenditures. Thus, the difference in reported prescription drug expenditures can be almost entirely attributed to the rebates. Some Medicaid health insurance payments were included in CMS-64 that CMS did not require states to report in MSIS. In fiscal year 2009, total expenditures for the managed care and Medicaid premium assistance expenditure category, as reported on CMS-64, were approximately $82 billion, of which $3 billion were for Medicaid health insurance payments not reported in MSIS. Total managed care and Medicaid premium assistance expenditures, as reported in MSIS, were approximately $85 billion. Adjusting for the roughly $3 billion for Medicaid health insurance payments increased the percentage of MSIS expenditures from 100 percent of CMS-64 expenditures to more than 103 percent. Therefore, total managed care and Medicaid premium assistance expenditures in MSIS were greater than those reported in CMS-64. (Fig. 4 compares the MSIS baseline to the MSIS adjusted expenditures as a percentage of CMS-64, by expenditure category, for fiscal year 2009.) In fiscal year 2009, the difference between MSIS and CMS-64 was $43 billion. The difference was primarily the result of the different designs of each data set. CMS uses MSIS data for beneficiary-specific expenditures, while CMS-64 data are used to compute the federal financial participation for each state’s Medicaid program costs. 
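The adjustment mechanics described above reduce to adding CMS-64-only expenditures to the MSIS side of each comparison before recomputing the percentage. The following Python sketch is illustrative only: the function name is ours, and the dollar figures are the rounded fiscal year 2009 hospital amounts quoted in the text, so the resulting percentages approximate rather than exactly reproduce the report's results.

```python
# Illustrative sketch (not GAO's actual code) of the adjustment analysis:
# expenditures reported only on CMS-64 are added to the MSIS total before
# recomputing MSIS as a share of CMS-64.

def msis_share(msis_total, cms64_total, adjustment=0.0):
    """MSIS expenditures (plus any CMS-64-only adjustment) as a percent of CMS-64."""
    return round(100.0 * (msis_total + adjustment) / cms64_total)

# Rounded fiscal year 2009 hospital figures from the text, in billions:
# CMS-64 reported $88B (including $18B in DSH payments); MSIS reported $53B.
cms64_hospital = 88.0
msis_hospital = 53.0
dsh_payments = 18.0

print(msis_share(msis_hospital, cms64_hospital))                 # 60 (unadjusted)
print(msis_share(msis_hospital, cms64_hospital, dsh_payments))   # 81 (adjusted)
```

The same pattern applies to the Medicare premium and health insurance payment adjustments; the prescription drug rebate adjustment would enter as a negative amount, consistent with the decrease from 161 to 99 percent described above.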
However, even after adjusting for DSH payments, Medicare premiums, prescription drug rebates, and Medicaid health insurance payments, differences remain. In fiscal year 2009, total MSIS expenditure data, after adjustments, showed MSIS at 94 percent of CMS-64 expenditures, which left billions of dollars unexplained. The remaining differences between the two data sets are potentially explained by inconsistencies in CMS guidance and states’ reporting practices, neither of which can be quantified. In fiscal years 2007 through 2009, CMS provided states with inconsistent MSIS and CMS-64 guidance regarding expenditure definitions, reporting dates, and reporting of supplemental payments. Additionally, when compared with CMS-64, state MSIS data were often delayed beyond the time frames established by CMS, inconsistent in reporting payments for local government providers, and of poor quality. Taken together, these two data sets have the potential to offer a robust view of the Medicaid program, enhancing CMS oversight of aggregate spending trends, per beneficiary spending growth, and cross-state comparisons, all of which could be useful in improving the financial integrity of this high-risk program. This is critical given that Medicaid, a program on GAO’s high-risk list, has among the highest estimated improper payments of any federal program reporting such data. However, the usefulness of these data sets as oversight tools is limited because of delays in reporting and unnecessary inconsistencies between the two data sets, both of which are inconsistent with federal internal control standards. The 3-year lag in states’ reporting of MSIS data prevents its use for timely oversight of beneficiary-related utilization and other spending trends. For example, identifying a difference in hospital expenditures between MSIS and CMS-64 is of limited use when detected 3 years later. 
If states were meeting the current requirement of providing MSIS data 45 days after each quarter, then such comparisons could provide more useful and timely information. CMS has recently completed a pilot study aimed in part at improving MSIS data. CMS has indicated that it will begin implementing aspects of this initiative in all states by fiscal year 2014. One goal of this initiative is to integrate state reporting of MSIS with the reporting of CMS-64 data. However, CMS officials have indicated they have yet to determine a timeline for this goal. While the initial results of this pilot have not been finalized, improving the timeliness and consistency of MSIS and CMS-64 data through this effort could aid CMS’s understanding and oversight of this high-risk program. HHS reviewed a draft of this report and provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Health and Human Services, the Administrator of CMS, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.

We conducted two types of analysis of Medicaid expenditures for this report to compare the Medicaid Statistical Information System (MSIS) and CMS-64 data. First, we conducted a baseline analysis to compare total expenditures reported to MSIS and CMS-64 nationally and by state. We also compared reported expenditures by expenditure category. 
Second, we conducted an adjustment analysis. Specifically, we attempted to reconcile the expenditures by making adjustments to MSIS on the basis of the differences we identified in the MSIS and CMS-64 data sets. To the extent possible, we added expenditures reported in CMS-64, but not reported in MSIS, to total expenditures in MSIS. To determine the extent to which MSIS and CMS-64 data sets on Medicaid expenditures differ, nationally and by state, for fiscal years 2007 through 2009, we conducted a multistep analysis. Using total expenditures from both data sets, we calculated the expenditures reported in MSIS as a percentage of those reported in CMS-64. We also determined the percentage difference in total expenditures between the two data sets, by expenditure category.

Step 1: Obtain MSIS and CMS-64 Data
In order to identify the baseline of total Medicaid expenditures for fiscal years 2007 through 2009, we obtained from the Centers for Medicare & Medicaid Services (CMS) MSIS and CMS-64 data for each of the 50 states and the District of Columbia.
MSIS data: We obtained the MSIS Annual Person Summary File data, which CMS officials confirmed were appropriate for our analysis. The summary file includes monthly enrollment data, but only includes annual expenditure data by type of service. It is most comparable to CMS-64 because both data sets provide total expenditures for the fiscal year. The summary file also includes some enrollment and expenditure information on the Children’s Health Insurance Program (CHIP).
CMS-64 data: We used the CMS-64 net expenditures financial management report within the Medicaid Budget and Expenditure System. The financial management report is an annual account of states’ program and administrative Medicaid expenditures, including federal and state expenditures by expenditure category. The financial management report also contains information solely related to Medicaid. 
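The baseline calculation described above, MSIS expenditures as a percentage of CMS-64 with states classified as lower, similar, or higher, can be sketched as follows. This is a hypothetical illustration, not GAO's code: the state names and dollar amounts are invented, and the 2-percentage-point cutoff for "similar" is our assumption, since the report does not state the threshold it used.

```python
# Hypothetical sketch of the baseline comparison: MSIS expenditures as a
# percentage of CMS-64, with each state classified as lower, similar, or
# higher. The "similar" cutoff (within 2 percentage points of 100) is an
# assumption; the report does not define its threshold.

def classify_states(msis, cms64, tolerance=2.0):
    """Return {state: (percent, category)} comparing MSIS to CMS-64 totals."""
    results = {}
    for state, cms_total in cms64.items():
        percent = round(100.0 * msis[state] / cms_total, 1)
        if abs(percent - 100.0) <= tolerance:
            category = "similar"
        elif percent < 100.0:
            category = "lower"
        else:
            category = "higher"
        results[state] = (percent, category)
    return results

# Invented totals, in billions of dollars.
msis = {"State A": 5.9, "State B": 11.9, "State C": 9.9}
cms64 = {"State A": 10.0, "State B": 10.0, "State C": 10.0}
print(classify_states(msis, cms64))
# State A: 59.0% (lower); State B: 119.0% (higher); State C: 99.0% (similar)
```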
Step 2: Determine Medicaid Expenditures in MSIS for the Fiscal Year on the Basis of the Number of Medicaid Enrollees
Because MSIS includes both Medicaid and CHIP expenditures, we separated these expenditures to the extent possible. To do this, we first determined whether a beneficiary was eligible for Medicaid or CHIP by using the monthly eligibility code in MSIS. The Annual Person Summary File ties an entire month’s expenditures to an individual on the basis of his or her monthly eligibility code. The next step was to remove, to the extent possible, the CHIP expenditures in order to match MSIS expenditures with CMS-64. (CMS-64 does not include CHIP expenditures.) CMS officials indicated that states were able to report CHIP expenditures within MSIS if they had a Medicaid expansion-CHIP program (i.e., a CHIP program that operates as part of the Medicaid program), also known as M-CHIP. Consequently, we were able to distinguish between Medicaid and M-CHIP spending. We thus removed from all MSIS totals any M-CHIP expenditures to the extent possible. We determined the number of months in the fiscal year a beneficiary received benefits under Medicaid or M-CHIP. On the basis of this count, we prorated the enrollees’ total expenses for the fiscal year on the basis of the proportion of the year a person was enrolled in either M-CHIP or Medicaid. If a person was enrolled in the M-CHIP program for part of the year, then a portion of the annual spending was apportioned toward M-CHIP. For example, if a person was enrolled in M-CHIP for 3 months, then a quarter of the expenditures would be considered M-CHIP, and the remaining three-quarters of expenditures would be considered Medicaid. Consequently, the Medicaid expenditures include prorated amounts for Medicaid and M-CHIP enrollees. 
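The proration step above can be sketched in a few lines. This is an illustrative reading of the method, assuming the split is by each program's share of the enrollee's enrolled months; the function name and dollar amounts are ours, not CMS's or GAO's.

```python
# Illustrative sketch of the proration step: an enrollee's annual MSIS
# expenditures are split between Medicaid and M-CHIP in proportion to the
# months enrolled in each program (an assumption about the exact split).

def prorate(annual_spending, months_mchip, months_medicaid):
    """Split annual spending by each program's share of enrolled months."""
    total_months = months_mchip + months_medicaid
    mchip_amount = annual_spending * months_mchip / total_months
    return {"M-CHIP": mchip_amount, "Medicaid": annual_spending - mchip_amount}

# Example from the text: 3 months in M-CHIP out of a full year assigns one
# quarter of annual spending to M-CHIP.
print(prorate(12000.0, months_mchip=3, months_medicaid=9))
# {'M-CHIP': 3000.0, 'Medicaid': 9000.0}
```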
In contrast, CMS officials told us that until fiscal year 2010, states with a stand-alone CHIP program (i.e., one that is operated separately from a Medicaid program by the state; also known as S-CHIP) were not supposed to report these expenditures into MSIS despite the presence of the S-CHIP eligibility code. CMS officials indicated that any expenditures associated with S-CHIP should be assumed to be Medicaid expenditures despite the CHIP eligibility code. Therefore, we included any expenditures associated with the S-CHIP code as Medicaid expenditures. Lastly, we included as Medicaid expenditures any other expenditures associated with all other eligibility categories. These other eligibility variables included “unknown” and “ineligible,” as well as Medicaid. Consequently, for states with stand-alone S-CHIP programs and those that did not separate out M-CHIP expenditures, the total Medicaid expenditures may be inflated in MSIS. Similarly, any expenditures reported under the unknown and ineligible categories that were not Medicaid expenditures may also inflate Medicaid expenditures.

Step 3: Matching Inconsistent Definitions of Services between MSIS and CMS-64
MSIS and CMS-64 data consist of expenditures broken down by service types. We matched the definitions of Medicaid service types used in each data set. This is necessary because, in many instances, a one-to-one match of service types in MSIS to those in CMS-64 is not possible. For example, in fiscal years 2007 through 2009, the MSIS Annual Person Summary File had 29 service types, whereas CMS-64 had 43. (See table 3 for a list of MSIS and CMS-64 service types used in fiscal years 2007 through 2009.)

Appendix V: Adjusted MSIS Expenditures as a Percentage of CMS-64, by State and Expenditure Category, Fiscal Year 2009
Adjustments were made by adding expenditures reported in CMS-64 to those reported in the Medicaid Statistical Information System (MSIS). 
Specifically, we added Disproportionate Share Hospital payments, national and state rebates for prescription drugs, Medicaid health insurance payments, and Medicare premiums to the expenditures reported in MSIS. No adjustments were made for acute and long-term support services (LTSS)-noninstitutional and LTSS-institutional.

In addition to the contact named above, Robert Copeland, Assistant Director; Muriel Brown; Shaunessye Curry; Greg Dybalski; Sandra George; Giselle Hicks; Drew Long; Jessica Morris; and Monica Perez Nelson made key contributions to this report.

Medicaid: States Reported Billions More in Supplemental Payments in Recent Years. GAO-12-694. Washington, D.C.: July 20, 2012.
National Medicaid Audit Programs: CMS Should Improve Reporting and Focus on Audit Collaboration with States. GAO-12-627. Washington, D.C.: June 14, 2012.
High Risk Series: An Update. GAO-11-278. Washington, D.C.: February 16, 2011.
Medicaid: CMS Needs More Information on the Billions of Dollars Spent on Supplemental Payments. GAO-08-614. Washington, D.C.: May 30, 2008.
Medicaid Demonstration Waivers: Recent HHS Approvals Continue to Raise Cost and Oversight Concerns. GAO-08-87. Washington, D.C.: January 31, 2008.
Medicaid Financial Management: Steps Taken to Improve Federal Oversight but Other Actions Needed to Sustain Efforts. GAO-06-705. Washington, D.C.: June 22, 2006.
Major Management Challenges and Program Risks: Department of Health and Human Services. GAO-03-101. Washington, D.C.: January 1, 2003.
Medicaid Financial Management: Better Oversight of State Claims for Federal Reimbursement Needed. GAO-02-300. Washington, D.C.: February 28, 2002.
CMS, within the Department of Health and Human Services, and state Medicaid agencies jointly administer the multibillion-dollar Medicaid program, which finances health care for certain low-income individuals. Medicaid is on GAO’s high-risk list because of vulnerabilities to waste, fraud, abuse, and mismanagement. CMS has two data sets that report state Medicaid expenditures. The MSIS data set is designed to report individual beneficiary claims data. The CMS-64 data set aggregates states’ expenditures, which are used to reimburse the states for their Medicaid expenditures. However, neither data set provides a complete picture of Medicaid expenditures. GAO was asked to compare MSIS and CMS-64 data. This report (1) examines the extent to which MSIS and CMS-64 expenditure data differ and (2) where possible, quantifies the identified differences between the two data sets. GAO reviewed documents, compared Medicaid expenditure data, and interviewed CMS and state officials. GAO used fiscal years 2007 through 2009 data—the most-recent and most-complete data available. Medicaid expenditures in the Medicaid Statistical Information System (MSIS) were generally less than CMS-64 amounts. National expenditures in MSIS were 86, 87, and 88 percent of the amounts in CMS-64 in fiscal years 2007 through 2009, respectively. In fiscal year 2009, MSIS expenditures for states ranged from 59 to 119 percent of CMS-64. Specifically, 40 states reported lower expenditures in MSIS than CMS-64; 5 states and the District of Columbia reported higher expenditures; and 5 states reported similar levels of expenditures. GAO was able to quantify some, but not all, of the identified differences in expenditures between MSIS and the CMS-64. GAO adjusted MSIS for expenditures that were not attributed to individual beneficiaries--such as prescription drug rebates. These adjustments increased MSIS to 92, 93, and 94 percent of the amounts in CMS-64 in fiscal years 2007 through 2009, respectively. 
GAO could not account for the remaining differences in part because of inconsistencies in the Centers for Medicare & Medicaid Services (CMS) guidance between the two data sets. For example, CMS officials explained that expenditures for inpatient services as reported by a state in MSIS and as reported in CMS-64 are not necessarily for the same services. GAO also found that states do not submit timely MSIS information. CMS requires states to submit MSIS data within 45 days and CMS-64 data within 30 days of the end of the quarter. However, states' reporting of MSIS data can be up to 3 years late, whereas CMS-64 data are consistently reported on time. Also, MSIS expenditure data are considered less reliable when compared with CMS-64. GAO has reported that CMS will need more reliable data for assessing expenditures and measuring performance in the Medicaid program. MSIS and CMS-64 have the potential to offer a robust view of the Medicaid program, enhancing CMS oversight of aggregate spending trends, per beneficiary spending growth, and cross-state comparisons, all of which could be useful in improving the financial integrity of this high-risk program. However, delays in reporting MSIS data and inconsistencies between the two data sets limit their usefulness as oversight tools. CMS has recently completed a pilot study aimed in part at improving the timeliness and consistency of both systems' data. HHS provided technical comments on a draft of this report, which were incorporated as appropriate.
HCFA used a test methodology that was comparable to processes followed by other public insurers that have successfully tested and implemented such commercial systems. Other public insurers—such as the military’s TRICARE, Veterans Affairs’ CHAMPVA, and the Kansas and Mississippi Medicaid offices—each used four key steps to test their claims-auditing systems prior to implementation. Specifically, they (1) performed detailed comparisons of their payment policies with systems’ edits to determine where conflicts existed, (2) modified the commercial systems’ edits to comply with their payment policies, (3) integrated the systems into their claims payment systems, and (4) conducted operational tests to ensure that the integrated systems processed claims properly. This is a comprehensive approach that requires significant time to complete. For example, TRICARE took about 18 months for two sites and allowed about 2 years for its remaining nine sites. HCFA’s approach was similar. From contract award on September 30, 1996, through its conclusion 15 months later at the end of December 1997, both HCFA and contractor staff made significant progress in integrating the test commercial system and evaluating its potential for Medicare use nationwide. HCFA used both a policy evaluation team and a technical team to concentrate separately on these aspects of the test. A detailed comparison of the commercial system’s payment policies with those of Medicare identified conflicting edits—inconsistencies that in some cases would increase and in others decrease the amount of the Medicare payments. For example, the commercial system would pay for the higher cost procedure of those deemed mutually exclusive, while Medicare dictates paying for the lower cost procedure. (A mutually exclusive procedure would be, for instance, the same patient’s receiving both an open and a closed treatment for a fracture.) 
Conversely, the commercial claims-auditing system would deny certain payments for assistant surgeons, while Medicare allows them. These and all other identified conflicts were provided to the vendor, who modified the system’s edits to make them consistent with HCFA policy. The technical team carried out three critical tasks. First, it developed the design specifications and related computer code necessary for integrating the commercial system into the Medicare claims-processing software. Second, it integrated the claims-auditing system into the system that processes Medicare part B claims. Finally, the team conducted numerous tests of the integrated system to determine its effect both on processing speed and accuracy. HCFA management was kept apprised of the status of the test through regular progress reports and frequent contact with the project management team. HCFA found that the edits in this commercial system could save Medicare up to $465 million annually by identifying inappropriate claims. Specifically, HCFA’s analysis showed that the system’s mutually exclusive and incidental procedure edits would save about $205 million, and the diagnosis-to-procedure edits could save about $260 million. HCFA’s analysis was based on a national sample of paid claims already processed by Medicare part B and audited for inappropriate coding with HCFA’s internal software. We reviewed the reports of HCFA’s estimated savings, but did not independently verify the national sample from which these savings were derived. However, the magnitude of savings—$682 million, including the savings derived from HCFA’s internal software, which HCFA reported at $217 million for 1996—is in line with our 1995 estimate that about $600 million in annual savings are possible. On November 25, 1997, HCFA officials notified the Administrator of the successful test of the commercial system. This was a far different conclusion than the one reported by HCFA 2 months earlier, while testing was ongoing. 
At a September 29, 1997, hearing before this subcommittee, a senior HCFA official stated that the agency was testing the commercial system as a stand-alone system against Medicare’s claims-processing system. He testified that “for the month of August, our system, the CCI system achieves savings of $422,000 more than the system would have achieved if that would have been what we were using. We were outperforming a product.” However, as we testified at that same hearing, the test needed to compare the commercial system as a supplement to the existing one, rather than as a replacement. Before HCFA completed its test it did compare the commercial system as a supplement. This comparison showed that commercial systems offer the potential for substantial Medicare savings. Despite the successful outcome of the test, two early management decisions, if left unchanged, would have significantly delayed national implementation of claims-auditing software in the Medicare program. First, the use of the test system was limited to its single Iowa location, thereby requiring another contract for nationwide implementation. Second, HCFA’s initial plan following the test was to proceed with developing its own edits, rather than to acquire those available through commercial systems. This plan would not only have required additional time before implementation, but could well have resulted in a system less comprehensive in its capacity to flag suspect claims than what is available commercially. I would now like to provide some details surrounding both of these decisions. HCFA’s contract limited the use of the test system to its Iowa site and did not include a provision for implementation throughout the Medicare program if the test proved successful. As a result, additional time will now be needed to award another contract to implement the test system’s claims-auditing software or any other approach nationwide. 
According to a HCFA contracting official, it could take as much as a year to award another contract using “full and open” competition—the method normally used for such implementation. This entails preparing for and issuing a request for proposals, evaluating the resulting bids, and awarding the contract. HCFA’s estimate of up to $465 million in annual savings illustrates the cost of delaying nationwide implementation of such payment controls. Along with additional time and lost savings from the lack of early nationwide implementation, awarding a new contract could result in additional expense either to develop new edits or to substantially rework the new system’s edits to conform to HCFA’s payment policies if a contractor other than the one performing the original test wins the competition. If another contractor were to become involved, much of the work HCFA performed during the test period would have to be redone. Specifically, another company’s claims-auditing edits would have to be evaluated for potential conflict with agency payment policy. Other options were open to HCFA from the beginning. For example, HCFA could have followed the approach used by TRICARE, whose contract provided for a phased, 3-year implementation at its 11 processing sites following successful testing. According to HCFA’s Administrator, the agency is doing what it can to avoid any delays resulting from the limited test contract. The Administrator said HCFA is evaluating legal options to determine if other contracting avenues are available, options that would allow expedited national implementation of commercial claims-auditing software. 
They provided the following rationale: First, this approach could cost substantially less than commercial edits because HCFA would have the option of changing contractors for edit updates, it would not have to pay annual licensing fees, and the developmental cost would be much less than purchasing the capability commercially. Second, according to HCFA officials, this approach would result in HCFA-owned claims-auditing edits, which are in the public domain and consequently allow HCFA to disclose its policies and coding combinations to providers, as it currently does with the correct coding initiative edits. Officials also explained that if a commercial vendor bid, won, and agreed to allow its claims-auditing edits to enter the public domain, HCFA would allow the vendor to start with its existing edits, which should shorten development time. We found serious flaws in this approach—in terms of cost, overall effectiveness, and underlying assumptions. First, upgrading the edits by moving from the initial contract developer to one unfamiliar with them would be neither easy nor inexpensive; it is a major task that requires thorough clinical knowledge of the existing edits. Second, the annual licensing fees that HCFA would avoid with its own edits would be offset to some degree by the need to pay a contractor with the clinical expertise to keep the edits current. Third, while the commercial software could cost more than developing HCFA-owned edits, this increased cost has already been more than justified by HCFA’s test results demonstrating that commercial edits provide significantly more Medicare savings. Finally, the cost of delay is significant: HCFA has realized no savings from such commercial software over the past 6 years. 
Moreover, we found that HCFA’s plan to fully disclose its edits to the medical community is not required by federal law and is not followed by other public insurers; it could also result in limiting the number of potential contractors with an interest in bidding. In May 1995 HHS’ Office of General Counsel informed HCFA that no federal law or regulation precludes it from protecting the proprietary nature of the edits and the related computer logic used in commercial claims-auditing systems. Further, HCFA’s Deputy Director of the Provider Purchasing and Administration Group stated that the agency has no explicit Medicare policy requiring it to disclose to providers the specific edits used to audit their claims. Rather than disclosing the edits, other public insurers, such as CHAMPVA and TRICARE, notified providers that they were implementing the system, and supplied examples of categories of edits that would be used to check for such disparities as mutually exclusive claims. Finally, while it is true that development time would likely be shortened if a commercial claims-auditing vendor were awarded the contract and used its existing edits as a starting point, it is doubtful that such vendors would bid on the contract if resulting edits were to be in the public domain. This response was confirmed to us by an executive of a company that has already developed a claims-auditing system; he said he would not enter into such a contractual agreement if HCFA insisted on making the edits public because this would result in the loss of the proprietary rights to his company’s claims-auditing edits. HCFA’s plan to develop its own edits was also inconsistent with Office of Management and Budget (OMB) policy in acquiring information resources. HCFA has not demonstrated the cost-effectiveness of its plan to develop edits internally. 
In fact, a prime example showing otherwise is HCFA’s own estimate that every year it delays implementing claims-auditing edits of the caliber of those used in the commercial test system in Iowa, about $465 million in savings could be lost. Developing comprehensive HCFA-owned claims-auditing edits could take years, during which time hundreds of millions of dollars could be lost annually due to incorrectly coded claims. To illustrate: HCFA began developing its database of edits in 1991 and has continued to improve it over the past 6 years. While HCFA reported that its correct coding initiative identified $217 million in savings in 1996 (in the mutually exclusive and incidental procedure categories), this database did not identify an additional $205 million in those categories identified by the test edits, nor does it address the diagnosis-to-procedure category, where the test edits identified an additional $260 million in possible savings. HCFA has no assurance that its own edits would be as effective as those available commercially. This past March, after considering our findings and other factors, the HCFA Administrator said that the agency’s plans had changed. She said that HCFA plans to begin immediately to acquire and implement commercial claims-auditing software in as expedited a manner as possible. We are encouraged that after a slow start, HCFA now plans to move quickly to take advantage of the comprehensive claims-auditing capability that is available, and we are looking forward to seeing HCFA’s milestones for expeditiously implementing this capability. Typically, such milestones would include dates for awarding a contract for the commercial claims-auditing edits, initiating and completing implementation at the first Medicare site, and implementing the edits at the remaining Medicare processing sites. Mr. Chairman, this concludes my statement. I would be happy to respond to any questions that you or other members of the Subcommittee may have at this time. 
Pursuant to a congressional request, GAO discussed: (1) the Health Care Financing Administration's (HCFA) progress in testing and acquiring a commercial system for identifying inappropriate Medicare bills; (2) how HCFA tested this commercial system; (3) HCFA's initial management decisions and their consequences; and (4) HCFA's plans for immediate implementation. GAO noted that: (1) from contract award on September 30, 1996, through its conclusion at the end of December 1997, both HCFA and contractor staff made significant progress in integrating the test commercial system and evaluating its potential for Medicare use nationwide; (2) HCFA used both a policy evaluation team and a technical team to concentrate separately on aspects of the test; (3) a detailed comparison of the commercial system's payment policies with those of Medicare identified conflicting edits--inconsistencies that in some cases would increase and in others decrease the amount of the Medicare payments; (4) the technical team: (a) developed the design specifications and related computer code necessary for integrating the commercial system into Medicare claims-processing software; (b) integrated the claims-auditing system into the system that processes Medicare part B claims; and (c) conducted numerous tests of the integrated system to determine its effect both on processing speed and accuracy; (5) HCFA management was kept apprised of the status of the test through regular reports and frequent contact with the project management team; (6) HCFA's contract limited the use of the test system to its Iowa site and did not include a provision for implementation throughout the Medicare program if the test proved successful; (7) additional time will be needed to award another contract to implement the test system's claims-auditing software; (8) HCFA representatives recommended that the HCFA Administrator award a contract to develop HCFA-owned claims-auditing edits to supplement the correct coding 
initiative, rather than acquire these edits commercially; (9) GAO found serious flaws in terms of cost, overall effectiveness, and underlying assumptions; (10) HCFA's plan to develop its own edits was also inconsistent with Office of Management and Budget policy in acquiring information resources; (11) HCFA has not demonstrated the cost-effectiveness of its plan to develop edits internally; (12) HCFA plans to begin immediately to acquire and implement commercial claims-auditing software in as expedited a manner as possible; and (13) HCFA now plans to move quickly to take advantage of the comprehensive claims-auditing capability that is available.
Since our last high-risk update, progress has varied, but many of the 32 high-risk areas on our 2015 list have shown solid improvement. One area related to sharing and managing terrorism-related information is now being removed from the list. Agencies can show progress by addressing our five criteria for removal from the list: leadership commitment, capacity, action plan, monitoring, and demonstrated progress. As shown in table 1, 23 high-risk areas, or two-thirds of all the areas, have met or partially met all five criteria for removal from our High-Risk List; 15 of these areas fully met at least one criterion. Compared with our last assessment, 11 high-risk areas showed progress in one or more of the five criteria. Two areas declined since 2015. These changes are indicated by the up and down arrows in table 1. Of the 11 high-risk areas showing progress between 2015 and 2017, sufficient progress was made in 1 area—Establishing Effective Mechanisms for Sharing and Managing Terrorism-Related Information to Protect the Homeland—to be removed from the list. In two other areas, enough progress was made that we removed a segment of the high-risk area—Mitigating Gaps in Weather Satellite Data and Department of Defense (DOD) Supply Chain Management. The other eight areas improved in at least one criterion rating by either moving from “not met” to “partially met” or from “partially met” to “met.” We removed the area of Establishing Effective Mechanisms for Sharing and Managing Terrorism-Related Information to Protect the Homeland from the High-Risk List because the Program Manager for the Information Sharing Environment (ISE) and key departments and agencies have made significant progress to strengthen how intelligence on terrorism, homeland security, and law enforcement, as well as other information (collectively referred to in this section as terrorism-related information), is shared among federal, state, local, tribal, international, and private sector partners. 
As a result, the Program Manager and key stakeholders have met all five criteria for addressing our high-risk designation, and we are removing this issue from our High-Risk List. While this progress is commendable, it does not mean the government has eliminated all risk associated with sharing terrorism-related information. It remains imperative that the Program Manager and key departments and agencies continue their efforts to advance and sustain ISE. Continued oversight and attention are also warranted given the issue’s direct relevance to homeland security as well as the constant evolution of terrorist threats and changing technology. The Program Manager, the individual responsible for planning, overseeing, and managing ISE, along with the key departments and agencies—the Departments of Homeland Security (DHS), Justice (DOJ), State (State), and Defense (DOD), and the Office of the Director of National Intelligence (ODNI)—are critical to implementing and sustaining ISE. Following the terrorist attacks of 2001, Congress and the executive branch took numerous actions aimed explicitly at establishing a range of new measures to strengthen the nation’s ability to identify, detect, and deter terrorism-related activities. For example, ISE was established in accordance with the Intelligence Reform and Terrorism Prevention Act of 2004 (Intelligence Reform Act) to facilitate the sharing of terrorism-related information. Figure 1 depicts the relationship between the various stakeholders and disciplines involved with the sharing and safeguarding of terrorism-related information through ISE. The Program Manager and key departments and agencies met the leadership commitment and capacity criteria in 2015, and have subsequently sustained efforts in both these areas. For example, the Program Manager clearly articulated a vision for ISE that reflects the government’s terrorism-related information sharing priorities. 
Key departments and agencies also continued to allocate resources to operations that improve information sharing, including developing better technical capabilities. The Program Manager and key departments and agencies also developed, generally agreed upon, and executed the 2013 Strategic Implementation Plan (Implementation Plan), which includes the overall strategy and more specific planning steps to achieve ISE. Further, they have demonstrated that various information-sharing initiatives are being used across multiple agencies as well as state, local, and private-sector stakeholders. For example, the Program Manager has developed a comprehensive framework for managing enterprise architecture to help share and integrate terrorism-related information among multiple stakeholders in ISE. Specifically, the Project Interoperability initiative includes technical resources and other guidance that promote greater information system compatibility and performance. Furthermore, the key departments and agencies have applied the concepts of the Project Interoperability initiative to improve mission operations by better linking different law enforcement databases and facilitating geospatial analysis, among other things. In addition, the Program Manager and key departments and agencies have continued to devise and implement ways to measure the effect of ISE on information sharing to address terrorist and other threats to the homeland. They developed performance metrics for specific information-sharing initiatives (e.g., fusion centers) used by various stakeholders to receive and share information. The Program Manager and key departments and agencies have also documented mission-specific accomplishments (e.g., related to maritime domain awareness) where the Program Manager helped connect previously incompatible information systems. 
The Program Manager has also partnered with DHS to create an Information Sharing Measure Development Pilot that intends to better measure the effectiveness of information sharing across all levels of ISE. Further, the Program Manager and key departments and agencies have used the Implementation Plan to track progress, address challenges, and substantially achieve the objectives in the National Strategy for Information Sharing and Safeguarding. The Implementation Plan contains 16 priority objectives, and by the end of fiscal year 2016, 13 of the 16 priority objectives were completed. The Program Manager transferred the remaining three objectives, which were all underway, to other entities with the appropriate technical expertise to continue implementation through fiscal year 2019. In our 2013 high-risk update, we listed nine action items that were critical for moving ISE forward. In that report, we determined that two of those action items—demonstrating that the leadership structure has the needed authority to leverage participating departments, and updating the vision for ISE—had been completed. In our 2015 update, we determined that the Program Manager and key departments had achieved four of the seven remaining action items—demonstrating that departments are defining incremental costs and funding; continuing to identify technological capabilities and services that can be shared collaboratively; demonstrating that initiatives within individual departments are, or will be, leveraged to benefit all stakeholders; and demonstrating that stakeholders generally agree with the strategy, plans, time frames, responsibilities, and activities for substantially achieving ISE. 
For the 2017 update, we determined that the remaining three action items have been completed: establishing an enterprise architecture management capability; demonstrating that the federal government can show, or is more fully developing a set of metrics to measure, the extent to which sharing has improved under ISE; and demonstrating that established milestones and time frames are being used as baselines to track and monitor progress. Achieving all nine action items has, in effect, addressed our high-risk criteria. While this demonstrates significant and important progress, sharing terrorism-related information remains a constantly evolving work in progress that requires continued effort and attention from the Program Manager, departments, and agencies. Although no longer a high-risk issue, sharing terrorism-related information remains an area with some risk, and continues to be vitally important to homeland security, requiring ongoing oversight as well as continuous improvement to identify and respond to changing threats and technology. Table 2 summarizes the Program Manager’s and key departments’ and agencies’ progress in achieving the action items. As we have with areas previously removed from the High-Risk List, we will continue to monitor this area, as appropriate, to ensure that the improvements we have noted are sustained. If significant problems again arise, we will consider reapplying the high-risk designation. Additional Information on Establishing Effective Mechanisms for Sharing and Managing Terrorism-Related Information to Protect the Homeland is provided on page 653 of the report. In the 2 years since our last high-risk update, sufficient progress has been made in two areas—DOD Supply Chain Management and Mitigating Gaps in Weather Satellite Data—that we are narrowing their scope. DOD manages about 4.9 million secondary inventory items, such as spare parts, with a reported value of approximately $91 billion as of September 2015. 
Since 1990, DOD’s inventory management has been included on our High-Risk List due to the accumulation of excess inventory and weaknesses in demand forecasting for spare parts. In addition to DOD’s inventory management, the supply chain management high-risk area focuses on materiel distribution and asset visibility within DOD. Based on DOD’s leadership commitment and demonstrated progress to address weaknesses since 2010, we are removing the inventory management component from the supply chain management high-risk area. Specifically, DOD has taken the following actions: Implemented a congressionally mandated inventory management corrective action plan and institutionalized a performance management framework, including regular performance reviews and standardized metrics. DOD has also developed and begun implementing a follow-on improvement plan. Reduced the percentage and value of its “on-order excess inventory” (i.e., items already purchased that may be excess due to subsequent changes in requirements) and “on-hand excess inventory” (i.e., items categorized for potential reuse or disposal). DOD’s data show that the proportion of on-order excess inventory to the total amount of on-order inventory decreased from 9.5 percent at the end of fiscal year 2009 to 7 percent at the end of fiscal year 2015, the most recent fiscal year for which data are available. During these years, the value of on-order excess inventory also decreased from $1.3 billion to $701 million. DOD’s data show that the proportion of on-hand excess inventory to the total amount of on-hand inventory dropped from 9.4 percent at the end of fiscal year 2009 to 7.3 percent at the end of fiscal year 2015. The value of on-hand excess inventory also decreased during these years from $8.8 billion to $6.8 billion. 
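The scale of these reductions can be derived from the reported figures (a minimal sketch; the dollar values come from DOD's data as cited above, and the helper function is illustrative):

```python
def pct_decline(start: float, end: float) -> float:
    """Percentage decline from a starting value to an ending value."""
    return (start - end) / start * 100

# On-order excess inventory value: $1.3 billion (FY 2009) -> $701 million (FY 2015).
on_order_drop = pct_decline(1.3e9, 701e6)
# On-hand excess inventory value: $8.8 billion (FY 2009) -> $6.8 billion (FY 2015).
on_hand_drop = pct_decline(8.8e9, 6.8e9)

print(f"On-order excess value fell {on_order_drop:.0f}%")  # about 46%
print(f"On-hand excess value fell {on_hand_drop:.0f}%")    # about 23%
```

In other words, the value of on-order excess inventory fell by nearly half over the period, while on-hand excess inventory fell by roughly a quarter.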
Implemented numerous actions to improve demand forecasting and began tracking department-wide forecasting accuracy metrics in 2013, resulting in forecast accuracy improving from 46.7 percent in fiscal year 2013 to 57.4 percent in fiscal year 2015, the latest fiscal year for which complete data are available. Implemented 42 of our recommendations since 2006 and is taking actions to implement an additional 13 recommendations, which are focused generally on reassessing inventory goals, improving collaborative forecasting, and making changes to information technology (IT) systems used to manage inventory. Additional information on DOD Supply Chain Management is provided on page 248 of the report. Mitigating Gaps in Weather Satellite Data. The United States relies on two complementary types of satellite systems for weather observations and forecasts: (1) polar-orbiting satellites that provide a global perspective every morning and afternoon, and (2) geostationary satellites that maintain a fixed view of the United States. Both types of systems are critical to weather forecasters, climatologists, and the military, who map and monitor changes in weather, climate, the oceans, and the environment. Federal agencies are planning or executing major satellite acquisition programs to replace existing polar and geostationary satellite systems that are nearing or beyond the end of their expected life spans. The Department of Commerce’s National Oceanic and Atmospheric Administration (NOAA) is responsible for the polar satellite program that crosses the equator in the afternoon and for the nation’s geostationary weather satellite program; DOD is responsible for the polar satellite program that crosses the equator in the early morning orbit. 
Over the last several years, we have reported on the potential for a gap in satellite data between the time that the current satellites are expected to reach the end of their lifespans and the time when the next satellites are expected to be in orbit and operational. We added this area to our High-Risk List in 2013. According to NOAA program officials, a satellite data gap would result in less accurate and timely weather forecasts and warnings of extreme events—such as hurricanes, storm surges, and floods. Such degraded forecasts and warnings would endanger lives, property, and our nation’s critical infrastructures. Similarly, according to DOD officials, a gap in space-based weather monitoring capabilities could affect the planning, execution, and sustainment of U.S. military operations around the world. In our prior high-risk updates, we reported on NOAA’s efforts to mitigate the risk of a gap in its polar and geostationary satellite programs. With strong congressional support and oversight, NOAA has made significant progress in its efforts to mitigate the potential for gaps in weather satellite data on its geostationary weather satellite program. Specifically, the agency demonstrated strong leadership commitment to mitigating potential gaps in geostationary satellite data by revising and improving its gap mitigation/contingency plans. Previously, in December 2014, we reported on shortfalls in the satellite program’s gap mitigation/contingency plans and made recommendations to NOAA to address these shortfalls. For example, we noted that the plan did not sufficiently address strategies for preventing a launch delay, timelines and triggers to prevent a launch delay, and whether any of its mitigation strategies would meet minimum performance levels. 
NOAA agreed with these recommendations and released a new version of its geostationary satellite contingency plan in February 2015 that addressed the recommendations, thereby meeting the criterion for having an action plan. We rated capacity as partially met in our 2015 report due to concerns about NOAA’s ability to complete critical testing activities because it was already conducting testing on a round-the-clock, accelerated schedule. Since then, NOAA adjusted its launch schedule to allow time to complete critical integration and testing activities. In doing so, the agency demonstrated that it met the capacity criterion. NOAA has also met the criterion for demonstrating progress by mitigating schedule risks and successfully launching the satellite. In September 2013, we reported that the agency had weaknesses in its schedule-management practices on its core ground system and spacecraft. We made recommendations to address those weaknesses, which included sequencing all activities, ensuring there are adequate resources for the activities, and analyzing schedule risks. NOAA agreed with the recommendations and the Geostationary Operational Environmental Satellite-R series (GOES-R) program improved its schedule-management practices. By early 2016, the program had improved the links between remaining activities on the spacecraft schedule, included needed schedule logic for a greater number of activities on the ground schedule, and included indications on the ground schedule that the results of a schedule risk analysis were used in calculating its durations. In addition, the program successfully launched the GOES-R satellite in November 2016. Oversight by Congress has been instrumental in reducing the risk of geostationary weather satellite gaps. For example, Subcommittees of the House Science, Space, and Technology Committee held multiple hearings to provide oversight of the satellite acquisition and the risk of gaps in satellite coverage. 
As a result, the agency now has a robust constellation of operational and backup satellites in orbit and has made significant progress in addressing the risk of a gap in geostationary data coverage. Accordingly, there is sufficient progress to remove this segment from the high-risk area. Additional information on Mitigating Gaps in Weather Satellite Data is provided on pages 19 and 430 of the high-risk report. Below are selected examples of areas where progress has been made. Strengthening Department of Homeland Security Management Functions. The Department of Homeland Security (DHS) continues to strengthen and integrate its management functions and progressed from partially met to met for the monitoring criterion. Since our 2015 high-risk update, DHS has strengthened its monitoring efforts for financial system modernization programs by entering into a contract for independent verification and validation services to help ensure that the modernization projects meet key requirements. These programs are key to effectively supporting the department’s financial management operations. Additionally, DHS continued to meet the criteria for leadership commitment and a corrective action plan. DHS’s top leadership has demonstrated exemplary support and a continued focus on addressing the department’s management challenges by, among other things, issuing 10 updated versions of DHS’s initial January 2011 Integrated Strategy for High Risk Management. The National Defense Authorization Act for Fiscal Year 2017 reinforces this focus with the inclusion of a mandate that the DHS Under Secretary for Management report to us every 6 months to demonstrate measurable, sustainable progress made in implementing DHS’s corrective action plans to address the high-risk area until we submit written notification of the area’s removal from the High-Risk List to the appropriate congressional committees. 
Similar provisions were included in the DHS Headquarters Reform and Improvement Act of 2015, the DHS Accountability Act of 2016, and the DHS Reform and Improvement Act. Additional information on this high-risk area is provided on page 354 of the report. Strategic Human Capital Management. This area progressed from partially met to met on leadership commitment. The Office of Personnel Management (OPM), agencies, and Congress have taken actions to improve efforts to address mission critical skills gaps. Specifically, OPM has demonstrated leadership commitment by publishing revisions to its human capital regulations in December 2016 that require agencies to, among other things, implement human capital policies and programs that address and monitor government-wide and agency-specific skills gaps. This initiative has increased the likelihood that skills gaps with the greatest operational effect will be addressed in future efforts. At the same time, Congress has provided agencies with authorities and flexibilities to manage the federal workforce and make the federal government a more accountable employer. For example, Congress included a provision in the National Defense Authorization Act for Fiscal Year 2016 to extend the probationary period for newly hired civilian DOD employees from 1 to 2 years. This action is consistent with our 2015 reporting that better use of probationary periods gives agencies the ability to ensure an employee’s skills are a good fit for all critical areas of a particular job. Additional information on this high-risk area is provided on page 61 of the report. Transforming the Environmental Protection Agency’s Processes for Assessing and Controlling Toxic Chemicals. Overall, this high-risk area progressed from not met to partially met on two criteria—capacity and demonstrated progress—and continued to partially meet the criterion for monitoring due to progress in one program area. 
The Environmental Protection Agency’s (EPA) ability to effectively implement its mission of protecting public health and the environment is critically dependent on assessing the risks posed by chemicals in a credible and timely manner. EPA assesses these risks under a variety of actions, including the Integrated Risk Information System (IRIS) program and EPA’s Toxic Substances Control Act (TSCA) program. The IRIS program has made some progress on the capacity, monitoring, and demonstrated progress criteria. In terms of IRIS capacity, EPA has partially met this criterion by finalizing a Multi-Year Agenda to better assess how many people and resources should be dedicated to the IRIS program. In terms of IRIS monitoring, EPA has met this criterion in part by using a Chemical Assessment Advisory Committee to review IRIS assessments, among other actions. In terms of IRIS demonstrated progress, EPA has partially met this criterion as of January 2017 by issuing five assessments since fiscal year 2015. The Frank R. Lautenberg Chemical Safety for the 21st Century Act amended TSCA and was enacted on June 22, 2016. Passing TSCA reform may facilitate EPA’s effort to improve its processes for assessing and controlling toxic chemicals in the years ahead. The new law provides EPA with greater authority and the ability to take actions that could help EPA implement its mission of protecting human health and the environment. EPA officials stated that the agency is better positioned to take action to require chemical companies to report chemical toxicity and exposure data. Officials also stated that the new law gives the agency additional authorities, including the authority to require companies to develop new information relating to a chemical as necessary for prioritization and risk evaluation. Using both new and previously existing TSCA authorities should enhance the agency’s ability to gather new information as necessary to evaluate hazard and exposure risks. 
Continued leadership commitment from EPA officials and Congress will be needed to fully implement reforms. Additional work will also be needed to issue a workload analysis to demonstrate capacity, complete a corrective action plan, and demonstrate progress implementing the new legislation. Additional information on this high-risk area is provided on page 417 of the report.

Managing Federal Real Property. The federal government continued to meet the criterion for leadership commitment, now partially meets the criterion for demonstrated progress, and made some progress in each of the other high-risk criteria. The Office of Management and Budget (OMB) issued the National Strategy for the Efficient Use of Real Property (National Strategy) on March 25, 2015, which directs Chief Financial Officer (CFO) Act agencies to take actions to reduce the size of the federal real property portfolio, as we recommended in 2012. In addition, in December 2016, two real property reform bills were enacted that could address the long-standing problem of federal excess and underutilized property. The Federal Assets Sale and Transfer Act of 2016 may help address stakeholder influence by establishing an independent board to identify and recommend five high-value civilian federal buildings for disposal within 180 days after the board members are appointed, as well as develop recommendations to dispose of and redevelop federal civilian real properties. Additionally, the Federal Property Management Reform Act of 2016 codified the Federal Real Property Council (FRPC) for the purpose of ensuring efficient and effective real property management while reducing costs to the federal government. FRPC is required to establish a real property management plan template, which must include performance measures, and strategies and government-wide goals to reduce surplus property or to achieve better utilization of underutilized property.
In addition, federal agencies are required to annually provide FRPC a report on all excess and underutilized property, and identify leased space that is not fully used or occupied. In addressing our 2016 recommendation to improve the reliability of real property data, GSA conducted an in-depth survey that focused on key real property data elements maintained in the Federal Real Property Profile, formed a working group of CFO Act agencies to analyze the survey results and reach consensus on reforms, and issued a memorandum to CFO Act agencies designed to improve the consistency and quality of real property data. The Federal Protective Service, which protects about 9,500 federal facilities, implemented our recommendation aimed at improving physical security by issuing a plan that identifies goals and describes resources that support its risk management approach. In addition, the Interagency Security Committee, a DHS-chaired organization, issued new guidance intended to make the most effective use of physical security resources. Additional information on this high-risk area is provided on page 77 of the report.

Enforcement of Tax Laws. The Internal Revenue Service’s (IRS) continued efforts to enforce tax laws and address identity theft (IDT) refund fraud have resulted in the agency meeting one criterion for removal from the High-Risk List (leadership commitment) and partially meeting the remaining four criteria (capacity, action plan, monitoring, and demonstrating progress). IDT refund fraud is a persistent and evolving threat that burdens legitimate taxpayers who are victims of the crime. It cost the U.S. Treasury an estimated minimum of $2.2 billion during the 2015 tax year. Congress and IRS have taken steps to address this challenge. IRS has deployed new tools and increased resources dedicated to identifying and combating IDT refund fraud.
In addition, the Consolidated Appropriations Act, 2016, amended the tax code to accelerate Wage and Tax Statement (W-2) filing deadlines to January 31. We had previously reported that the wage information that employers report on Form W-2 was not available to IRS until after it issues most refunds. With earlier access to W-2 wage data, IRS could match such information to taxpayers’ returns and identify discrepancies before issuing billions of dollars of fraudulent IDT refunds. Such matching could also provide potential benefits for other IRS enforcement programs, such as preventing improper payments via the Earned Income Tax Credit. Additional information on this high-risk area is provided on page 500 of the report.

In addition to being instrumental in supporting progress in individual high-risk areas, Congress also has taken actions to enact various statutes that, if implemented effectively, will help foster progress on high-risk issues government-wide. These include the following:

Program Management Improvement Accountability Act: Enacted in December 2016, the act seeks to improve program and project management in federal agencies. Among other things, the act requires the Deputy Director of the Office of Management and Budget (OMB) to adopt and oversee implementation of government-wide standards, policies, and guidelines for program and project management in executive agencies. The act also requires the Deputy Director to conduct portfolio reviews to address programs on our High-Risk List. It further creates a Program Management Policy Council to act as an interagency forum for improving practices related to program and project management. The Council is to review programs on the High-Risk List and make recommendations to the Deputy Director or designee. We are to review the effectiveness of key efforts under the act to improve federal program management.
Fraud Reduction and Data Analytics Act of 2015 (FRDA): FRDA, enacted in June 2016, is intended to strengthen federal anti-fraud controls, while also addressing improper payments. FRDA requires OMB to use our Fraud Risk Framework to create guidelines for federal agencies to identify and assess fraud risks, and then design and implement control activities to prevent, detect, and respond to fraud. Agencies, as part of their annual financial reports beginning in fiscal year 2017, are further required to report on their fraud risks and their implementation of fraud reduction strategies, which should help Congress monitor agencies’ progress in addressing and reducing fraud risks. To aid federal agencies in better analyzing fraud risks, FRDA requires OMB to establish a working group tasked with developing a plan for the creation of an interagency library of data analytics and data sets to facilitate the detection of fraud and the recovery of improper payments. This working group and the library should help agencies to coordinate their fraud detection efforts and improve their ability to use data analytics to monitor databases for potential improper payments. Improper payments, which total billions of dollars, are a central part of the Medicare Program, Medicaid Program, and Enforcement of Tax Laws (Earned Income Tax Credit) high-risk areas.

IT Acquisition Reform Legislation, known as the Federal Information Technology Acquisition Reform Act (FITARA): FITARA, enacted in December 2014, was intended to improve how agencies acquire IT and enable Congress to monitor agencies’ progress and hold them accountable for reducing duplication and achieving cost savings.
FITARA includes specific requirements related to seven areas: the federal data center consolidation initiative, enhanced transparency and improved risk management, agency Chief Information Officer authority enhancements, portfolio review, expansion of training and use of IT acquisition cadres, government-wide software purchasing, and maximizing the benefit of the federal strategic sourcing initiative. Effective implementation of FITARA is central to making progress in the Improving the Management of IT Acquisitions and Operations government-wide area we added to the High-Risk List in 2015.

In the 2 years since the last high-risk update, two areas—Mitigating Gaps in Weather Satellite Data and Management of Federal Oil and Gas Resources—have expanded in scope because of emerging challenges related to these overall high-risk areas. In addition, while progress is needed across all high-risk areas, particular areas need significant attention.

While NOAA has made significant progress, as described earlier, in its geostationary weather satellite program, DOD has made limited progress in meeting its requirements for the polar satellite program. In 2010, when the Executive Office of the President decided to disband a tri-agency polar weather satellite program, DOD was given responsibility for providing polar-orbiting weather satellite capabilities in the early morning orbit. Data from this orbit are used to update weather observations and models. However, the department was slow to develop plans to replace the existing satellites that provide this coverage. Because DOD delayed establishing plans for its next generation of weather satellites, there is a risk of a satellite data gap in the early morning orbit. The last satellite the department launched, in 2014, called Defense Meteorological Satellite Program (DMSP)-19, stopped providing recorded data used in weather models in February 2016.
A prior satellite, called DMSP-17, is now the primary satellite operating in the early morning orbit. However, this satellite, which was launched in 2006, is operating with limitations due to the age of its instruments. DOD had developed another satellite, called DMSP-20, but plans to launch that satellite were canceled after the department did not certify that it would launch the satellite by the end of calendar year 2016. The department conducted a requirements review and analysis of alternatives from February 2012 through September 2014 to determine the best way forward for providing needed polar-orbiting satellite environmental capabilities in the early morning orbit. In October 2016, DOD approved plans for its next generation of weather satellites, called the Weather System Follow-on—Microwave program, which will meet the department’s needs for satellite information on oceanic wind speed and direction to protect ships on the ocean’s surface. The department plans to launch a demonstration satellite in 2017 and to launch its first operational satellite developed under this program in 2022. However, DOD’s plans for the early morning orbit are not comprehensive. The department did not thoroughly assess options for providing its two highest-priority capabilities, cloud descriptions and area-specific weather imagery. These capabilities were not addressed due to an incorrect assumption about the capabilities that would be provided by international partners. The Weather System Follow-on—Microwave program does not address these two highest-priority capabilities and the department has not yet determined its long-term plans for providing these capabilities. As a result, the department will need to continue to rely on the older DMSP-17 satellite until its new satellite becomes operational in 2022, and it establishes and implements plans to address the high-priority capabilities that the new satellite will not address. 
Given the age of the DMSP-17 satellite and uncertainty on how much longer it will last, the department could face a gap in critical satellite data. In August 2016, DOD reported to Congress its near-term plans to address potential satellite data gaps. These plans include a greater reliance on international partner capabilities, exploring options to move a geostationary satellite over an affected region, and plans to explore options for acquiring and fielding new equipment, such as satellites and satellite components to provide the capabilities. In addition, the department anticipates that the demonstration satellite to be developed as a precursor to the Weather System Follow-on—Microwave program could help mitigate a potential gap by providing some useable data. However, these proposed solutions may not be available in time or be comprehensive enough to avoid near-term coverage gaps. Such a gap could negatively affect military operations that depend on weather data, such as long-range strike capabilities and aerial refueling. DOD needs to demonstrate progress on its new Weather System Follow-on—Microwave program and to establish and implement plans to address the high-priority capabilities that are not included in the program. Additional information on Mitigating Gaps in Weather Satellite Data is provided on page 430 of the high-risk report.

On April 20, 2010, the Deepwater Horizon drilling rig exploded in the Gulf of Mexico, resulting in 11 deaths, serious injuries, and the largest marine oil spill in U.S. history. In response, in May 2010, the Department of the Interior (Interior) first reorganized its offshore oil and gas management activities into separate offices for revenue collection, under the Office of Natural Resources Revenue, and energy development and regulatory oversight, under the Bureau of Ocean Energy Management, Regulation and Enforcement.
Later, in October 2011, Interior further reorganized its energy development and regulatory oversight activities when it established two new bureaus to oversee offshore resources and operational compliance with environmental and safety requirements. The new Bureau of Ocean Energy Management (BOEM) is responsible for leasing and approving offshore development plans, while the new Bureau of Safety and Environmental Enforcement (BSEE) is responsible for lease operations, safety, and enforcement. In 2011, we added Interior’s management of federal oil and gas resources to the High-Risk List based on three concerns: (1) Interior did not have reasonable assurance that it was collecting its share of billions of dollars of revenue from federal oil and gas resources; (2) Interior continued to experience problems hiring, training, and retaining sufficient staff to oversee and manage federal oil and gas resources; and (3) Interior was engaged in restructuring its oil and gas program, which is inherently challenging, and there were questions about whether Interior had the capacity to reorganize while carrying out its range of responsibilities, especially in a constrained resource environment. Immediately after reorganizing, Interior developed memorandums and standard operating procedures to define roles and responsibilities, and facilitate and formalize coordination between BOEM and BSEE. Interior also revised policies intended to improve its oversight of offshore oil and gas activities, such as new requirements designed to mitigate the risk of a subsea well blowout or spill. In 2013, we determined that progress had been made, because Interior had fundamentally completed reorganizing its oversight of offshore oil and gas activities. As a result, in 2013, we removed the reorganization segment from this high-risk area.
However, in February 2016, we reported that BSEE had undertaken various reform efforts since its creation in 2011, but had not fully addressed deficiencies in its investigative, environmental compliance, and enforcement capabilities identified by investigations after the Deepwater Horizon incident. BSEE’s ongoing restructuring has made limited progress enhancing the bureau’s investigative capabilities. BSEE continues to use pre–Deepwater Horizon incident policies and procedures. Specifically, BSEE has not completed a policy outlining investigative responsibilities or updated procedures for investigating incidents—among the goals of BSEE’s restructuring, according to restructuring planning documents, and consistent with federal standards for internal control. The use of outdated investigative policies and procedures is a long-standing deficiency. Post–Deepwater Horizon incident investigations found that Interior’s policies and procedures did not require it to plan investigations, gather and document evidence, and ensure quality control, and determined that continuing to use them posed a risk to the effectiveness of bureau investigations. Without completing and updating its investigative policies and procedures, BSEE continues to face this risk. BSEE’s ongoing restructuring of its environmental compliance program reverses actions taken to address post–Deepwater Horizon incident concerns, and risks weakening the bureau’s environmental compliance oversight capabilities. In 2011, in response to two post–Deepwater Horizon incident investigations that found that BSEE’s predecessor’s focus on oil and gas development might have been at the expense of protecting the environment, BSEE created an environmental oversight division with region-based staff reporting directly to the headquarters-based division chief instead of regional management.
This reporting structure was to help ensure that environmental issues received appropriate weight and consideration within the bureau. Under the restructuring, since February 2015, field-based environmental compliance staff again report to their regional directors. BSEE’s rationale for this action is unclear, as it was not documented or analyzed as part of the bureau’s restructuring planning. Under federal standards for internal control, management is to assess the risks posed by external and internal sources and decide what actions to take to mitigate them. Without assessing the risk of reversing its reporting structure, Interior cannot be sure that BSEE will have reasonable assurance that environmental issues are receiving the appropriate weight and consideration, as called for by post–Deepwater Horizon incident investigations. When we reviewed BSEE’s environmental compliance program, we found that the interagency agreements between Interior and EPA designed to coordinate water quality monitoring under the National Pollutant Discharge Elimination System were decades old. According to BSEE annual environmental compliance activity reports, the agreements may not reflect the agency’s current resources and needs. For example, a 1989 agreement stipulates that Interior shall inspect no more than 50 facilities on behalf of EPA per year, and shall not conduct water sampling on behalf of EPA. Almost 30 years later, after numerous changes in drilling practices and technologies, it is unclear whether inspecting no more than 50 facilities per year is sufficient to monitor water quality. Nevertheless, senior BSEE officials told us that the bureau has no plans to update its agreements with EPA, and some officials said that a previous headquarters-led effort to update the agreements was not completed because it did not sufficiently describe the bureau’s offshore oil and gas responsibilities. 
According to Standards for Internal Control in the Federal Government, as programs change and agencies strive to improve operational processes and adopt new technologies, management officials must continually assess and evaluate internal controls to ensure that control activities are effective and updated when necessary. BSEE’s ongoing restructuring has made limited progress in enhancing its enforcement capabilities. In particular, BSEE has not developed procedures with criteria to guide how it uses enforcement tools—such as warnings and fines—which are among the goals of BSEE’s restructuring, according to planning documents, and consistent with federal standards for internal control. BSEE restructuring plans state that the current lack of criteria causes BSEE to act inconsistently, which makes oil and gas industry operators uncertain about BSEE’s oversight approach and expectations. The absence of enforcement criteria is a long-standing deficiency. For example, post–Deepwater Horizon incident investigations recommended BSEE assess its enforcement tools and how to employ them to deter safety and environmental violations. Without developing procedures with defined criteria for taking enforcement actions, BSEE continues to face risks to the effectiveness of its enforcement capabilities. To enhance Interior’s oversight of oil and gas development, we recommended in February 2016 that the Secretary of the Interior direct the Director of BSEE to take the following nine actions as it continues to restructure. To address risks to the effectiveness of BSEE’s investigations, environmental compliance, and enforcement capabilities, we recommended that BSEE complete policies outlining the responsibilities of investigations, environmental compliance, and enforcement programs, and update and develop procedures to guide them.
To enhance its investigative capabilities, we recommended that BSEE establish a capability to review investigation policy and collect and analyze incidents to identify trends in safety and environmental hazards; develop a plan with milestones for implementing the case management system for investigations; clearly communicate the purpose of BSEE’s investigations program to industry operators; and clarify policies and procedures for assigning panel investigation membership and referring cases of suspected criminal wrongdoing to the Inspector General. To enhance its environmental compliance capabilities, we recommended that BSEE conduct and document a risk analysis of the regional-based reporting structure of its Environmental Compliance Division, including actions to mitigate any identified risks; coordinate with the Administrator of the Environmental Protection Agency to consider the relevance of existing interagency agreements for monitoring operator compliance with National Pollutant Discharge Elimination System permits on the Outer Continental Shelf and, if necessary, update agreements to reflect current oversight needs; and develop a plan to address documented environmental oversight staffing needs. To enhance its enforcement capabilities, we recommended that BSEE develop a mechanism to ensure that it reviews the maximum daily civil penalty and adjusts it to reflect changes in the Consumer Price Index within the time frames established by statute. In its written comments, Interior agreed that additional reforms—such as documented policies and procedures—are needed to address offshore oil and gas oversight deficiencies, but Interior neither agreed nor disagreed with our specific recommendations. Additional information on Management of Federal Oil and Gas Resources is provided on page 136 of the high-risk report.

Managing Risks and Improving VA Health Care.
Since we added Department of Veterans Affairs (VA) health care to our High-Risk List in 2015, VA has acknowledged the significant scope of the work that lies ahead in each of the five areas of concern we identified: (1) ambiguous policies and inconsistent processes; (2) inadequate oversight and accountability; (3) information technology (IT) challenges; (4) inadequate training for VA staff; and (5) unclear resource needs and allocation priorities. It is imperative that VA maintain strong leadership support, and as the new administration sets its priorities, VA will need to integrate those priorities with its high-risk related actions. VA developed an action plan for addressing its high-risk designation, but the plan describes many planned outcomes with overly ambitious deadlines for completion. We are concerned about the lack of root cause analyses for most areas of concern, and the lack of clear metrics and needed resources for achieving stated outcomes. In addition, with the increased use of community care programs, it is imperative that VA’s action plan discuss the role of community care in decisions related to policies, oversight, IT, training, and resource needs. Finally, to help address its high-risk designation, VA should continue to implement our recommendations, as well as recommendations from others. While VA’s leadership has increased its focus on implementing our recommendations in the last 2 years, additional work is needed. We made 66 VA health care-related recommendations in products issued since the VA health care high-risk designation in February 2015, for a total of 244 recommendations from January 1, 2010, through December 31, 2016. VA has implemented 122 (about 50 percent) of the 244 recommendations, but over 100 recommendations remain open as of December 31, 2016 (with about 25 percent being open for 3 or more years). It is critical that VA implement our recommendations in a timely manner.
Additional information on Managing Risks and Improving VA Health Care is provided on page 627 of the report.

DOD Financial Management. The effects of DOD’s financial management problems extend beyond financial reporting and negatively affect DOD’s ability to manage the department and make sound decisions on mission and operations. In addition, DOD remains one of the few federal entities that cannot demonstrate its ability to accurately account for and reliably report its spending or assets. DOD’s financial management problems continue as one of three major impediments preventing us from expressing an opinion on the consolidated financial statements of the federal government. Sustained leadership commitment will be critical to DOD’s success in achieving financial accountability, and in providing reliable information for day-to-day management decision making as well as financial audit readiness. DOD needs to assure the sustained involvement of leadership at all levels of the department in addressing financial management reform and business transformation. In addition, further action is needed in the areas of capacity and action planning. Specifically, DOD needs to continue building a workforce with the level of training and experience needed to support and sustain sound financial management; continue to develop and deploy enterprise resource planning systems as a critical component of DOD’s financial improvement and audit readiness strategy, as well as strengthen automated controls or design manual workarounds for the remaining legacy systems to satisfy audit requirements and improve data used for day-to-day decision making; and effectively implement its Financial Improvement and Audit Readiness Plan and related guidance to focus on strengthening processes, controls, and systems to improve the accuracy, reliability, and reporting for its priority areas, including budgetary information and mission-critical assets.
Further, DOD needs to monitor and assess the progress the department is making to remediate its internal control deficiencies. DOD should (1) require the military services to improve their policies and procedures for monitoring their corrective action plans for financial management-related findings and recommendations, and (2) improve its process for monitoring the military services’ audit remediation efforts by preparing a consolidated management summary that provides a comprehensive picture of the status of corrective actions throughout the department. DOD is continuing to work toward undergoing a full financial statement audit by fiscal year 2018; however, it expects to receive disclaimers of opinion on its financial statements for a number of years. A lack of comprehensive information on the corrective action plans limits the ability of DOD and Congress to evaluate DOD’s progress toward achieving audit readiness, especially given the short amount of time remaining before DOD is required to undergo an audit of the department-wide financial statements for fiscal year 2018. Being able to demonstrate progress in remediating its financial management deficiencies will be useful as the department works toward implementing lasting financial management reform to ensure that it can generate reliable, useful, and timely information for financial reporting as well as for decision making and effective operations. Moreover, stronger financial management would show DOD’s accountability for funds and would help it operate more efficiently. Additional information on DOD Financial Management is provided on page 280 of the high-risk report.

Modernizing the U.S. Financial Regulatory System and the Federal Role in Housing Finance. Resolving the role of the federal government in housing finance will require leadership commitment and action by Congress and the administration.
The federal government has directly or indirectly supported more than two-thirds of the value of new mortgage originations in the single-family housing market since the beginning of the 2007-2009 financial crisis. Mortgages with federal support include those backed by Fannie Mae and Freddie Mac, two large government-sponsored enterprises (the enterprises). Out of concern that their deteriorating financial condition threatened the stability of financial markets, the Federal Housing Finance Agency (FHFA) placed the enterprises into federal conservatorship in 2008, creating an explicit fiscal exposure for the federal government. As of September 2016, the Department of the Treasury (Treasury) had provided about $187.5 billion in funds as capital support to the enterprises, with an additional $258.1 billion available to the enterprises should they need further assistance. In accordance with the terms of agreements with Treasury, the enterprises had paid dividends to Treasury totaling about $250.5 billion through September 2016. More than 8 years after entering conservatorship, the enterprises’ futures remain uncertain and billions of federal dollars remain at risk. The enterprises have a reduced capacity to absorb future losses due to a capital reserve amount that falls to $0 by 2018. Without a capital reserve, any quarterly losses—including those due to market fluctuations and not necessarily to economic conditions—would require the enterprises to draw additional funds from Treasury. Additionally, prolonged conservatorships and a change in leadership at FHFA could shift priorities for the conservatorships, which in turn could send mixed messages and create uncertainties for market participants and hinder the development of the broader secondary mortgage market. 
For this reason, we said in November 2016 that Congress should consider legislation establishing objectives for the future federal role in housing finance, including the structure of the enterprises, and a transition plan to a reformed housing finance system that enables the enterprises to exit conservatorship. The federal government also supports mortgages through insurance or guarantee programs, the largest of which is administered by the Department of Housing and Urban Development’s Federal Housing Administration (FHA). During the financial crisis, FHA served its traditional role of helping to stabilize the housing market, but also experienced financial difficulties from which it only recently recovered. Maintaining FHA’s long-term financial health and defining its future role also will be critical to any effort to overhaul the housing finance system. We previously recommended that Congress or FHA specify the economic conditions that FHA’s Mutual Mortgage Insurance Fund would be expected to withstand without requiring supplemental funds. As evidenced by the $1.68 billion FHA received in 2013, the current 2 percent capital requirement for FHA’s fund may not always be adequate to avoid the need for supplemental funds under severe stress scenarios. Implementing our recommendation would be an important step not only in addressing FHA’s long-term financial viability, but also in clarifying FHA’s role. Additional information on Modernizing the U.S. Financial Regulatory System and the Federal Role in Housing Finance is provided on page 107 of the report.

Pension Benefit Guaranty Corporation Insurance Programs. The Pension Benefit Guaranty Corporation (PBGC) is responsible for insuring the defined benefit pension plans of nearly 40 million American workers and retirees who participate in nearly 24,000 private sector plans.
PBGC faces an uncertain financial future due, in part, to a long-term decline in the number of traditional defined benefit plans and the collective financial risk of the many underfunded pension plans that PBGC insures. PBGC’s financial portfolio is one of the largest of all federal government corporations and, at the end of fiscal year 2016, PBGC’s net accumulated financial deficit was over $79 billion—having more than doubled since fiscal year 2013. PBGC has estimated that, without additional funding, its multiemployer insurance program will likely be exhausted by 2025 as a result of current and projected pension plan insolvencies. The agency’s single-employer insurance program is also at risk due to the continuing decline of traditional defined benefit pension plans, increased financial risk, and reduced premium payments. While Congress and PBGC have taken significant and positive steps to strengthen the agency over recent years, challenges related to PBGC’s funding and governance structure remain. Addressing the significant financial risk and governance challenges that PBGC faces requires additional congressional action. To improve the long-term financial stability of PBGC’s insurance programs, Congress should consider: (1) authorizing a redesign of PBGC’s single-employer program premium structure to better align rates with sponsor risk; (2) adopting additional changes to PBGC’s governance structure—in particular, expanding the composition of its board of directors; (3) strengthening funding requirements for plan sponsors as appropriate given national economic conditions; (4) working with PBGC to develop a strategy for funding PBGC claims over the long term, as the defined benefit pension system continues to decline; and (5) enacting additional structural reforms to reinforce and stabilize the multiemployer system that balance the needs and potential sacrifices of contributing employers, participants, and the federal government.
Absent additional steps to improve PBGC’s finances, the long-term financial stability of the agency remains uncertain and the retirement benefits of millions of American workers and retirees could be at risk of dramatic reductions. Additional information on Pension Benefit Guaranty Corporation Insurance Programs is provided on page 609 of the report. Ensuring the Security of Federal Information Systems and Cyber Critical Infrastructure and Protecting the Privacy of Personally Identifiable Information. Federal agencies and our nation’s critical infrastructures—such as energy, transportation systems, communications, and financial services—are dependent on computerized (cyber) information systems and electronic data to carry out operations and to process, maintain, and report essential information. The security of these systems and data is vital to public confidence and the nation’s safety, prosperity, and well-being. However, safeguarding the computer systems and data that support the federal government and the nation’s critical infrastructure remains a long-standing concern. We first designated information security as a government-wide high-risk area in 1997. This high-risk area was expanded to include the protection of critical cyber infrastructure in 2003 and protecting the privacy of personally identifiable information (PII) in 2015. Ineffective protection of cyber assets can facilitate security incidents and cyberattacks that disrupt critical operations; lead to inappropriate access to and disclosure, modification, or destruction of sensitive information; and threaten national security, economic well-being, and public health and safety. In addition, the increasing sophistication of hackers and others with malicious intent, and the extent to which both federal agencies and private companies collect sensitive information about individuals, have increased the risk of PII being exposed and compromised. 
Over the past several years, we have made about 2,500 recommendations to agencies aimed at improving the security of federal systems and information. These recommendations would help agencies strengthen technical security controls over their computer networks and systems, fully implement aspects of their information security programs, and protect the privacy of PII held on their systems. As of October 2016, about 1,000 of our information security-related recommendations had not been implemented. In addition, the federal government needs, among other things, to improve its abilities to detect, respond to, and mitigate cyber incidents; expand efforts to protect cyber critical infrastructure; and oversee the protection of PII. Additional information on Ensuring the Security of Federal Information Systems and Cyber Critical Infrastructure and Protecting the Privacy of Personally Identifiable Information is provided on page 338 of the report. For 2017, we are adding three new areas to the High-Risk List. We, along with inspectors general, special commissions, and others, have reported that federal agencies have ineffectively administered Indian education and health care programs, and inefficiently fulfilled their responsibilities for managing the development of Indian energy resources. In particular, we have found numerous challenges facing Interior’s Bureau of Indian Education (BIE) and Bureau of Indian Affairs (BIA) and the Department of Health and Human Services’ (HHS) Indian Health Service (IHS) in administering education and health care services, which put the health and safety of American Indians served by these programs at risk. These challenges included poor conditions at BIE school facilities that endangered students, and inadequate oversight of health care that hindered IHS’s ability to ensure quality care to Indian communities. 
In addition, we have reported that BIA mismanages Indian energy resources held in trust and thereby limits opportunities for tribes and their members to use those resources to create economic benefits and improve the well-being of their communities. Congress recently noted, “through treaties, statutes, and historical relations with Indian tribes, the United States has undertaken a unique trust responsibility to protect and support Indian tribes and Indians.” In light of this unique trust responsibility and concerns about the federal government ineffectively administering Indian education and health care programs and mismanaging Indian energy resources, we are adding these programs as a high-risk issue because they uniquely affect tribal nations and their members. Federal agencies have performed poorly in the following broad areas: (1) oversight of federal activities; (2) collaboration and communication; (3) federal workforce planning; (4) equipment, technology, and infrastructure; and (5) federal agencies’ data. While federal agencies have taken some actions to address the 41 recommendations we made related to Indian programs, there are currently 39 that have yet to be fully resolved. We plan to continue monitoring federal efforts in these areas. To this end, we have ongoing work focusing on accountability for safe schools and school construction, and tribal control of energy delivery, management, and resource development. Education: We have identified weaknesses in how Indian Affairs oversees school safety and construction and in how it monitors the way schools use Interior funds. We have also found limited workforce planning in several key areas related to BIE schools. Moreover, aging BIE school facilities and equipment contribute to degraded and unsafe conditions for students and staff. Finally, a lack of internal controls and other weaknesses hinder Indian Affairs’ ability to collect complete and accurate information on the physical conditions of BIE schools. 
In the past 3 years, we issued three reports on challenges with Indian Affairs’ management of BIE schools in which we made 13 recommendations. Eleven of these recommendations remain open. To help ensure that BIE schools provide safe and healthy facilities for students and staff, we made four recommendations which remain open, including that Indian Affairs ensure the inspection information it collects on BIE schools is complete and accurate; develop a plan to build schools’ capacity to promptly address safety and health deficiencies; and consistently monitor whether BIE schools have established required safety committees. To help ensure that BIE conducts more effective oversight of school spending, we made four recommendations which remain open, including that Indian Affairs develop a workforce plan to ensure that BIE has the staff to effectively oversee school spending; put in place written procedures and a risk-based approach to guide BIE in overseeing school spending; and improve information sharing to support the oversight of BIE school spending. To help ensure that Indian Affairs improves how it manages Indian education, we made five recommendations. Three recommendations remain open, including that Indian Affairs develop a strategic plan for BIE that includes goals and performance measures for how its offices are fulfilling their responsibilities to provide BIE with support; revise Indian Affairs’ strategic workforce plan to ensure that BIA regional offices have an appropriate number of staff with the right skills to support BIE schools in their regions; and develop and implement decision-making procedures for BIE to improve accountability for BIE schools. Health Care: IHS provides inadequate oversight of health care, both at its federally operated facilities and through the Purchased/Referred Care (PRC) program. 
Other issues include ineffective collaboration—specifically, IHS does not require its area offices to inform IHS headquarters if they distribute funds to local PRC programs using different criteria than the PRC allocation formula suggested by headquarters. As a result, IHS may be unaware of additional funding variation across areas. We have also reported that IHS officials told us that an insufficient workforce was the biggest impediment to ensuring patients could access timely primary care. In the past 6 years, we have made 12 recommendations related to Indian health care that remain open. Although IHS has taken several actions in response to our recommendations, such as improving the data collected for the PRC program and adopting Medicare-like rates for nonhospital services, much more needs to be done. To help ensure that Indian people receive quality health care, the Secretary of HHS should direct the Director of IHS to take the following two actions: (1) as part of implementing IHS’s quality framework, ensure that agency-wide standards for the quality of care provided in its federally operated facilities are developed, and systematically monitor facility performance in meeting these standards over time; and (2) develop contingency and succession plans for replacing key personnel, including area directors. To help ensure that timely primary care is available and accessible to Indians, IHS should: (1) develop and communicate specific agency-wide standards for wait times in federally operated facilities, and (2) monitor patient wait times in federally operated facilities and ensure that corrective actions are taken when standards are not met. 
To help ensure that IHS has meaningful information on the timeliness with which it issues purchase orders authorizing payment under the PRC program, and to improve the timeliness of payments to providers, we recommended that IHS: (1) modify IHS’s claims payment system to separately track IHS referrals and self-referrals, revise Government Performance and Results Act measures for the PRC program so that they distinguish between these two types of referrals, and establish separate time frame targets for these referral types; and (2) better align PRC staffing levels and workloads by revising its current practices, where available, used to pay for PRC program staff. In addition, as HHS and IHS monitor the effect that new coverage options available to IHS beneficiaries through PPACA have on PRC funds, we recommended that IHS concurrently develop potential options to streamline requirements for program eligibility. To help ensure successful outreach efforts regarding PPACA coverage expansions, we recommended that IHS realign current resources and personnel to increase capacity to deal with enrollment in Medicaid and the exchanges, and prepare for increased billing to these payers. If payments for physician and other nonhospital services are capped, we recommended that IHS monitor patient access to these services. To help ensure a more equitable allocation of funds per capita across areas, we recommended that Congress consider requiring IHS to develop and use a new method for allocating PRC funds. To develop more accurate data for estimating the funds needed for the PRC program and improve IHS oversight, we recommended that IHS develop a written policy documenting how it evaluates the need for the PRC program, and disseminate it to area offices so they understand how unfunded services data are used to estimate overall program needs. 
We also recommended that IHS develop written guidance for PRC programs outlining a process to use when funds are depleted but recipients continue to need services. Energy: We have reported on issues with BIA oversight of federal activities, such as the length of time it takes the agency to review energy-related documents. We also reported on challenges with collaboration—in particular, while working to form an Indian Energy Service Center, BIA did not coordinate with key regulatory agencies, including the Department of the Interior’s Fish and Wildlife Service, the U.S. Army Corps of Engineers, and the Environmental Protection Agency. In addition, we found that workforce planning issues at BIA contribute to management shortcomings that have hindered Indian energy development. Lastly, we found issues with outdated and deteriorating equipment, technology, and infrastructure, as well as incomplete and inaccurate data. In the past 2 years, we issued three reports on developing Indian energy resources in which we made 14 recommendations to BIA. All recommendations remain open. To help ensure that BIA can verify ownership in a timely manner and identify resources available for development, we made two recommendations, including that Interior take steps to improve its geographic information system mapping capabilities. To help ensure that BIA’s review process is efficient and transparent, we made two recommendations, including that Interior take steps to develop a documented process to track review and response times for energy-related documents that must be approved before tribes can develop energy resources. To help improve the clarity of tribal energy resource agreement regulations, we recommended that BIA provide additional guidance to tribes on provisions that tribes have identified to Interior as unclear. 
To help ensure that BIA streamlines the review and approval process for revenue-sharing agreements, we made three recommendations, including that Interior establish time frames for the review and approval of Indian revenue-sharing agreements for oil and gas, and establish a system for tracking and monitoring the review and approval process to determine whether time frames are met. To help improve efficiencies in the federal regulatory process, we made four recommendations, including that BIA take steps to coordinate with other regulatory agencies so the Service Center can serve as a single point of contact or lead agency to navigate the regulatory process. To help ensure that BIA has a workforce with the right skills, appropriately aligned to meet the agency’s goals and tribal priorities, we made two recommendations, including that BIA establish a documented process for assessing BIA’s workforce composition at agency offices. Congressional Actions Needed: It is critical that Congress maintain its focus on improving the effectiveness with which federal agencies meet their responsibilities to serve tribes and their members. Since 2013, we have testified at six hearings addressing significant weaknesses we found in the federal management of programs that serve tribes and their members. Sustained congressional attention to these issues will highlight the challenges discussed here and could facilitate federal actions to improve Indian education and health care programs, and the development of Indian energy resources. See pages 200-219 of the high-risk report for additional details on what we found. The federal government’s environmental liability has been growing for the past 20 years and is likely to continue to increase. For fiscal year 2016, the federal government’s estimated environmental liability was $447 billion—up from $212 billion for fiscal year 1997. However, this estimate does not reflect all of the future cleanup responsibilities facing federal agencies. 
Because of the lack of complete information and the often inconsistent approach to making cleanup decisions, federal agencies cannot always address their environmental liabilities in ways that maximize the reduction of health and safety risks to the public and the environment in a cost-effective manner. The federal government is financially liable for cleaning up areas where federal activities have contaminated the environment. Various federal laws, agreements with states, and court decisions require the federal government to clean up environmental hazards at federal sites and facilities—such as nuclear weapons production facilities and military installations. Such sites are contaminated by many types of waste, much of which is highly hazardous. Federal accounting standards require agencies responsible for cleaning up contamination to estimate future cleanup and waste disposal costs, and to report such costs in their annual financial statements as environmental liabilities. Per federal accounting standards, federal agencies’ environmental liability estimates are to include probable and reasonably estimable costs of cleanup work. Federal agencies’ environmental liability estimates do not include cost estimates for work for which reasonable estimates cannot currently be generated. Consequently, the ultimate cost of addressing the U.S. government’s environmental cleanup is likely greater than $447 billion. Federal agencies’ approaches to addressing their environmental liabilities and cleaning up the contamination from past activities are often influenced by numerous site-specific factors, stakeholder agreements, and legal provisions. We have also found that some agencies do not take a holistic, risk-informed approach to environmental cleanup that aligns limited funds with the greatest risks to human health and the environment. Since 1994, we have made at least 28 recommendations related to addressing the federal government’s environmental liability. 
These include 22 recommendations to the Departments of Energy (DOE) or Defense (DOD), 1 recommendation to OMB to consult with Congress on agencies’ environmental cleanup costs, and 4 recommendations to Congress to change the laws governing cleanup activities. Of these, 13 recommendations remain unimplemented. If implemented, these steps would improve the completeness and reliability of the estimated costs of future cleanup responsibilities, and lead to more risk-based management of the cleanup work. Of the federal government’s estimated $447 billion environmental liability, DOE is responsible for by far the largest share of the liability, and DOD is responsible for the second largest share. The rest of the federal government makes up the remaining 3 percent of the liability, with agencies such as the National Aeronautics and Space Administration (NASA) and the Departments of Transportation, Veterans Affairs, Agriculture (USDA), and Interior holding large liabilities (see figure 2). Agencies spend billions each year on environmental cleanup efforts, but the estimated environmental liability continues to rise. For example, despite billions spent on environmental cleanup, DOE’s environmental liability has roughly doubled from a low of $176 billion in fiscal year 1997 to the fiscal year 2016 estimate of $372 billion. In the last 6 years alone, DOE’s Office of Environmental Management (EM) has spent $35 billion, primarily to treat and dispose of nuclear and hazardous waste, and construct capital asset projects to treat the waste; however, EM’s portion of the environmental liability has grown over this same time period by over $90 billion, from $163 billion to $257 billion (see figure 3). Progress in addressing the U.S. government’s environmental liabilities depends on how effectively federal departments and agencies set priorities, under increasingly restrictive budgets, that maximize the risk reduction and cost-effectiveness of cleanup approaches. 
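The relationships among the liability figures cited above can be verified with a short calculation. All dollar amounts below come from the text; the percentage share and growth figures are derived from them.

```python
# Back-of-the-envelope check of the FY2016 environmental liability figures
# discussed above. Dollar amounts are in billions, taken from the text;
# shares and growth factors are computed here.
total_liability = 447.0   # FY2016 government-wide estimate
doe_liability = 372.0     # DOE, by far the largest share

doe_share = doe_liability / total_liability
print(f"DOE share of the total: {doe_share:.0%}")   # 83%

# DOE's liability roughly doubled from its FY1997 low of $176 billion.
doe_1997 = 176.0
print(f"DOE growth factor since FY1997: {doe_liability / doe_1997:.2f}x")

# EM's portion grew by over $90 billion in 6 years despite $35 billion spent.
em_growth = 257.0 - 163.0
print(f"EM liability growth over 6 years: ${em_growth:.0f} billion")
```

These checks confirm the report's figures are internally consistent: DOE alone accounts for roughly 83 percent of the total, and EM's portion grew by $94 billion even as it spent $35 billion on cleanup.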
As a first step, some departments and agencies may need to improve the completeness of information about long-term cleanup responsibilities and their associated costs so that decision makers, including Congress, can consider the full scope of the federal government’s cleanup obligations. As a next step, certain departments, such as DOE, may need to change how they establish cleanup priorities. For example, DOE’s current practice of negotiating agreements with individual sites without considering other sites’ agreements or available resources may not ensure that limited resources will be allocated to reducing the greatest environmental risks or that costs will be minimized. We have recommended actions to federal agencies that, if implemented, would improve the completeness and reliability of the estimated costs of future cleanup responsibilities, and lead to more risk-based management of the cleanup work. These recommendations include the following. In 1994, we recommended that Congress amend certain legislation to require agencies to report annually on their progress in completing inventories of their potential hazardous waste sites and on their latest estimates of the total costs to clean up those sites. We believe these recommendations are as relevant, if not more so, today. In 2015, we recommended that USDA develop plans and procedures for completing its inventories of potentially contaminated sites. USDA disagreed with this recommendation. However, we continue to believe that USDA’s inventory of contaminated and potentially contaminated sites—in particular, abandoned mines, primarily on Forest Service land—is insufficient for effectively managing USDA’s overall cleanup program. Interior is also faced with an incomplete inventory of abandoned mines that it is working to improve. 
In 2006, we recommended that DOD develop, document, and implement a program for financial management review, assessment, and monitoring of the processes for estimating and reporting environmental liabilities. This recommendation has not been implemented. We have found in the past that DOE’s cleanup strategy is not risk based and should be re-evaluated. DOE’s decisions are often driven by local stakeholders and certain requirements in federal facilities agreements and consent decrees. In 1995, we recommended that DOE set national priorities for cleaning up its contaminated sites using data gathered during ongoing risk evaluations. This recommendation has not been implemented. In 2003, we recommended that DOE ask Congress to clarify its authority for designating certain waste with relatively low levels of radioactivity as waste incidental to reprocessing, and therefore not managed as high-level waste. In 2004, DOE received this specific authority from Congress for the Savannah River and Idaho Sites, thereby allowing DOE to save billions of dollars in waste treatment costs. The law, however, excluded the Hanford Site. More recently, in 2015, we found that DOE is not comprehensively integrating risks posed by the National Nuclear Security Administration’s (NNSA) nonoperational contaminated facilities with EM’s portfolio of cleanup work. By not integrating nonoperational facilities from NNSA, EM is not providing Congress with complete information about EM’s current and future cleanup obligations as Congress deliberates annually about appropriating funds for cleanup activities. We recommended that DOE integrate its lists of facilities prioritized for disposition with all NNSA facilities that meet EM’s transfer requirements, and that EM include this integrated list as part of the Congressional Budget Justification for DOE. DOE neither agreed nor disagreed with this recommendation. See pages 232-247 of the high-risk report for additional details on what we found. 
One of the most important functions of the U.S. Census Bureau (Bureau) is conducting the decennial census of the U.S. population, which is mandated by the Constitution and provides vital data for the nation. This information is used to apportion the seats of the U.S. House of Representatives; realign the boundaries of the legislative districts of each state; allocate billions of dollars in federal financial assistance; and provide social, demographic, and economic profiles of the nation’s people to guide policy decisions at each level of government. A complete count of the nation’s population is an enormous challenge as the Bureau seeks to control the cost of the census while it implements several new innovations and manages the processes of acquiring and developing new and modified IT systems supporting them. Over the past 3 years, we have made 30 recommendations to help the Bureau design and implement a more cost-effective census for 2020; however, only 6 of them had been fully implemented as of January 2017. The cost of the census, in terms of cost for counting each housing unit, has been escalating over the last several decennials. The 2010 Census was the costliest U.S. Census in history at about $12.3 billion, and was about 31 percent more costly than the $9.4 billion cost of the 2000 Census (in 2020 dollars). The average cost for counting a housing unit increased from about $16 in 1970 to around $92 in 2010 (in 2020 constant dollars). Meanwhile, the return of census questionnaires by mail (the primary mode of data collection) declined over this period from 78 percent in 1970 to 63 percent in 2010. Declining mail response rates—a key indicator of a cost-effective census—are significant and lead to higher costs. This is because the Bureau sends enumerators to each nonresponding household to obtain census data. As a result, nonresponse follow-up is the Bureau’s largest and most costly field operation. 
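The cost trends described above can be checked with simple arithmetic. All inputs below come from the text (in 2020 constant dollars); the growth percentages are computed from them.

```python
# Quick check of the decennial census cost figures cited above
# (2020 constant dollars). Inputs are from the text; growth is derived.
cost_2010 = 12.3   # billions, costliest U.S. census in history
cost_2000 = 9.4    # billions

increase = (cost_2010 - cost_2000) / cost_2000
print(f"2000 -> 2010 total cost increase: {increase:.0%}")   # 31%, as reported

# Per-housing-unit cost and mail response rate over the same period.
unit_cost_1970, unit_cost_2010 = 16, 92     # dollars per housing unit
mail_rate_1970, mail_rate_2010 = 0.78, 0.63

print(f"Per-unit cost growth, 1970 -> 2010: {unit_cost_2010 / unit_cost_1970:.1f}x")
decline = (mail_rate_1970 - mail_rate_2010) * 100
print(f"Mail response decline: {decline:.0f} percentage points")
```

The computed 31 percent increase matches the figure in the text, and the per-unit cost rising nearly sixfold while mail response fell 15 percentage points illustrates why nonresponse follow-up drives census costs.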
In many ways, the Bureau has had to invest substantially more resources each decade to match the results of prior enumerations. The Bureau plans to implement several new innovations in its design of the 2020 Census. In response to our recommendations regarding past decennial efforts and other assessments, the Bureau has fundamentally reexamined its approach for conducting the 2020 Census. Its plan for 2020 includes four broad innovation areas that it believes will save it over $5 billion (2020 constant dollars) when compared to what it estimates conducting the census with traditional methods would cost. The Bureau’s innovations include (1) using the Internet as a self-response option, which the Bureau has never done on a large scale before; (2) verifying most addresses using “in-office” procedures and on-screen imagery rather than street-by-street field canvassing; (3) re-engineering data collection methods such as by relying on an automated case management system; and (4) in certain instances, replacing enumerator collection of data with administrative records (information already provided to federal and state governments as they administer other programs). These innovations show promise for a more cost-effective head count. However, they also introduce new risks, in part, because they include new procedures and technology that have not been used extensively in earlier decennials, if at all. The Bureau is also managing the acquisition and development of new and modified IT systems, which add complexity to the design of the census. To help control census costs, the Bureau plans to significantly change the methods and technology it uses to count the population, such as offering an option for households to respond to the survey via the Internet or phone, providing mobile devices for field enumerators to collect survey data from households, and automating the management of field operations. 
This redesign relies on acquiring and developing many new and modified IT systems, which could add complexity to the design. These cost risks, new innovations, and acquisition and development of IT systems for the 2020 Census, along with other challenges we have identified in recent years, raise serious concerns about the Bureau’s ability to conduct a cost-effective enumeration. Based on these concerns, we concluded that the 2020 Census is a high-risk area and added it to the High-Risk List in 2017. To help the Bureau mitigate the risks associated with its fundamentally new and complex innovations for the 2020 Census, the commitment of top leadership is needed to ensure the Bureau’s management, culture, and business practices align with a cost-effective enumeration. For example, the Bureau needs to continue strategic workforce planning efforts to ensure it has the skills and competencies needed to support planning and executing the census. It must also rigorously test individual census-taking activities to provide information on their feasibility and performance, their potential for achieving desired results, and the extent to which they are able to function together under full operational conditions. We have recommended that the Bureau also ensure that its scheduling adheres to leading practices and be able to support a quantitative schedule risk assessment, such as by having all activities associated with the levels of resources and effort needed to complete them. The Bureau has stated that it has begun maturing project schedules to ensure that the logical relationships are in place and plans to conduct a quantitative risk assessment. We will continue to monitor the Bureau’s efforts. The Bureau must also improve its ability to manage, develop, and secure its IT systems. For example, the Bureau needs to prioritize its IT decisions and determine what information it needs in order to make those decisions. 
In addition, the Bureau needs to make key IT decisions for the 2020 Census in time to ensure it has the production systems in place to support the end-to-end system test. To this end, we recommended the Bureau ensure that the methodologies for answering the Internet response rate and IT infrastructure research questions are determined and documented in time to inform key design decisions. Further, given the numerous and critical dependencies between the Census Enterprise Data Collection and Processing and 2020 Census programs, their parallel implementation tracks, and the 2020 Census’s immovable deadline, we recommended that the Bureau establish a comprehensive and integrated list of all interdependent risks facing the two programs, and clearly identify roles and responsibilities for managing this list. The Bureau stated that it plans to take actions to address our recommendations. It is also critical for the Bureau to have better oversight and control over its cost estimation process, and we have recommended that the Bureau ensure its cost estimate is consistent with our leading practices. For example, the Bureau will need to, among other practices, document all cost-influencing assumptions; describe estimating methodologies used for each cost element; ensure that variances between planned and actual cost are documented, explained, and reviewed; and include a comprehensive sensitivity analysis, so that it can better estimate costs. We also recommended that the Bureau implement and institutionalize processes or methods for ensuring control over how risk and uncertainty are accounted for and communicated within its cost estimation process. The Bureau agreed with our recommendations, and we are currently conducting a follow-up audit of the Bureau’s most recent cost estimate to determine whether it has implemented them. Sustained congressional oversight will be essential as well. 
In 2015 and 2016, congressional committees held five hearings focusing on the progress of the Bureau’s preparations for the decennial. Going forward, active oversight will be needed to ensure these efforts stay on track, the Bureau has needed resources, and Bureau officials are held accountable for implementing the enumeration as planned. We will continue monitoring the Bureau’s efforts to conduct a cost-effective enumeration. To this end, we have ongoing work focusing on such topics as the Bureau’s updated lifecycle cost estimate and the readiness of IT systems for the 2018 End-to-End Test. See pages 219–231 of the high-risk report for additional details on what we found. After we remove areas from the High-Risk List we continue to monitor them, as appropriate, to determine if the improvements we have noted are sustained and whether new issues emerge. If significant problems again arise, we will consider reapplying the high-risk designation. DOD’s Personnel Security Clearance Program is one former high-risk area that we continue to closely monitor in light of government-wide reform efforts. The Office of the Director of National Intelligence (ODNI) estimates that approximately 4.2 million federal government and contractor employees held or were eligible to hold a security clearance as of October 1, 2015. Personnel security clearances provide personnel with access to classified information, the unauthorized disclosure of which could, in certain circumstances, cause exceptionally grave damage to national security. High profile security incidents, such as the disclosure of classified programs and documents by a National Security Agency contractor and the OPM data breach of 21.5 million records, demonstrate the continued need for high quality background investigations and adjudications, strong oversight, and a secure IT process, which have been areas of long-standing challenges for the federal government. 
In 2005, we designated the DOD personnel security clearance program as a high-risk area because of delays in completing background investigations and adjudications. We continued the high-risk designation in the 2007 and 2009 updates to our High-Risk List because of issues with the quality of investigation and adjudication documentation and because delays in the timely processing of security clearances continued. In our 2011 high-risk report, we removed DOD’s personnel security clearance program from the High-Risk List because DOD took actions to develop guidance to improve its adjudication process, develop and implement tools and metrics to assess quality of investigations and adjudications, and improve timeliness for processing clearances. We also noted that DOD continues to be a prominent player in the overall security clearance reform effort, which includes entities within OMB, OPM, and ODNI that comprise the Performance Accountability Council (PAC), the body that oversees security clearance reform. The executive branch has also taken steps to monitor its security clearance reform efforts. The GPRA Modernization Act of 2010 requires OMB to report through a website—performance.gov—on long-term cross-agency priority goals, which are outcome-oriented goals covering a limited number of crosscutting policy areas, as well as goals to improve management across the federal government. Among the cross-agency priority goals, the executive branch identified security clearance reform as one of the key areas it is monitoring. Since we removed DOD’s personnel security clearance program from the High-Risk List, the government’s overall reform efforts, which began after passage of the Intelligence Reform and Terrorism Prevention Act of 2004, have made mixed progress, and key reform efforts have not yet been implemented. 
In the aftermath of the June 2013 disclosure of classified documents by a former National Security Agency contractor and the September 2013 shooting at the Washington Navy Yard, OMB issued, in February 2014, the Suitability and Security Processes Review Report to the President, a 120-day review of the government’s processes for granting security clearances, among other things. The 120-day review resulted in 37 recommendations, 65 percent of which had been implemented as of October 2016, including the issuance of executive branch-wide quality assessment standards for investigations in January 2015. Additionally, the recommendations led to expanding DOD’s ability to continuously evaluate the continued eligibility of cleared personnel. However, other recommendations from the 120-day review have not yet been implemented. For example, the reform effort is still trying to fully implement the revised background investigation standards issued in 2012 and improve data sharing among local, state, and federal entities. In addition, the 120-day review further found that performance measures for investigative quality are neither standardized nor implemented consistently across the government, and that measuring and ensuring quality continues to be a challenge. The review contained three recommendations to address the development of quality metrics, but the PAC has only partially implemented those recommendations. We previously reported that the executive branch had developed some metrics to assess quality at different phases of the personnel security clearance process; however, those metrics had not been fully developed and implemented. The development of metrics to assess quality throughout the security clearance process has been a long-standing concern. Since the late 1990s, we have emphasized the need to build and monitor quality throughout the personnel security clearance process. 
In 2009, we again noted that clearly defined quality metrics can improve the security clearance process by enhancing oversight of the time required to process security clearances and the quality of the investigation and adjudicative decisions. We recommended that OMB provide Congress with results of metrics on comprehensive timeliness and the quality of investigations and adjudications. According to ODNI, in October 2016 the agency began implementing a Quality Assessment and Reporting Tool to document customer issues with background investigations. The tool will be used to report on the quality of 5 percent of each executive branch agency’s background investigations. ODNI officials stated that they plan to develop metrics in the future as data are gathered from the tool, but did not identify a completion date for these metrics. Separately, the National Defense Authorization Act (NDAA) for Fiscal Year 2017, among other things, requires DOD to institute a program to collect and maintain data and metrics on the background investigation process, in the context of developing a system for performance of background investigations. The PAC’s effort to fully address the 120-day review and our recommendations on establishing metrics on the quality of investigations, as well as DOD’s efforts to address the broader requirements in the NDAA for Fiscal Year 2017, remain open and will need to be a continued focus of the department moving forward in its effort to improve its management of the security clearance process. Further, in response to the 2015 OPM data breach, the PAC completed a 90-day review, which led to an executive order establishing the National Background Investigations Bureau within OPM to replace the Federal Investigative Services and transferring to DOD the responsibility to develop, maintain, and secure new IT systems for clearances. Additionally, the executive order made DOD a full principal member of the PAC. 
The executive order also directed the PAC to review authorities, roles, and responsibilities, including submitting recommendations related to revising, as appropriate, executive orders pertaining to security clearances. This effort is ongoing. In addition to addressing the quality of security clearances and other goals and recommendations outlined in the 120-day and 90-day reviews and the government’s cross-agency priority goals, the PAC has the added challenge of addressing recent changes that may result from the NDAA for Fiscal Year 2017. Specifically, section 951 of the act requires the Secretary of Defense to develop an implementation plan for the Defense Security Service to conduct background investigations for certain DOD personnel—presently conducted by OPM—after October 1, 2017. The Secretary of Defense must submit the plan to the congressional defense committees by August 1, 2017. The act also requires the Secretary of Defense and the Director of OPM to develop a plan by October 1, 2017, to transfer investigative personnel and contracted resources to DOD in proportion to the workload if the plan for DOD to conduct the background investigations were implemented. It is unknown whether these potential changes will affect recent clearance reform efforts. Given the history and inherent challenges of reforming the government-wide security clearance process, coupled with recent amendments to a governing executive order and potential changes arising from the NDAA for Fiscal Year 2017, we will continue reviewing critical functions for personnel security clearance reform and monitor the government’s implementation of key reform efforts. We have ongoing work assessing progress being made on the overall security clearance reform effort and in implementing a continuous evaluation process, a key reform effort considered important to improving the timeliness and quality of investigations. 
We anticipate issuing a report on the status of the government’s continuous evaluation process in the fall of 2017. Additionally, we have previously reported on the importance of securing federal IT systems and anticipate issuing a report in early 2017 that examines IT security at OPM and efforts to secure these types of critical systems. Continued progress in reforming personnel security clearances is essential in helping to ensure a federal workforce entrusted to protect U.S. government information and property, promote a safe and secure work environment, and enhance the U.S. government’s risk management approach. The high-risk assessment continues to be a top priority and we will maintain our emphasis on identifying high-risk issues across government and on providing insights and sustained attention to help address them, by working collaboratively with Congress, agency leaders, and OMB. As part of this effort, with the new administration and Congress in 2017 we hope to continue to participate in regular meetings with the incoming OMB Deputy Director for Management and with top agency officials to discuss progress in addressing high-risk areas. Such efforts have been critical for the progress that has been made. This high-risk update is intended to help inform the oversight agenda for the 115th Congress and to guide efforts of the administration and agencies to improve government performance and reduce waste and risks. Thank you, Chairman Johnson, Ranking Member McCaskill, and Members of the Committee. This concludes my testimony. I would be pleased to answer any questions. For further information on this testimony, please contact J. Christopher Mihm at [email protected] or (202) 512-6806. Contact points for the individual high-risk areas are listed in the report and on our high-risk website. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. This is a work of the U.S. 
government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The federal government is one of the world's largest and most complex entities: about $3.9 trillion in outlays in fiscal year 2016 funded a broad array of programs and operations. GAO's high-risk program identifies government operations with greater vulnerabilities to fraud, waste, abuse, and mismanagement or the need for transformation to address economy, efficiency, or effectiveness challenges. This biennial update describes the status of high-risk areas listed in 2015 and actions that are still needed to assure further progress, and identifies new high-risk areas needing attention by Congress and the executive branch. Solutions to high-risk problems potentially save billions of dollars, improve service to the public, and strengthen government performance and accountability. GAO uses five criteria to assess progress in addressing high-risk areas: (1) leadership commitment, (2) agency capacity, (3) an action plan, (4) monitoring efforts, and (5) demonstrated progress. Since GAO's last high-risk update, many of the 32 high-risk areas on the 2015 list have shown solid progress. Twenty-three high-risk areas, or two-thirds of all the areas, have met or partially met all five criteria for removal from the High-Risk List; 15 of these areas fully met at least one criterion. Progress has been possible through the concerted efforts of Congress and leadership and staff in agencies. For example, Congress enacted over a dozen laws since GAO's last report in February 2015 to help address high-risk issues. GAO removed 1 high-risk area on managing terrorism-related information, because significant progress had been made to strengthen how intelligence on terrorism, homeland security, and law enforcement is shared among federal, state, local, tribal, international, and private sector partners. Sufficient progress was made to remove segments of 2 areas related to supply chain management at the Department of Defense (DOD) and gaps in geostationary weather satellite data. 
Two high-risk areas expanded—DOD's polar-orbiting weather satellites and the Department of the Interior's restructuring of offshore oil and gas oversight. Several other areas need substantive attention, including VA health care, DOD financial management, ensuring the security of federal information systems and cyber critical infrastructure, resolving the federal role in housing finance, and improving the management of IT acquisitions and operations. GAO is adding 3 areas to the High-Risk List, bringing the total to 34: Management of Federal Programs That Serve Tribes and Their Members. GAO has reported that federal agencies, including the Department of the Interior's Bureaus of Indian Education and Indian Affairs and the Department of Health and Human Services' Indian Health Service, have ineffectively administered Indian education and health care programs and inefficiently developed Indian energy resources. Thirty-nine of 41 GAO recommendations on this issue remain unimplemented. U.S. Government's Environmental Liabilities. In fiscal year 2016 this liability was estimated at $447 billion (up from $212 billion in 1997). The Department of Energy is responsible for 83 percent of these liabilities and DOD for 14 percent. Agencies spend billions each year on environmental cleanup efforts, but the estimated environmental liability continues to rise. Since 1994, GAO has made at least 28 recommendations related to this area; 13 are unimplemented. The 2020 Decennial Census. The cost of the census has been escalating over the last several decennials; the 2010 Census was the costliest U.S. Census in history at about $12.3 billion, about 31 percent more than the 2000 Census (in 2020 dollars). The U.S. Census Bureau (Bureau) plans to implement several innovations—including IT systems—for the 2020 Census. The challenge of successfully implementing these innovations, along with other challenges, puts at risk the Bureau's ability to conduct a cost-effective census. 
Since 2014, GAO has made 30 recommendations related to this area; however, only 6 have been fully implemented.

GAO's 2017 High-Risk List

This report contains GAO's views on progress made and what remains to be done to bring about lasting solutions for each high-risk area. Perseverance by the executive branch in implementing GAO's recommended solutions and continued oversight and action by Congress are essential to achieving greater progress.
The southwestern borderlands region contains many federally managed lands and also accounts for over 97 percent of all apprehensions of undocumented aliens by Border Patrol. Over 40 percent of the United States-Mexico border, or 820 linear miles, is managed by Interior’s land management agencies and the Forest Service. Each of these land management agencies (the Bureau of Land Management, the National Park Service, and the Fish and Wildlife Service within Interior, and the Forest Service within Agriculture) has a distinct mission and set of responsibilities, which are, respectively, managing federal land for multiple uses, such as recreation, minerals, and the sustained yield of renewable resources; conserving the scenery, natural and historical objects, and wildlife of the national park system; preserving and enhancing fish, wildlife, plants, and their habitats; and managing resources to sustain the health, diversity, and productivity of the nation’s forests and grasslands to meet the needs of present and future generations. Border Patrol is organized into nine sectors along the southwestern border. Within each sector, there are stations with responsibility for defined geographic areas. Of the 41 stations in the borderlands region in the 9 southwestern border sectors, 26 have primary responsibility for the security of federal lands, according to Border Patrol sector officials. Apprehensions of undocumented aliens along the southwestern border increased steadily through the late 1990s, reaching a peak of 1,650,000 in fiscal year 2000. Since fiscal year 2006, apprehensions have declined, reaching a low of 540,000 in fiscal year 2009. This decrease has occurred along the entire border, with every sector reporting fewer apprehensions in fiscal year 2009 than in fiscal year 2006. The Tucson Sector, however, with responsibility for central and eastern Arizona, continues to have the largest number of apprehensions. 
Border Patrol shares with land managers data on apprehensions and drug seizures occurring on federal land, providing such information in several ways, including in regularly occurring meetings and e-mailed reports. Border Patrol measures its effectiveness at detecting and apprehending undocumented aliens by assessing the border security status for a given area. The two highest border security statuses—”controlled” and “managed”—are levels at which Border Patrol claims the capability to consistently detect entries when they occur; identify what the entry is and classify its level of threat (such as who is entering, what the entrants are doing, and how many entrants there are); effectively and efficiently respond to the entry; and bring the situation to an appropriate law enforcement resolution, such as an arrest. Areas deemed either “controlled” or “managed” are considered by Border Patrol to be under “operational control.” The volume of undocumented aliens crossing federal lands along the southwestern border can overwhelm law enforcement and resource protection efforts by federal land managers, thus highlighting the need for Border Patrol’s presence on and near these lands, according to DHS and land management agency officials. The need for the presence of both kinds of agencies on these borderlands has prompted consultation among DHS, Interior, and Agriculture to facilitate coordination between Border Patrol and the land management agencies. The departments have a stated commitment to foster better communication and resolve issues and concerns linked to federal land use or resource management. When operating on federal lands, Border Patrol has responsibilities under several federal land management laws, including the National Environmental Policy Act of 1969, Wilderness Act of 1964, and Endangered Species Act of 1973. 
Under these laws, Border Patrol must obtain permission or a permit from federal land management agencies before its agents can undertake certain activities on federal lands, such as maintaining roads and installing surveillance equipment. Because the land management agencies are responsible for ensuring compliance with land management laws, Border Patrol and the land management agencies have developed several mechanisms to coordinate their responsibilities. The most comprehensive of these is a national-level agreement—a memorandum of understanding signed in 2006 by the secretaries of Homeland Security, the Interior, and Agriculture—intended to provide consistent principles to guide their agencies’ activities on federal lands. At the local level, Border Patrol and land management agencies have also coordinated their responsibilities through various local agreements. Under key federal land management laws, Border Patrol, like all federal agencies, must obtain permission or a permit from the appropriate federal land management agency to conduct certain activities—such as road maintenance—on federal lands. These land management laws include, but are not limited to, the following: National Environmental Policy Act of 1969. Enacted in 1970, the National Environmental Policy Act is intended to promote efforts that will prevent or eliminate damage to the environment, among other things. Section 102 requires federal agencies to evaluate the likely environmental effects of proposed projects using an environmental assessment or, if the projects would likely significantly affect the environment, a more detailed environmental impact statement evaluating the proposed project and alternatives. 
Environmental impact statements can be developed at either a programmatic level—where larger-scale, combined effects and cumulative effects can be evaluated and where overall management objectives, such as road access and use, are defined—or a project level, where the effects of a particular project in a specific place at a particular time are evaluated. If, however, the federal agency determines that activities of a proposed project fall within a category of activities the agency has already determined has no significant environmental effect—called a categorical exclusion—then the agency generally does not need to prepare an environmental assessment or an environmental impact statement. The agency may instead approve projects that fit within the relevant category by using one of the predetermined categorical exclusions, rather than preparing a project-specific environmental assessment or environmental impact statement. National Historic Preservation Act of 1966. The National Historic Preservation Act provides for the protection of historic properties—any prehistoric or historic district, site, building, structure, object, or property of traditional religious and cultural importance to an Indian tribe, included in, or eligible for inclusion in, the National Register of Historic Places. For all projects receiving federal funds or a federal permit, section 106 of the act requires federal agencies to take into account a project’s effect on any historic property. In accordance with regulations implementing the act, Border Patrol and land management agencies often incorporate compliance with the National Historic Preservation Act into their required evaluations of a project’s likely environmental effects under the National Environmental Policy Act. Thus, the agency or agencies must determine, by consulting with relevant federal, state, and tribal officials, whether a project or activity has the potential to affect historic properties. 
The purpose of the consultation is to identify historic properties affected by the project; assess the activity’s adverse effects on the historic properties; and seek ways to avoid, minimize, or mitigate any of those effects. Wilderness Act of 1964. The Wilderness Act of 1964 provides for federal lands to be designated as “wilderness areas,” which means that such lands are to be administered in such a manner as will leave them unimpaired for future use and enjoyment and provide for their protection and the preservation of their wilderness character, among other goals. If Border Patrol proposes to patrol or install surveillance equipment on federal land that has been designated as wilderness, the agency must comply with the requirements and restrictions of the Wilderness Act of 1964, other laws establishing a particular wilderness area, and the relevant federal land management agency’s regulations governing wilderness areas. Section 4 of the act prohibits the construction of temporary roads or structures, as well as the use of motor vehicles, motorized equipment, and other forms of mechanical transport in wilderness areas, unless such construction or use is necessary to meet the minimum requirements for administration of the area, including for emergencies involving health and safety. Generally, the land management agencies have regulations that address the emergency and administrative use of motorized equipment and installations in the wilderness areas they manage. For example, under Fish and Wildlife Service regulations, the agency may authorize Border Patrol to use a wilderness area and prescribe conditions under which motorized equipment, structures, and installations may be used to protect the wilderness, including emergencies involving damage to property and violations of laws. Endangered Species Act of 1973. The purpose of the Endangered Species Act is to conserve threatened and endangered species and the ecosystems upon which they depend. 
Under section 7 of the act, if Border Patrol or the land management agencies determine that an activity Border Patrol intends to authorize, fund, or carry out may affect an animal or plant species listed as threatened or endangered, it may initiate either an informal or a formal consultation with the Fish and Wildlife Service—which we refer to as a section 7 consultation—to ensure that its actions do not jeopardize the continued existence of such species or result in the destruction or adverse modification of its critical habitat. The agencies are to initiate informal consultation if they determine that an activity may affect—but is not likely to adversely affect—a listed species or critical habitat. To help implement key federal land management laws, Border Patrol and the land management agencies have developed several mechanisms to coordinate their responsibilities, including a national-level memorandum of understanding and local agreements. The national-level memorandum of understanding was signed in 2006 by the secretaries of Homeland Security, the Interior, and Agriculture and is intended to provide consistent principles to guide the agencies’ activities on federal lands along the U.S. borders. Such activities may include information sharing; placing and installing surveillance equipment, such as towers and underground sensors; using roads; providing Border Patrol with natural and cultural resource training; mitigating environmental impacts; and pursuing suspected undocumented aliens off road in wilderness areas. The memorandum also contains several provisions for resolving conflicts between Border Patrol and land managers, such as directing the agencies to resolve conflicts and delegate resolution authority at the lowest field operations level possible and to cooperate with each other to complete—in an expedited manner—all compliance that is required by applicable federal laws. 
We found several instances where Border Patrol stations and land management agencies have coordinated their responsibilities through use of this national-level memorandum of understanding. For example, Border Patrol and land managers in Arizona used the 2006 memorandum of understanding to set the terms for reporting Border Patrol off-road vehicle incursions in Organ Pipe Cactus National Monument, as well as for developing strategies for interdicting undocumented aliens closer to the border in the Cabeza Prieta National Wildlife Refuge and facilitating Border Patrol access in the San Bernardino National Wildlife Refuge. In addition, we found that guidance provided by the 2006 memorandum of understanding has facilitated local agreements between Border Patrol and land management agencies. For example, for the Coronado National Forest in Arizona, Border Patrol and the Forest Service developed a coordinated strategic plan that sets forth conditions for improving and maintaining roads and locating helicopter landing zones in wilderness areas, among other issues. We also found that several other mechanisms have been used to facilitate interagency coordination. For example, Border Patrol and Interior established interagency liaisons, who have responsibility for facilitating coordination among their agencies. Border Patrol’s Public Lands Liaison Agent program directs each Border Patrol sector to designate an agent dedicated to interacting with Interior, Agriculture, or other governmental or nongovernmental organizations involved in land management issues. The role of these designated agents is to foster better communication; increase interagency understanding of respective missions, objectives, and priorities; and serve as a central point of contact in resolving issues and concerns. 
Key responsibilities of these public lands liaison agents include implementing requirements of the 2006 memorandum of understanding and related agreements and monitoring any enforcement operations, issues, or activities related to federal land use or resource management. In addition, Interior established its own Southwest Border Coordinator, located at the Border Patrol Tucson Sector, to coordinate federal land management issues among Interior component agencies and with Border Patrol. The Forest Service also established a dedicated liaison position in the Tucson Sector to coordinate with Border Patrol, according to Forest Service officials. In addition to these liaison positions, a borderlands management task force provides an intergovernmental forum in the field for officials, including those from Border Patrol, the land management agencies, and other state and local governmental entities, to regularly meet and discuss challenges and opportunities for working together. The task force acts as a mechanism to address issues of security, safety, and resources among federal, tribal, state, and local governments located along the border. Border Patrol’s access has been limited on some federal lands along the southwestern border because of certain land management laws, according to patrol agents-in-charge in the borderlands region. Specifically, patrol agents-in-charge at 17 of the 26 stations that have primary responsibility for patrolling federal lands along the southwestern border reported that when they attempt to obtain a permit or permission to access portions of federal lands, delays and restrictions have resulted because they had to comply with land management laws. Despite these delays and restrictions, patrol agents-in-charge at 22 of the 26 Border Patrol stations reported that the border security status of their area of operation had not been affected by land management laws. 
Patrol agents-in-charge of 17 of 26 stations along the southwestern border reported that they have experienced delays and restrictions in patrolling and monitoring portions of federal lands because of various land management laws. Patrol agents-in-charge at 14 of the 26 Border Patrol stations along the southwestern border reported experiencing delays in getting a permit or permission from land managers to gain access to portions of federal land because of the time it took land managers to complete the requirements of the National Environmental Policy Act and the National Historic Preservation Act. These delays in gaining access had generally lessened agents’ ability to detect undocumented aliens in some areas, according to the patrol agents-in-charge. The 2006 memorandum of understanding directs the agencies to cooperate with each other to complete, in an expedited manner, all compliance required by applicable federal laws, but such cooperation has not always occurred, as shown in the following examples: Federal lands in Arizona. For the Border Patrol station responsible for patrolling certain federal lands in Arizona, the patrol agent-in-charge reported that it has routinely taken several months to obtain permission from land managers to move mobile surveillance systems. The patrol agent-in-charge told us that before permission can be granted, land managers generally must complete environmental and historic property assessments—as required by the National Environmental Policy and National Historic Preservation acts—on roads and sites needed for moving and locating such systems. For example, Border Patrol requested permission to move a mobile surveillance system to a certain area but by the time permission was granted—more than 4 months after the initial request—illegal traffic had shifted to other areas. 
As a result, Border Patrol was unable to move the surveillance system to the locale it desired, and during the 4-month delay, agents were limited in their ability to detect undocumented aliens within a 7-mile range that could have been covered by the system. The land manager for the federal land unit said that most of these lands and the routes through it have not had a historic property assessment, so when Border Patrol asks for approval to move equipment, such assessments must often be performed. Moreover, the federal land management unit has limited staff with numerous other duties. For example, the unit has few survey specialists who are qualified to perform environmental and historic property assessments. Thus, he explained, resources cannot always be allocated to meet Border Patrol requests in an expedited manner. Federal lands in New Mexico. In southwestern New Mexico, the patrol agents-in-charge of four Border Patrol stations reported that it may take 6 months or more to obtain permission from land managers to maintain and improve roads that Border Patrol needs on federal lands to conduct patrols and move surveillance equipment. According to one of these patrol agents-in-charge, for Border Patrol to obtain such permission from land managers, the land managers must ensure that environmental and historic property assessments are completed, which typically entails coordinating with three different land management specialists: a realty specialist to locate the site, a biologist to determine if there are any species concerns, and an archaeologist to determine if there are any historic sites. Coordinating schedules among these experts often takes a long time, according to a Border Patrol public-lands liaison. 
For example, one patrol agent-in-charge told us that a road in his jurisdiction needed to be improved to allow a truck to move an underground sensor, but the process for the federal land management agency to perform a historic property assessment and issue a permit for the road improvements took nearly 8 months. During this period, agents could not patrol in vehicles or use surveillance equipment to monitor an area that illegal aliens were known to use. The patrol agent-in-charge told us that performing such assessments on every road that might be used by Border Patrol would take substantial time and require assessing hundreds of miles of roads. According to federal land managers in the area, environmental and historic property specialists try to expedite support for Border Patrol as much as possible, but these specialists have other work they are committed to as well. Moreover, the office has not been provided additional funding to hire personnel who could be dedicated to expediting Border Patrol requests. For some of the stations, the delays patrol agents-in-charge reported could have been shortened if Border Patrol could have used its own resources to pay for, or perform, environmental and historic property assessments required by the National Environmental Policy Act and National Historic Preservation Act, according to patrol agents-in-charge and land managers with whom we spoke. On the Coronado National Forest, agency officials told us that Border Patrol and the Forest Service had entered into a cooperative agreement whereby in some situations Border Patrol pays for road maintenance and the necessary environmental and historic property assessments.
According to two patrol agents-in-charge, the development of the Coronado National Forest coordinated strategic plan has helped the agencies shorten the time it takes to begin road maintenance because it allows Border Patrol to use its resources and therefore begin environmental and historic property assessments sooner. The Coronado National Forest border liaison added that without this agreement, Forest Service would have been unable to meet Border Patrol’s road maintenance needs in a timely fashion. In other situations, using Border Patrol resources to pay for or perform road maintenance may not always expedite access; instead, land managers and Border Patrol officials told us that a programmatic environmental impact statement should be prepared under the National Environmental Policy Act to help expedite access. For example, some patrol agents-in-charge, such as those in southwestern New Mexico, told us that conducting environmental and historic property assessments on every road that agents might use, on a case-by-case basis, can take substantial time and require assessing hundreds, if not thousands, of miles of roads. Moreover, when agents request to move mobile surveillance systems, the request is often for moving such systems to a specific location, such as a 60-by-60-foot area on a hill. Some agents told us, however, that it takes a long time to obtain permission from land managers because environmental and historic property assessments must be performed on each specific site, as well as on the road leading to the site. As we stated earlier, National Environmental Policy Act regulations recognize that programmatic environmental impact statements—broad evaluations of the environmental effects of multiple Border Patrol activities, such as road use and technology installation, in a geographic area—could facilitate compliance with the act.
By completing a programmatic environmental impact statement, Border Patrol and land management agencies could then subsequently prepare narrower, site-specific statements or assessments of proposed Border Patrol activities on federal lands, such as on a mobile surveillance system site alone, thus potentially expediting access. In our October 2010 report, we recommended that to help expedite Border Patrol’s access to federal lands, the agencies should, when and where appropriate, (a) enter into agreements that provide for Border Patrol to use its own resources to pay for or to conduct the required environmental and historic property assessments and (b) prepare programmatic National Environmental Policy Act documents for Border Patrol activities in areas where additional access may be needed. The agencies concurred with this recommendation. Patrol agents-in-charge for three stations reported that agents’ access to some federal lands was limited because of restrictions in the Wilderness Act on building roads and installing infrastructure, such as surveillance towers, in wilderness areas. For these stations, the access restrictions lessen the effectiveness of agents’ patrol and monitoring operations. However, land managers may grant permission for such activities if they meet the regulatory requirements for emergency and administrative use of motorized equipment and installations in wilderness areas. Land managers responsible for two wilderness areas are working with Border Patrol agents to provide additional access as allowed by the regulations for emergency and administrative use. For example, at the Cabeza Prieta National Wildlife Refuge, Wilderness Act restrictions have limited the extent to which Border Patrol agents can use vehicles for patrols and technology resources to detect undocumented aliens. 
The patrol agent-in-charge told us that the refuge has few roads and having an additional east-west road closer to the border would give Border Patrol more options in using its mobile surveillance system to monitor significant portions of the refuge that are susceptible to undocumented-alien traffic. Additionally, the patrol agent-in-charge told us that better access could benefit the natural resources of the refuge because it could lead to more arrests closer to the border—instead of throughout the refuge—and result in fewer Border Patrol off-road incursions. The refuge manager agreed that additional Border Patrol access may result in additional environmental protection, and he is working with Border Patrol to develop a strategy at the refuge that would allow Border Patrol to detect and apprehend undocumented aliens closer to the border. Further, the refuge manager in February 2010 gave permission for Border Patrol to install an SBInet tower on the refuge, which may also help protect the wilderness area. On the other hand, a land manager responsible for the Organ Pipe wilderness area has denied some Border Patrol requests for additional access and determined that additional Border Patrol access would not necessarily improve protection of natural resources. The patrol agent-in-charge responsible for patrolling Organ Pipe told us that when Border Patrol proposed placing an SBInet tower within the monument to help enable agents to detect undocumented aliens in a 30-square-mile range, the land manager denied the request because the proposed site was in a designated wilderness area. Instead, Border Patrol installed the tower in an area within the monument that is owned by the state of Arizona. At this site, however, the tower has a smaller surveillance range and cannot cover about 3 miles where undocumented aliens are known to cross, according to the patrol agent-in-charge, thus lessening Border Patrol’s ability to detect entries compared with the originally proposed site.
In addition, the patrol agent-in-charge explained that because of the tower’s placement, when undocumented aliens are detected, agents have less time to apprehend them before they reach mountain passes, where it is easier to avoid detection. According to the land manager, Border Patrol did not demonstrate to him that the proposed tower site was critical, as compared with the alternative, and that agents’ ability to detect undocumented aliens would be negatively affected. Patrol agents-in-charge at five Border Patrol stations reported that as a result of consultations required by section 7 of the Endangered Species Act, agents have had to adjust the timing or specific locales of their ground and air patrols to minimize the patrols’ impact on endangered species and their critical habitats. Although some delays and restrictions have occurred, Border Patrol agents were generally able to adjust their patrols with little loss of effectiveness in their patrol operations. For example, for a Border Patrol station responsible for patrolling an area within the Coronado National Forest, the patrol agent-in-charge reported that a section 7 consultation placed restrictions on helicopter and vehicle access because of the presence of endangered species. Nevertheless, the patrol agent-in-charge told us the restrictions, which result in alternative flight paths, do not lessen the effectiveness of Border Patrol’s air operations. Moreover, according to the Forest Service District Ranger, since the area’s rugged terrain presents a constant threat to agents’ safety, Border Patrol agents have been allowed to use helicopters as needed, regardless of endangered species’ presence. In another instance, a patrol agent-in-charge told us that the Border Patrol wanted to improve a road within the area to provide better access, but because of the proposed project’s adverse effects on an endangered plant, road improvement could not be completed near a low point where water crossed the road.
Border Patrol worked with Forest Service officials to improve 3 miles of a Forest Service road up to the low point, but the crossing itself—about 8 feet wide—along with 1.2 miles of road east of it was not improved. According to the patrol agent-in-charge, agents still patrol the area but must drive vehicles slowly because of the road’s condition east of the low point. Similarly, for the Border Patrol station responsible for patrolling the San Bernardino National Wildlife Refuge, the patrol agent-in-charge told us that vehicle access has been restricted in the refuge because vehicle use can threaten the habitat of certain threatened and endangered species. Since establishment of the refuge in 1982, locked gates have been in place on the refuge’s administrative roads. But Border Patrol station officials told us that in the last several years, with the increase in the number of agents assigned to the station, they wanted to have vehicle access to the refuge. The terms for vehicle access had to be negotiated with the refuge manager, who agreed to place Border Patrol locks on refuge gates and to allow second-level Border Patrol supervisors, on a case-by-case basis, to determine whether vehicle access to the refuge is critical. If such a determination is made, a Border Patrol supervisor unlocks the gate and contacts refuge staff to inform them that access was granted through a specific gate. The patrol agent-in-charge told us that operational control has not been affected by these conditions for vehicle access. Despite the access delays and restrictions reported for 17 stations, most patrol agents-in-charge told us that the border security status of their jurisdictions has not been affected by land management laws. Instead, factors other than access delays or restrictions, such as the remoteness and ruggedness of the terrain or dense vegetation, have had the greatest effect on their abilities to achieve or maintain operational control.
While four patrol agents-in-charge reported that delays and restrictions resulting from compliance with land management laws had negatively affected their ability to achieve or maintain operational control, they had either not requested resources to facilitate increased or timelier access or had their requests denied by senior Border Patrol officials, who said that other needs were greater priorities for the station or sector. Patrol agents-in-charge at 22 of the 26 stations with jurisdiction for federal lands along the southwestern border told us that their ability to achieve or maintain operational control in their areas of responsibility has been unaffected by land management laws; in other words, no portions of these stations’ jurisdictions have had their border security status, such as “controlled,” “managed,” or “monitored,” downgraded as a result of land management laws. Instead, for these stations, the primary factor affecting operational control has been the remoteness and ruggedness of the terrain or the dense vegetation their agents patrol and monitor. Specifically, patrol agents-in-charge at 18 stations told us that stark terrain features—such as rocky mountains, deep canyons, and dense brush—have negatively affected their agents’ abilities to detect and apprehend undocumented aliens. For example, a patrol agent-in-charge whose station is responsible for patrolling federal land in southern California told us that the terrain is so rugged that Border Patrol agents must patrol and pursue undocumented aliens on foot; even all-terrain vehicles specifically designed for off-road travel cannot traverse the rocky terrain. He added that because of significant variations in topography, such as deep canyons and mountain ridges, surveillance technology can also be ineffective in detecting undocumented aliens who hide there.
Similarly, patrol agents-in-charge responsible for patrolling certain Fish and Wildlife Service land reported that dense vegetation limits agents’ ability to patrol or monitor much of the land. One agent explained that Border Patrol’s technology resources were developed for use in deserts, where few terrain features obstruct surveillance, whereas the vegetation in these areas is dense and junglelike. The majority of patrol agents-in-charge also told us that the most important resources for achieving and maintaining operational control on federal lands along the southwestern border are (1) a sufficient number of agents; (2) additional technology resources, such as mobile surveillance systems; and (3) tactical infrastructure, such as vehicle and pedestrian fencing. For example, in the remote areas of one national wildlife refuge, a patrol agent-in-charge told us that even with greater access in the refuge, he would not increase the number of agents patrolling it to gain improvements in operational control. Instead, he said, deploying additional technology resources, such as a mobile surveillance system, would be more effective in achieving operational control of the area because such systems would assist in detecting undocumented aliens while allowing agents to maintain their presence in and around a nearby urban area, where the vast majority of illegal entries occur. His view, and those of other patrol agents-in-charge whom we interviewed, is underscored by Border Patrol’s operational assessments—twice-yearly planning documents that stations and sectors use to identify impediments to achieving or maintaining operational control and to request resources needed to achieve or maintain operational control. In these assessments, stations have generally requested additional personnel or technology resources for their operations on federal lands.
Delays or restrictions in gaining access have generally not been identified in operational assessments as an impediment to achieving or maintaining operational control for the 26 stations along the southwestern border. Of the 26 patrol agents-in-charge we interviewed, 4 reported that delays and restrictions in gaining access to federal lands had negatively affected their ability to achieve or maintain operational control. However, 2 of these stations have not requested any additional resources as part of Border Patrol’s operational assessments, and the other 2 that did request additional resources were denied because of higher agency priorities. For example, the patrol agent-in-charge responsible for a land unit in southwestern New Mexico told us that operational control in a remote area of his jurisdiction is partly affected by the scarcity of roads. Having an additional road in this area would allow his agents to move surveillance equipment to an area that, at present, is rarely monitored. However, according to a supervisory agent for the sector, station officials did not request additional access through Border Patrol’s operational assessments for this additional road, and land managers in this area told us they would be willing to work with Border Patrol to facilitate such access, if requested. Similarly, the patrol agent-in-charge at a Border Patrol station responsible for patrolling another federal land unit in Arizona reported that his ability to achieve operational control is also affected by a shortage of east-west roads in the unit. He told us that some of his area could potentially reach operational control status if there was an additional east-west road. In this case, the Border Patrol station did request an additional east-west road from the land management agency, but the land manager denied the request because the area is designated as wilderness, according to the patrol agent-in-charge.
As a result of this denial, the patrol agent-in-charge did not pursue a request for resources through the Border Patrol’s operational assessment. The land manager told us that he would be willing to work with Border Patrol to facilitate additional access if it could be shown that such access would help increase deterrence and apprehensions closer to the border. For the other 2 stations reporting that federal land management laws had negatively affected their ability to achieve or maintain operational control, Border Patrol sector or headquarters officials had denied the stations’ requests for resources to facilitate increased or timelier access—typically for budgetary reasons. For example, one patrol agent-in-charge reported that 1.3 miles of border in her area of responsibility are not at operational control because, unlike most other border areas, it has no access road directly on the border. Further, she explained, the rough terrain has kept Border Patrol from building a road on the border. Instead, a road would need to be created in an area designated as wilderness. According to the patrol agent-in-charge, her station asked Border Patrol’s sector office for an access road, and the request was submitted as part of the operational requirements-based budgeting program. As of July 2010, the request had not been approved because of budgetary constraints, according to the agent-in-charge. In addition, another patrol agent-in-charge told us, few roads lie close to the river that runs through his area of responsibility. As a result, his agents have to patrol and monitor nearly 1 mile north of the international border, much closer to urban areas. According to officials with Border Patrol’s relevant sector office, they have been using the operational assessments for several years to request an all-weather road, but approval and funding have not been granted by Border Patrol’s headquarters.
Information sharing and communications among Border Patrol, Interior, and Forest Service have generally increased over the last several years, according to Border Patrol and federal land law enforcement officials in the Tucson sector, but critical gaps remained in implementing interagency agreements. As we stated earlier, DHS, Interior, and Agriculture had established the 2006 memorandum of understanding in part to facilitate the exchange of threat information on federal lands, and a 2008 memorandum of understanding among these agencies established a common secure radio encryption key for communicating information on daily operations. The lack of early and continued consultation among agencies to implement these agreements has resulted in critical information-sharing gaps that compromise officer safety and a timely and effective coordinated law enforcement response to illegal activity on federal lands. Specifically, Border Patrol officials in the Tucson sector did not consult with federal land management agencies before discontinuing dissemination of daily situation reports that federal land law enforcement officials relied on for a common awareness of the types and locations of illegal activities observed on federal borderlands. Implementation of the 2006 memorandum of understanding’s requirement for DHS, Interior, and Agriculture to establish a framework for sharing threat information could help ensure that law enforcement officials operating on federal lands have access to threat information they consider necessary to efficiently and effectively complete their missions. In addition, DHS, Interior, and Agriculture officials did not coordinate to ensure that all federal law enforcement partners could monitor secure radio communications regarding daily operations on federal lands in the Tucson sector.
Specifically, in 2009 Border Patrol changed the secure radio encryption key used by Border Patrol agents in the Tucson sector to communicate on daily operations without consulting with Interior or Agriculture. To remedy these communication challenges, Border Patrol headquarters issued guidance in April 2010 instructing that secure radio communications regarding daily operations should be switched from the new encryption key back to the common encryption key compatible with Interior and Agriculture. However, since the Border Patrol’s April 2010 guidance applies only to the Tucson sector, secure radio compatibility problems could persist in other Border Patrol sectors. In our November 2010 report, we recommended that DHS, Interior, and Agriculture take necessary action to ensure that personnel at all levels of each agency conduct early and continued consultations to implement provisions of the 2006 memorandum of understanding, including the coordination of threat information for federal lands that is timely and actionable, and the coordination of future plans for upgrades of compatible radio communications used for daily law enforcement operations on federal lands. The agencies concurred with these recommendations. In January 2011, Customs and Border Protection issued a memorandum to all Border Patrol division chiefs and chief patrol agents emphasizing the importance of Interior and Agriculture partnerships to address border security threats on federal lands. This action is a positive step toward implementing our recommendations, and we encourage DHS, Interior, and Agriculture to take the additional steps necessary to monitor and uphold implementation of the existing interagency agreements in order to enhance border security on federal lands. Chairman Chaffetz, Chairman Bishop, Ranking Member Tierney, Ranking Member Grijalva, and Members of the Subcommittees, this concludes my prepared statement.
I would be pleased to answer any questions that you may have at this time. For further information about this testimony, please contact Anu K. Mittal at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Richard Stana, Director; Elizabeth Erdmann, Assistant Director; Lucinda Ayers, Assistant Director; Nathan Anderson; and Richard P. Johnson also made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
To stem the flow of illegal traffic from Mexico into the United States along the U.S. southwestern border, the Border Patrol has, over the last 5 years, nearly doubled the number of agents on patrol, constructed hundreds of miles of border fences, and installed a variety of surveillance equipment. About 40 percent of these border lands are managed by the Departments of the Interior and Agriculture, and coordination and cooperation between Border Patrol and land management agencies are critical to ensure national security. As requested, this statement summarizes GAO's findings from two reports issued on southwest border issues in the fall of 2010. The first report, GAO-11-38, focused on the key land management laws that Border Patrol must comply with and how these laws affect the agency's operations. The second report, GAO-11-177, focused on the extent to which Border Patrol and land management agencies' law enforcement units share threat information and communications. When operating on federal lands, Border Patrol must comply with the requirements of several federal land management laws, including the National Environmental Policy Act, Wilderness Act, and Endangered Species Act. Border Patrol must obtain permission or a permit from federal land management agencies before agents can undertake operations, such as maintaining roads and installing surveillance equipment, on federal lands. To fulfill these requirements, Border Patrol generally coordinates with land management agencies through national and local interagency agreements. The most comprehensive agreement is a 2006 memorandum of understanding between the Departments of Homeland Security, Agriculture, and the Interior that is intended to guide Border Patrol activities on federal lands. Border Patrol's access to some federal lands along the southwestern border has been limited because of certain land management laws, according to 17 of 26 patrol agents-in-charge that GAO surveyed.
For example, these patrol agents-in-charge reported that implementation of these laws had resulted in delays and restrictions in their patrolling and monitoring operations. Specifically, 14 patrol agents-in-charge reported that they had been unable to obtain a permit or permission to access certain areas in a timely manner because of the time it takes for land managers to conduct required environmental and historic property assessments. The 2006 memorandum of understanding directs the agencies to cooperate and complete, in an expedited manner, all compliance required by applicable federal laws, but such cooperation has not always occurred. For example, when Border Patrol requested permission to move surveillance equipment, it took the land manager more than 4 months to conduct the required historic property assessment and grant permission, but by then illegal traffic had shifted to other areas. Despite the access delays and restrictions experienced by these stations, 22 of the 26 patrol agents-in-charge reported that the overall security status of their jurisdiction had not been affected by land management laws. Instead, factors such as the remoteness and ruggedness of the terrain have had the greatest effect on their ability to achieve operational control in these areas. Four patrol agents-in-charge reported that delays and restrictions had affected their ability to achieve or maintain operational control, but they either had not requested resources for increased or timelier access or their requests had been denied by senior Border Patrol officials because of higher priority needs of the agency. Information sharing and communication among the agencies have increased in recent years, but critical gaps remain in implementing interagency agreements. 
Agencies established forums and liaisons to exchange information; however, in the Tucson sector, agencies did not coordinate to ensure that federal land law enforcement officials had access to threat information and compatible secure radio communications for daily operations. GAO found that enhanced coordination in these areas could better ensure officer safety and a more efficient law enforcement response to illegal activity along the southwest border. This statement contains no new recommendations. In its 2010 reports, GAO made several recommendations to the Departments of Agriculture, Homeland Security, and the Interior to help expedite Border Patrol's access to federal lands and recommended that the agencies take actions to improve communication and information sharing. The departments concurred with GAO's recommendations in those reports.
Homeownership counseling refers to prepurchase and postpurchase counseling of homeowners and is a subset of housing counseling, which can also include assistance to renters and homeless populations. Prepurchase counseling generally refers to counseling for potential homebuyers to learn about whether and when to buy a home and how to manage a mortgage, budget for repairs, and fulfill other financial responsibilities of being a homeowner. Postpurchase counseling primarily refers to foreclosure mitigation counseling, which focuses on helping financially distressed homeowners avoid foreclosure by working with lenders to cure mortgage delinquency but can also include subjects such as home maintenance. Counseling can take place in person, over the telephone, via a self-study computer module, or with a workbook, and can vary in length from a single session to several sessions spread over a period of weeks or months. The federal government funds homeownership counseling through a number of programs at HUD, Treasury, the Department of Defense, and the Department of Veterans Affairs. Congress has also provided targeted support for foreclosure mitigation counseling. For example, in recent years, Congress has appropriated funds to the National Foreclosure Mitigation Counseling (NFMC) Program, which was designed to rapidly expand the availability of foreclosure mitigation counseling. NFMC is administered by NeighborWorks® America, a government-chartered, nonprofit corporation with a national network of affiliated organizations, which competitively distributes NFMC funds to other recipients. The limited body of literature on homeownership counseling does not provide conclusive findings on the impact of all types of homeownership counseling. Some studies suggest that foreclosure mitigation counseling can be effective in improving mortgage outcomes (e.g., remaining current on mortgage payments versus defaulting or losing the home to foreclosure). 
However, findings on prepurchase counseling are less clear. Research on homeownership counseling is limited in part because of data limitations and other challenges. Recent research on foreclosure mitigation counseling suggests that it can help struggling mortgage borrowers avoid foreclosure and prevent them from lapsing back into default, especially if the counseling occurs early in the foreclosure process. A 2010 evaluation of NFMC found that homeowners who received counseling under the program were more likely to receive loan modifications and remain current on their mortgages after counseling, compared with a group of non-NFMC borrowers with similar observable characteristics. Specifically, the authors estimated that borrowers who received NFMC counseling were 1.7 times more likely to “cure” their foreclosure (i.e., be removed from the foreclosure process by their mortgage servicer) than borrowers who did not receive NFMC counseling. The authors also estimated that loan modifications received by NFMC clients in the first 2 years of the program resulted in monthly mortgage payments that averaged $267 less than they would have paid without the program’s help. Additionally, the study found that in 2008, borrowers who received NFMC counseling before a loan modification had an estimated 53 percent better chance of bringing their mortgages current than borrowers who did not receive premodification counseling. Other studies of foreclosure prevention counseling have also found that the timing of the counseling was critical and that the earlier in the foreclosure process borrowers received counseling, the more likely they were to have a positive outcome. The findings on the impact of prepurchase counseling are less clear. For example, a 2001 study analyzed data on the performance of about 40,000 mortgages made under a Freddie Mac program for low- to moderate-income homebuyers, a large majority of whom received prepurchase counseling. 
The authors compared the loan performance of program participants who received different types of prepurchase counseling to the loan performance of participants who did not. The study found that borrowers who underwent individual and classroom counseling were 34 and 26 percent less likely, respectively, to become 90 days delinquent on their mortgages than similar borrowers who did not undergo counseling. However, subsequent studies have found either no effect on loan performance or effects that were potentially attributable to other factors. For example, a 2008 study of about 2,700 mortgage borrowers found that prepurchase counseling had no effect on a borrower’s propensity to default. A 2009 study examined a legislated pilot program in 10 Illinois ZIP codes that mandated prepurchase counseling for mortgage applicants whose credit scores were relatively low or who chose higher-risk mortgage products such as interest-only loans. Although the authors found that mortgage default rates for the counseled low-credit score borrowers were lower than those for a comparison group, the authors attributed this result primarily to lenders tightening their screening of borrowers in response to stricter regulatory oversight. Additional empirical research on the impact of housing counseling is under way at HUD and Fannie Mae. HUD’s Office of Policy Development and Research issued a broad overview of the housing counseling industry in 2008 and is currently conducting two studies on mortgage outcomes related to foreclosure mitigation and prepurchase counseling programs. The foreclosure mitigation study will follow 880 individuals and evaluate mortgage outcomes 12 months after counseling ends. HUD officials said that they expected the study to be published in 2012. The prepurchase counseling study will track 1,500 to 2,000 individuals who receive different types of counseling (one-on-one, group, Internet, or telephone) or no counseling. 
HUD officials said that they expected data collection for this study to begin in 2012. In addition, Fannie Mae is conducting both prepurchase and postpurchase counseling studies. According to Fannie Mae officials, the prepurchase study will track over a 2-year period the loan performance of borrowers who received counseling prior to purchasing a home. The postpurchase study will evaluate the impact of telephone counseling on existing homeowners who receive loan modifications through HAMP. Conducting research on homeownership counseling outcomes is challenging for a variety of reasons, and limitations in the methodologies used in existing studies make it difficult to generalize the results or compare outcomes across various studies. According to housing counseling researchers we spoke with, the primary barrier in the study of housing counseling is a lack of data. Long-term data on counseling outcomes are limited because of the difficulty of tracking counseling recipients after the counseling ends. In addition, many counseling agencies are hesitant to request sensitive personal information from clients. One researcher we spoke with told us that the ability to track loan performance over time is critical to an effective assessment of housing counseling programs. For this reason, some counseling researchers have begun working with lenders and mortgage servicers to access information on the payment status (e.g., current or delinquent) of counseling recipients and the long-term outcomes of their mortgages. Another limitation of the current research is the lack of experimental research design, which is considered the best approach for evaluating differences in an intervention such as counseling and comparing it to no intervention. We did not identify any published studies that evaluated homeownership counseling using an experimental design. For this and other reasons, researchers have been hesitant to draw firm conclusions from the published literature. 
For example, differences among counseling programs—in terms of curriculum, intervention method (e.g., one-on-one, telephone, or classroom), level of intervention (e.g., intensity or amount of time spent counseling), and outcome measures—make it difficult to draw broader conclusions about the impact of housing counseling. Establishing meaningful measures of the impact of homeownership counseling programs is also a significant challenge. Our recent evaluation of Treasury’s Financial Education and Counseling Pilot Program illustrates this point. As a condition of receiving grant funds under the program, grantees are required to report on the results of five performance goals within 6 months of disbursement and annually thereafter. We found that some grantees were calculating the results of their impact measures in erroneous or misleading ways or were not fully capturing meaningful information, potentially limiting the usefulness of these data for assessing program effectiveness. For example, one grantee inaccurately calculated the average percentage increase in prospective homebuyer savings. According to the grantee’s calculation, a participant who began a financial education and counseling program with no savings but subsequently saved $500 was shown to have a 50,000 percent increase in savings. In fact, a percentage increase cannot be meaningfully calculated from zero savings because any percentage increase on zero is infinite. We also identified alternatives to the methods of calculating impact measures that the grantees were using. For example, we noted that instead of just measuring changes in clients’ savings, it might be advantageous to focus on net savings—that is, savings minus debt—to provide a more complete picture of an individual’s financial situation. Treasury officials told us that they had discussed the specific impact measures with each grantee in the pilot program but had not provided guidance on how to calculate the results. 
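The measurement pitfall described above can be made concrete in a short sketch. The function names and figures below are illustrative, not part of Treasury's program guidance; the sketch simply shows why a percentage change cannot be computed from a zero baseline and how a net-savings measure (savings minus debt) stays well defined.

```python
def percent_change(initial, final):
    """Percentage increase in savings; undefined when the baseline is zero."""
    if initial == 0:
        return None  # any increase from $0 is infinite, not a meaningful percentage
    return (final - initial) / initial * 100

def net_savings(savings, debt):
    """Net savings (savings minus debt), a fuller picture of financial position."""
    return savings - debt

# A participant who starts with $0 and later saves $500:
percent_change(0, 500)   # None -- no meaningful percentage exists
net_savings(500, 200)    # 300 -- the change in net position is still measurable
```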
We recommended that Treasury provide additional guidance or technical assistance to the grantees on how to accurately and meaningfully calculate the results of the impact measures. Treasury stated that it concurred with the observations in our report and plans to provide grantees with supplemental guidance on impact measures before the next reporting deadline. In prior work, we found shortcomings in HUD’s and Treasury’s implementation of homeownership counseling requirements for the HECM and HAMP programs. In 2009, we evaluated HUD’s implementation of the counseling requirements associated with the HECM program and found that HUD’s internal controls did not provide reasonable assurance that counseling providers were complying with the program requirements. GAO’s undercover participation in 15 HECM counseling sessions found that while the counselors generally conveyed accurate and useful information, none of the counselors covered all of the topics required by HUD, and some overstated the length of the sessions in HUD records. For example, 7 of the 15 counselors did not discuss required information about alternatives to HECMs, and 6 of the 15 counselors overstated the length of the session. HUD had several internal controls designed to help ensure that counselors conveyed required information to prospective HECM borrowers but had not tested the effectiveness of these controls and lacked procedures to help ensure that records of counseling sessions were accurate. Because of these weaknesses, some prospective borrowers may not have received all of the information necessary to make informed decisions about obtaining a HECM. We recommended specific changes that HUD should make to improve the effectiveness of the agency’s internal controls so that they provided reasonable assurance of compliance with HECM counseling requirements. 
Since our report, HUD has implemented our recommendations by creating additional internal controls and guidance for counselors on how to comply with program responsibilities, as well as developing a “mystery shopping” initiative to better evaluate compliance among HECM counselors. In 2009, we also evaluated Treasury’s implementation of the foreclosure mitigation counseling requirements of HAMP. We found that Treasury did not plan to systematically track borrowers with high debt burdens, who were required to obtain foreclosure mitigation counseling, to determine whether they actually received counseling or if it was effective. Treasury officials told us that they made this decision because they did not want to deny a loan modification to borrowers who successfully made modified payments during a 90-day trial period but did not obtain counseling. Treasury also did not want to delay modifications under the program until servicers had arranged to coordinate with counselors to track whether borrowers obtained counseling. We noted that without knowing whether borrowers who were required to obtain counseling actually did so or evaluating the performance of counseled and noncounseled borrowers, Treasury would not know whether the requirement was meeting its purpose of reducing redefaults among borrowers with high debt burdens. We recommended in 2009 that Treasury consider methods of monitoring whether borrowers required to receive housing counseling as part of HAMP modifications did receive it and seek to determine whether the counseling did limit redefaults. Treasury staff said in 2010 that they had considered options for monitoring the proportion of borrowers that obtained counseling but had determined that implementing a monitoring process would be too burdensome for Treasury and mortgage servicers. 
Additionally, Treasury officials said they had no plans to assess the effectiveness of counseling in limiting redefaults, in part because they believed that the benefits of counseling on the performance of borrowers with high debt burdens were well documented. We continue to believe that monitoring the extent to which borrowers receive counseling and the redefault rates for counseled and noncounseled borrowers would provide valuable information about whether the counseling requirement is having its intended effect. To enhance consumer protections for homebuyers and tenants, the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) requires HUD to establish an Office of Housing Counseling. This office is mandated to perform a number of functions related to homeownership and rental housing counseling, including establishing housing counseling requirements, standards, and performance measures; certifying individual housing counselors; conducting housing counseling research; and performing public outreach. The office is also mandated to continue HUD’s role in providing financial assistance to HUD-approved counseling agencies in order to encourage successful counseling programs and help ensure that counseling is available in underserved areas. Currently, HUD’s housing counseling program operates out of the Program Support Division within the Office of Single-Family Housing. HUD supports housing counseling through the division in two ways. First, it approves and monitors housing counseling agencies that meet HUD criteria and makes information about these agencies available to consumers on HUD’s website. According to HUD officials, as of August 2011, about 2,700 counseling agencies were HUD-approved. Second, HUD annually awards competitive grants to approved agencies to help them carry out their counseling efforts. 
HUD’s housing counseling program provides funding for the full spectrum of housing counseling, including prepurchase counseling, foreclosure mitigation counseling, rental housing counseling, reverse mortgage counseling for seniors, and homeless assistance counseling. HUD-approved agencies report to HUD on the number and type of service interactions (e.g., counseling sessions) they have with clients. Self-reported data on homeownership counseling conducted by these agencies indicate that service interactions for foreclosure mitigation counseling rose from about 171,000 in 2006 to more than 1.4 million in 2010, while service interactions for prepurchase counseling declined from about 372,000 to about 245,000 over the same period. Besides these two main functions, the Program Support Division and other HUD staff perform other counseling-related activities, some of which are similar to the functions the Dodd-Frank Act requires of the new counseling office. For example, HUD has developed standards and protocols for reverse mortgage counseling, certifies individual reverse mortgage counselors, is conducting research on the impact of homeownership counseling, and recently launched a public awareness campaign on loan modification scams. A working group within HUD is in the process of developing a plan for the new counseling office. According to HUD officials, the primary change needed to create the new office is the reassignment of staff who spend time on housing counseling activities but also have other responsibilities. In July 2011, we reported that HUD expected the new office to consist of approximately 160 full-time staff members, but HUD has indicated more recently that the office may be considerably smaller. In order to move forward with the establishment of the office and the appointment of a Director of Housing Counseling, HUD must submit a reorganization plan to Congress. 
According to a HUD official, HUD is still developing its proposal for the new counseling office and is unable to estimate when it will be submitted to Congress. HUD officials told us that the new counseling office would have advantages over their current organizational structure. They indicated that having dedicated resources, staff, and leadership would raise the profile of the housing counseling function and help the agency build a more robust capacity in this area. One official noted that getting sufficient information technology resources for housing counseling had been difficult and said that a separate counseling office might be able to compete more effectively with other parts of the agency for these resources. HUD officials also indicated that the new office would be organized to help the agency better anticipate and respond to changing counseling needs and improve interaction with counseling industry stakeholders. For example, the officials said that the new office would be organized around functional areas such as policy, training, and oversight, making it easier for industry stakeholders to direct their questions or concerns to the appropriate HUD staff. Additionally, HUD officials told us that the office would work with the Bureau of Consumer Financial Protection’s Office of Financial Education to coordinate the housing counseling activities of both organizations. Mortgage industry participants, consumer groups, and housing researchers we spoke with were supportive of the new housing counseling office and believed that it offered opportunities to enhance HUD’s role in the housing counseling arena. For example, some of the consumer groups stated that the office could help standardize counseling practices and publicize best practices, further elevating and professionalizing the counseling industry. 
In addition, representatives from several of the consumer groups and researchers with whom we met stated that the office could help enhance coordination among counseling agencies by providing opportunities for improved training, networking, and communication. Furthermore, they said that the office could potentially support improved data collection for research on the impact of housing counseling. Budget constraints could affect the establishment of the new counseling office and reduce the scale of HUD’s housing counseling activities. Although the Dodd-Frank Act authorized $45 million per year through 2012 for the operations of the new office, HUD had not received any appropriations for this purpose as of August 2011. In addition, appropriations for fiscal year 2011 eliminated HUD’s housing counseling assistance funds, which are primarily grant funds for approved counseling agencies. HUD officials said they planned to award and obligate about $10 million in unspent fiscal year 2010 counseling assistance funds in the 2011 fiscal year. However, the officials said that some counseling agencies had already reduced the level of services they provided due to the elimination of the fiscal year 2011 funds. Housing counseling groups we spoke with said that the cuts in HUD funding, which they use to leverage private funds, ultimately could result in fewer counseling services for prospective and existing homeowners unless private funds make up the difference. Chairman Biggert, Ranking Member Gutierrez, and Members of the Subcommittee, this concludes my prepared statement. I would be happy to answer any questions you may have at this time. For further information on this testimony, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. 
Key contributors to this testimony include Steve Westley, Assistant Director; Randall Fasnacht; Alise Nacson; and Emily Chalmers. Financial Education and Counseling Program. GAO-11-737R. Washington, D.C.: July 27, 2011. Mortgage Reform: Potential Impacts of Provisions in the Dodd-Frank Act on Homebuyers and the Mortgage Market. GAO-11-656. Washington, D.C.: July 19, 2011. Troubled Asset Relief Program: Treasury Actions Needed to Make the Home Affordable Modification Program More Transparent and Accountable. GAO-09-837. Washington, D.C.: July 23, 2009. Reverse Mortgages: Product Complexity and Consumer Protection Issues Underscore Need for Improved Controls over Counseling for Borrowers. GAO-09-606. Washington, D.C.: June 29, 2009. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Homeownership counseling can help consumers learn about buying a home and give them tools to deal with setbacks that could keep them from making timely mortgage payments. The Department of Housing and Urban Development (HUD) approves and provides grants to housing counseling agencies and has also implemented a requirement that borrowers seeking federally insured reverse mortgages through the Home Equity Conversion Mortgage (HECM) program receive counseling before taking out a HECM. The U.S. Department of the Treasury (Treasury) has also implemented a counseling requirement as part of its mortgage modification efforts under the Home Affordable Modification Program (HAMP). This statement discusses (1) what research suggests about the effectiveness of homeownership counseling and the challenges of conducting such research, (2) shortcomings that prior GAO work found in federal agencies' implementation of homeownership counseling requirements, and (3) the status of efforts to establish an Office of Housing Counseling within HUD. In preparing this statement, GAO relied on its past work on homeownership counseling, including a review of research and interviews with federal agency staff on implementing and evaluating counseling programs. The body of literature on homeownership counseling does not provide conclusive findings on the impact of all types of counseling. Recent research on foreclosure mitigation counseling--which helps financially distressed homeowners who are delinquent on payments--suggests that it can help homeowners avoid foreclosure and prevent them from lapsing back into default. Findings on prepurchase counseling--which helps potential homebuyers learn about buying a home and explains the financial responsibilities of homeownership--are less clear. One study concluded that such counseling lowered the default rate for new homeowners, but other studies showed no effect. 
Efforts to measure the impact of homeownership counseling have been hampered by a lack of data, as well as by challenges in designing studies and creating effective performance measures. Further studies are under way at HUD and Fannie Mae that are designed to overcome some of these limitations. Prior GAO work identified shortcomings in the implementation of homeownership counseling requirements for two federal programs. A 2009 study of the HECM program found that HUD's internal controls did not ensure that counselors were complying with program requirements. HUD later made improvements to the HECM program to address GAO's recommendations. Another GAO study from 2009 found that Treasury did not effectively track whether borrowers required to seek counseling under HAMP actually received it or whether counseling reduced the rate of redefaults. Treasury officials said that they had not implemented a monitoring process because it would be too burdensome for Treasury and mortgage servicers. They also did not plan to assess the effectiveness of counseling in limiting redefaults, in part because they believed that the benefits of counseling on the performance of borrowers with high debt burdens were well documented. GAO continues to believe that monitoring and assessment would provide valuable information on whether the counseling requirement is having its intended effect. HUD is establishing a new Office of Housing Counseling, as required by the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act). According to HUD, the agency is developing a reorganization plan but is unable to estimate when it will be submitted to Congress. Budget constraints could affect the new counseling office. Although the Dodd-Frank Act authorized $45 million per year through fiscal year 2012 for the operations of the new office, HUD has not received appropriations for this purpose. 
In addition, appropriations for fiscal year 2011 eliminated HUD's housing counseling assistance funds, which are primarily grant funds for approved counseling agencies. GAO has made recommendations to HUD and Treasury to improve oversight and evaluation of their homeownership counseling requirements. HUD implemented the recommendations, while Treasury said that implementation would be too burdensome.
An annuity is an insurance agreement or contract that comes in a number of different forms and can help individuals accumulate money for retirement through tax-deferred savings, provide them with monthly income that can be guaranteed to last for as long as they live, or both. A variable annuity is an insurance contract in which a consumer makes payments that are held in a separate account of the insurer. While the insurance company is the owner of the separate account assets, the assets are held for the benefit of consumers. In return, the insurer agrees to make periodic payments beginning immediately or at some future date. The purchaser’s payments can be directed into a range of investment options, typically mutual funds, which the insurer makes available in a separate account for the benefit of consumers. Purchasers may withdraw assets from their contracts at any time prior to annuitizing—that is, converting the account into some form of lifetime payments. VA/GLWBs protect consumers against outliving their retirement assets and the effects that market losses on those assets can have on lifetime income by allowing them to withdraw a certain percentage each year until death. If the market performs well, the consumer may receive larger withdrawals, but if the market performs poorly, the consumer still receives the set withdrawal amount. VA/GLWB sales have grown in recent years. Data from LIMRA show that from 2008 to 2011, the number of VA/GLWB contracts in force rose from 1.5 million to 2.8 million and that average annual sales were around $58 billion. During that same period, total VA/GLWB assets held in insurers’ accounts increased from $133 billion to $323 billion. According to the Insured Retirement Institute, states with the highest sales of variable annuities in 2010 were California, New York, Florida, Texas, Pennsylvania, and New Jersey. 
In addition, LIMRA data on the demographics of VA/GLWB consumers show that in 2010 the average age of the consumer purchasing a VA/GLWB was 61 and the average age at first withdrawal was 68. Typically, the average annual withdrawal has been around $5,500. Also according to LIMRA, the average amount of a VA/GLWB contract sale from 2007 through 2010 was around $106,000. Like a GLWB rider, a CDA is an insurance contract that provides guaranteed lifetime income payments if a consumer’s investment account is exhausted, whether through withdrawals or poor market performance. In this case, the investment account contains the “covered assets”— typically mutual funds or managed accounts. However, the insurance company does not have ownership of the assets underlying the CDA, which are typically held in brokerage or investment advisory accounts owned by the CDA purchaser. Similar to a VA/GWLB, a CDA contract defines how much a consumer is able to withdraw—for example, 5 percent of the benefit base annually. Even if the value of the covered assets drops to zero, the insurance company has guaranteed the 5 percent withdrawal benefit (based on the benefit base value when lifetime withdrawals began) and continues making the annual payments to the consumer. Whether or not a policyholder receives payment from the insurance company selling the CDA is contingent upon the covered assets dropping to zero. On the basis of the products we reviewed, fees on CDAs are calculated as a set percentage of the investment assets or benefit base per year. These fees do not include any fees the consumer might pay related to the underlying investment that is covered by the CDA. Such investment management fees are paid to the investment company, not the insurer. State insurance regulators are responsible for overseeing insurance products, while SEC and FINRA are responsible for the oversight of securities. 
Federal and state regulators consider variable annuities to be both insurance and securities products, and GLWB riders that are attached to variable annuities to be an additional insurance benefit. The National Association of Insurance Commissioners committee responsible for life insurance and annuities products has determined CDAs to be life insurance products subject to state law and regulation for annuities. According to SEC officials, existing CDAs have been registered as securities with SEC, and therefore are covered by both federal securities laws and regulations, and state insurance regulations. Insurance is unique among financial services in the United States in that it is largely regulated by the states. State insurance regulators are responsible for enforcing state insurance laws and regulations, including those covering the licensing of agents, reviewing insurance products (including variable annuities) and their rates, and examining insurers’ financial solvency and market conduct. State regulators typically perform financial solvency examinations every 3 to 5 years, and they generally undertake market conduct examinations in response to specific consumer complaints or regulatory concerns. State regulators also monitor the resolution of consumer complaints against insurers. State insurance laws focus on solvency, market regulation, and consumer protection. In addition to state insurance regulators, NAIC—a voluntary association of the heads of insurance departments from the 50 states, the District of Columbia, and five U.S. territories—plays a role in insurance regulation. While NAIC is not a regulator, it provides guidance and services designed to more efficiently coordinate interactions between insurers and state regulators. 
These services include providing detailed insurance data to help regulators understand insurance sales and practices; maintaining a range of databases useful to regulators; and coordinating regulatory efforts by providing guidance, model laws and regulations, and information-sharing tools. Federal securities laws and SEC rules govern the securities industry in the United States. SEC’s mission is to protect investors; maintain fair, orderly, and efficient markets; and facilitate capital formation. SEC oversees key participants in the securities markets, including securities exchanges, securities brokers and dealers, investment advisers, and mutual funds. The Securities Act of 1933 (1933 Act) regulates public offerings of securities, requiring that issuers register securities with SEC and provide certain disclosures, including a prospectus, to investors at the time of sale. Investors may rely on broker-dealers and investment advisers for information or advice about securities, including insurance products such as VA/GLWBs and CDAs. Money managers, investment counselors, and financial planners who, for compensation, engage in the business of providing advice to others about securities, including asset allocation advice, are subject to the antifraud provisions of the Investment Advisers Act of 1940. Large investment advisers, those with $100 million or more of assets under management, generally are subject to SEC registration and regulation under the Investment Advisers Act and accompanying rules. Investment advisers with assets under management of less than $100 million generally are regulated by the states. Securities, including annuities that are securities, are subject to registration and disclosure requirements under the 1933 Act, and large investment advisers providing advice about securities must be registered with SEC. 
Broker-dealers that are engaged in the business of buying and selling securities generally are subject to broker-dealer regulation at the federal and state levels. FINRA is the largest regulator of securities firms doing business with the public in the United States. All registered securities broker-dealers who do business with the public must be members of FINRA and their personnel must be licensed with FINRA. FINRA oversees almost 4,500 brokerage firms and approximately 630,000 registered securities representatives. VA/GLWBs and CDAs share a number of features, but they also have some important structural differences. For example, both provide consumers with access to investment assets and the guarantee of lifetime income, but while VA/GLWB assets are held in a separate account of the insurer for the benefit of the annuity purchaser, the assets covered by a CDA are generally held in an investment account owned by the CDA purchaser. In part because of their shared features, these products can provide similar benefits to consumers. Yet as complex instruments that require consumers to make multiple important decisions, they also present certain risks to consumers. VA/GLWBs and CDAs may also involve risks for insurers, who must manage these risks in order to make promised payments to consumers. VA/GLWBs and CDAs share a number of product features. In general, both allow consumers to take lifetime withdrawals from their assets at a rate that the insurance company guarantees even if such withdrawals and investment losses deplete the consumer’s assets. VA/GLWBs and CDAs generally have three distinct phases of ownership: an accumulation phase, a withdrawal phase, and an insured phase (see fig. 1). 
The accumulation phase begins when a consumer purchases a VA/GLWB or CDA contract. The initial premium paid under a VA contract with a GLWB rider or the value of assets covered by a CDA contract establishes the initial withdrawal value, or benefit base, on which the amount of lifetime withdrawals is based. This benefit base is not a cash value that can be withdrawn, but rather is the amount to which lifetime withdrawal percentages will be applied during the withdrawal phase. The investment account value, on the other hand, represents the total value of the consumer’s investments, which is increased by investment gains and decreased by fees, withdrawals, and any investment losses. During this phase the consumer decides how to allocate investment assets among various options, including funds made available by an insurance company for investment under its VA/GLWB products and funds that an insurance company has agreed to cover under its CDA products. Also during this phase, the insurance company monitors each consumer’s account value and automatically adjusts the benefit base periodically should investment gains increase the value of this account. This feature, which exists for the VA/GLWBs and CDAs we reviewed, is referred to as a step-up or ratchet. Once a consumer’s benefit base is stepped up, it cannot later decline because of investment losses that reduce the consumer’s investment account value. Some VA/GLWBs have an additional feature that guarantees that, no matter how the investments perform, the benefit base will grow each year by a set percentage. This guaranteed percentage, alternatively referred to as a roll-up rate, growth credit, or bonus, is added to the benefit base, as adjusted for any prior step-ups. For example, some of the VA/GLWBs we reviewed included annual roll-up rates that ranged from 5 to 7 percent. 
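The interaction of roll-ups and step-ups during the accumulation phase can be sketched as a single annual update. The 6 percent roll-up rate and the dollar figures below are hypothetical, and real contracts vary in how often adjustments occur and how long roll-ups last; this is only an illustration of the ratchet logic described above.

```python
def update_benefit_base(benefit_base, account_value, roll_up_rate=0.06):
    """One annual benefit-base adjustment during the accumulation phase.

    The base grows by the guaranteed roll-up (applied to the base as adjusted
    for prior step-ups), then steps up to the account value if investment
    gains pushed the account higher. Once stepped up, the base never declines
    with later investment losses.
    """
    rolled_up = benefit_base + benefit_base * roll_up_rate
    return max(rolled_up, account_value)

# Year 1: the account falls to $95,000, but the 6% roll-up lifts the base to $106,000.
base = update_benefit_base(100_000, 95_000)
# Year 2: a strong market lifts the account to $120,000, so the step-up applies.
base = update_benefit_base(base, 120_000)
```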
The amount of the roll-up rate and the frequency, duration, and manner of calculation can affect the value of a consumer’s benefit base and thus the amount that can be taken as lifetime withdrawals. While such features can increase a consumer’s benefit base on which lifetime withdrawals are determined, annuities with higher roll-up rates can have higher fees. The withdrawal phase starts when a consumer begins taking annual lifetime withdrawals. The maximum amount of lifetime withdrawals that a consumer can take each year (without incurring a reduction in their benefit base) is calculated as a percentage of the consumer’s benefit base at the time of the first lifetime withdrawal. The VA/GLWBs and CDAs we reviewed had withdrawal percentages for a single life that ranged from 3 percent to over 8 percent of the benefit base, and the percentages were generally lower for younger consumers and higher for older consumers. Some products also have a joint life option where, if the consumer dies, the guarantee of lifetime income passes to their spouse. When such an option is elected, withdrawal amounts are typically one-half a percent less than they would be otherwise. Step-ups, like those during the accumulation phase, can increase the benefit base after lifetime withdrawals have begun, increasing the amount of annual lifetime withdrawals. Typically, the insurance company will increase the benefit base if a consumer’s account value, net of withdrawals and fees, has increased above this amount. After a step-up during the withdrawal phase, the base and the lifetime maximum withdrawals cannot decline because of investment losses that reduce the consumer’s account value. Step-ups that occur after lifetime withdrawals begin allow consumers to benefit from investment gains that can offset the effects of inflation. 
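The calculation of the annual lifetime withdrawal amount can be illustrated with a simple sketch. The age bands and percentages below are hypothetical assumptions chosen to fall within the 3 to over 8 percent range described above; the half-percentage-point reduction for the joint life option reflects the typical reduction noted in the text, not a specific product’s terms.

```python
def annual_lifetime_withdrawal(benefit_base, age_at_first_withdrawal,
                               joint_life=False):
    """Maximum annual lifetime withdrawal for a hypothetical product.

    The age bands and percentages are invented for illustration;
    products reviewed ranged from 3 percent to over 8 percent, with
    lower percentages generally applying to younger consumers.
    """
    if age_at_first_withdrawal < 65:
        percentage = 0.04
    elif age_at_first_withdrawal < 75:
        percentage = 0.05
    else:
        percentage = 0.06
    if joint_life:
        # Joint life option: typically about one-half of a percentage
        # point less than the single-life withdrawal percentage.
        percentage -= 0.005
    return benefit_base * percentage

# A consumer with a $200,000 benefit base starting withdrawals at 66.
print(annual_lifetime_withdrawal(200_000, 66))
print(annual_lifetime_withdrawal(200_000, 66, joint_life=True))
```

The percentage is fixed at the time of the first lifetime withdrawal, so in this sketch delaying withdrawals into a higher age band permanently raises the annual amount, consistent with the age-based percentages described above.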
A consumer enters the insured phase only if their investment account value has been reduced to zero as a result of lifetime withdrawals; investment losses; or any expenses, fees, or other charges. In such cases, the consumer’s benefit base on which lifetime withdrawals are determined (the amount to which the withdrawal percentage is applied) remains unchanged but the consumer’s investment account value, which was the source for the funds previously withdrawn, is zero. Consequently, the funds needed to continue paying the same level of benefits to the consumer (and spouse, if a joint life option is elected) would then come from the insurance company’s own assets, and consumers receive payments from the insurance company that are equal to their prior lifetime withdrawal amount. Once the insurance company begins paying the agreed-upon withdrawal payment, the fees that the consumer had been paying for that protection would cease, as would any investment management and other fees paid for other benefits. Once the insured phase begins, all rights and benefits under a VA/GLWB or CDA contract, except those related to continuing benefits, terminate. In addition, all lifetime withdrawal benefits will continue to be paid to the consumer on the established schedule and generally cannot be changed. For example, the right to make additional contributions and receive the benefit of future step-ups in the withdrawal amount would no longer be available to an owner once the account is depleted. One important structural difference between the two products is where the investment assets are held. Under a VA/GLWB, the consumer’s premiums are invested in funds, such as mutual funds, held in a separate account of the insurance company. The mutual funds may be managed by outside advisers, but the insurance company holds these assets for the benefit of consumers. By comparison, the investment assets covered by a CDA are held by the consumer in his or her own brokerage or investment advisory account and invested in investment funds, such as mutual funds, that the insurance company has agreed to cover under a CDA.
The investment assets are not owned—nominally or otherwise—by the insurance company issuing the guarantee. Like the investment options available under a VA/GLWB, the investment assets covered by a CDA may or may not be managed by the insurance company or an affiliate. In this way, consumers can accumulate retirement assets in a personal account, such as an individual retirement account (IRA), and obtain lifetime withdrawal guarantees without having to transfer those assets to an insurance company. Further, the base variable annuity contracts to which GLWB riders are attached provide additional benefits not available under CDAs. For example, a VA/GLWB contract permits the consumer to annuitize an account balance in the future and receive benefit payments from the insurance company for life at rates set forth in the annuity contract. Also, with a VA/GLWB, if a consumer dies before annuitizing the account balance or taking lifetime withdrawals, the death benefit payable to beneficiaries is typically the greater of the sum of premiums paid or the investment account value. The additional VA/GLWB benefits come with additional costs, however. For example, in addition to the guarantee fee, variable annuity contracts with death benefits entail additional fees to cover the cost to the insurer. For the CDAs we reviewed, the assets covered by the CDA could not be annuitized in the future. The assets would first have to be sold and the proceeds used to purchase a separate annuity. In addition, unlike a variable annuity contract, a CDA does not have a death benefit. Similar to other annuity products, VA/GLWBs and CDAs can provide consumers with the benefit of a guaranteed stream of income for life. That is, for a fee, they can ensure that consumers receive a minimum annual payment until they die, regardless of how long they live or how their investment assets might perform. 
These products may also lock investment gains into the benefit base on which future lifetime withdrawals are determined and, in the case of variable annuities, also offer other potentially beneficial features such as death benefits, which pass on certain guaranteed amounts to a spouse or other beneficiaries. Further, VA/GLWBs and CDAs typically offer consumers the ability to invest their assets in a variety of investment funds. A unique benefit of these products is that they allow consumers to receive income guarantees while still maintaining ownership of and access to their funds during the accumulation and withdrawal phases. With traditional annuity products, in order to receive lifetime income consumers must transfer assets to the insurer, which holds them in its general account and uses them to fund an annuity (they are annuitized). Consequently, once these assets are annuitized the consumer no longer has access to them. With VA/GLWBs and CDAs, however, the consumers’ assets are not annuitized. As a result, consumers can withdraw any or all of their funds at any time. This can benefit consumers should they need funds for unexpected uses, such as medical or other expenses. One benefit specific to CDAs is that the guarantee of lifetime withdrawals can, in certain cases, be applied to existing investment assets. That is, consumers who have existing investment assets may be able to purchase a CDA to cover those assets, if an insurer agrees to cover those assets under a CDA. If the same consumers wanted to use those assets to obtain a guaranteed stream of income through a traditional annuity, they would have to first sell the investment assets and then use the proceeds to purchase the annuity. Several insurers and regulators we spoke with said that VA/GLWBs and CDAs are complex products, and emphasized the importance of obtaining professional financial advice before purchasing these products and making key decisions.
Consumers that purchase VA/GLWBs and CDAs can face risks similar to those they may face with the purchase of other financial products. These risks include purchasing an unsuitable product, paying too much, making withdrawal decisions that decrease benefits, and having an insurer become insolvent before benefits are received. First, consumers face the risk of purchasing an unsuitable product if they do not understand how a particular product functions and meets their own needs, including how much might be appropriate to invest in a particular product. Insurers with whom we spoke said that VA/GLWBs and CDAs are generally attractive to middle-income consumers who want more control and flexibility over the investments they are relying on to provide retirement income. In addition, one insurer with whom we spoke said that consumers investing in such products have generally had around $500,000 in retirement savings and invested around 20 percent of that amount. Second, consumers may face the risk of being unable to determine if they are obtaining the best price for similar benefits provided by different insurers. Several insurers and regulators told us that because of the uniqueness of VA/GLWB and CDA products, it would be difficult to compare one insurer’s product to that of another insurer. For example, products can function in slightly different ways, have different combinations of features, and charge different amounts for the guarantees. As a result, consumers would find it difficult to take a price quoted to them from one insurer for a specific product with specific features, then compare that to a product from another insurer to determine if they could receive similar benefits at a lower price. Third, VA/GLWBs and CDAs pose a risk that certain decisions by consumers in withdrawing their assets, such as the timing and size of the withdrawals, could affect them negatively. 
For example, these decisions can reduce or even eliminate guaranteed benefits and result in additional fees that further reduce their assets. The investment assets underlying VA/GLWB and CDA guarantees are typically available for withdrawal before guaranteed lifetime withdrawals begin, but taking withdrawals too early or above certain thresholds can result in financial penalties and deplete assets, as the following examples illustrate: Variable annuities are frequently sold without any up-front sales charges but impose contingent deferred sales charges, or “surrender” charges, on withdrawals taken during the early years of the contract (for example, the first 7 years) that are above a “free” withdrawal limit. Surrender charges on variable annuities are expressed as a series of percentages that decline over time and are applied to annual withdrawals that are typically greater than either accumulated earnings or 10 percent of premiums paid. Because CDAs split insurance and investment elements of the guarantee, there are no surrender charges on the CDA contract. However, a surrender charge may apply when a consumer sells shares of mutual funds covered by the CDA contract. In addition to sales charges, income taxes and an additional 10 percent penalty tax on withdrawals taken before age 59½ may apply. With some VA/GLWB products, taking withdrawals of any amount after reaching a minimum age can trigger the beginning of lifetime withdrawals (at a specified percentage) and stop all future roll-ups of the benefit base. In addition, a consumer who takes withdrawals sooner than initially planned can experience a permanent reduction of the lifetime withdrawal amount. For example, for one of the products we reviewed a withdrawal at age 64 triggers the establishment of a 4 percent lifetime withdrawal percentage, but waiting for a year would raise the percentage to 5 percent.
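The surrender-charge mechanics described above can be sketched as follows. The 7-year declining schedule and the 10 percent free-withdrawal allowance are typical examples drawn from the description above, not the terms of any specific contract.

```python
def surrender_charge(withdrawal, free_amount, contract_year,
                     schedule=(0.07, 0.06, 0.05, 0.04, 0.03, 0.02, 0.01)):
    """Contingent deferred sales ("surrender") charge sketch.

    The declining 7-year schedule and the free-withdrawal amount are
    illustrative assumptions, not a specific contract's terms.
    """
    if contract_year >= len(schedule):
        return 0.0  # no charge after the surrender period ends
    # Only the portion of the withdrawal above the "free" limit
    # (for example, 10 percent of premiums paid) is charged.
    charged_amount = max(0.0, withdrawal - free_amount)
    return charged_amount * schedule[contract_year]

# A $20,000 withdrawal in contract year 2, with a $10,000 free amount
# (10 percent of $100,000 in hypothetical premiums paid).
print(surrender_charge(20_000, 10_000, 2))
```

In this sketch only the $10,000 of the withdrawal above the free limit is charged, at the year-2 rate of 5 percent; the same withdrawal after the surrender period would incur no charge.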
Further, starting lifetime withdrawals during a downturn in the market can have a negative effect on lifetime income because the investment account balance will be that much lower than the benefit base, making the possibility of a step-up, and a possible increase in the withdrawal percentage and amount, less likely. The age at which lifetime withdrawals begin can have an impact on the value of other benefits and guarantees. For example, withdrawals and the related charges can reduce the cash value and death benefits under a variable annuity contract, leaving fewer assets available for surviving spouses and other beneficiaries. On the other hand, waiting too long to withdraw benefits can result in not living long enough to benefit from the product’s guarantees or receiving enough income to offset the amount of fees paid. For example, a person who first purchases a VA/GLWB or CDA at age 60 and waits until age 85 to begin taking lifetime withdrawals will be able to take a higher amount, but likely for a shorter period of time. Further, the purchaser will have paid fees for 25 years to protect against outliving his assets, a possibility that becomes less likely over time. In addition to deciding when to begin taking lifetime withdrawals, consumers need to decide how much to withdraw and need to be aware of certain product features when doing so. For example, lifetime withdrawals above the maximum annual amount specified in the withdrawal guarantee are permitted and are referred to as “excess withdrawals.” However, such withdrawals typically reduce the benefit base to which withdrawal percentages are applied, thus reducing future annual withdrawal amounts. Further, guarantee fees and other charges are typically not counted as withdrawals for purposes of calculating the maximum for a given year, but some VA/GLWB and CDA products we reviewed counted certain fees and charges or those above a certain threshold toward the annual lifetime withdrawal maximum. 
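The effect of an excess withdrawal on the benefit base can be illustrated with a short sketch. The pro-rata reduction shown here is one common approach, used as an assumption for illustration; actual contracts compute the reduction in different ways.

```python
def benefit_base_after_withdrawal(benefit_base, account_value,
                                  withdrawal, annual_maximum):
    """Effect of a withdrawal on the benefit base.

    Withdrawals up to the annual maximum leave the base unchanged.
    The pro-rata treatment of the excess portion is an assumed,
    common approach; contracts vary.
    """
    excess = max(0.0, withdrawal - annual_maximum)
    if excess == 0.0:
        return benefit_base
    # Account value remaining after the permitted portion is taken.
    remaining = account_value - min(withdrawal, annual_maximum)
    # Cut the base by the same proportion the excess removes from
    # the remaining account value.
    return benefit_base * (1 - excess / remaining)

# $120,000 benefit base, $80,000 account value, $5,000 annual maximum.
print(benefit_base_after_withdrawal(120_000, 80_000, 5_000, 5_000))
print(benefit_base_after_withdrawal(120_000, 80_000, 12_500, 5_000))
```

In the second call, the $7,500 excess removes 10 percent of the remaining account value, so under this assumed approach the benefit base, and with it all future annual lifetime withdrawals, falls by 10 percent as well.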
Finally, as with the purchase of any long-term insurance product, consumers can face some level of risk that by the time the consumer needs the benefit promised by the insurer, the insurer may not be able to provide it. However, insurers and insurance regulators take a number of steps to ensure that insurers remain financially solvent. In addition, in the event of an insurer’s insolvency, state insurance guaranty funds can help pay consumers what was promised by insurers, up to any coverage limits of those guaranty funds. According to officials from the National Organization for Life and Health Guaranty Associations (NOLHGA), while promises made under GLWBs, which are distinct from the investment portion of a variable annuity, are generally covered by state guaranty funds, such certainty does not exist with respect to CDAs. As a result, a risk exists that if an insurer who sold a CDA became insolvent, consumers owning those CDAs might not collect any promised benefits. As with the sale of life insurance products in general, insurers must manage the financial risks associated with VA/GLWBs and CDAs in order to ensure their ability to make promised payments and, depending on the amount of such products they sell, their financial solvency. Risks to insurers can arise when investment returns, interest rates, consumer longevity, and consumer behavior are different from what they expected. While a number of insurers and NAIC officials said that VA/GLWBs and CDAs do not pose undue risk to insurers, as we noted, at least one major insurer has decided not to sell CDAs because of the potential risk involved, and another insurer told us that they do not sell VA/GLWBs because they do not fit with the company’s risk profile. 
Insurance company officials with whom we spoke told us that in designing their products they consider not only the features that will help to meet consumer needs, but also the company’s own appetite for certain risks, the methods for managing those risks, and the price charged for the products’ guarantees. According to insurers with whom we spoke, ways in which insurers can use product design to help manage their risk can include the following: Establishing the minimum age at which consumers can begin taking withdrawals and the extent to which consumers can benefit from growth in the investments protected by the guarantee. Specifying which investments consumers may have covered by the products’ guarantees. For example, one CDA product we reviewed limited a consumer’s investments in equity funds to no more than 80 percent of the consumer’s total investment account value. Another insurer that sells CDAs told us that they were only willing to cover index funds for major, highly traded indices, such as the Standard and Poor’s 500. Determining the formulas used to rebalance consumers’ investments into and out of fixed-income funds to mitigate some of the financial risks associated with providing lifetime withdrawal guarantees. Prospectuses for these VA/GLWB products disclose that the automatic rebalancing feature may limit the consumer’s participation in future market gains and, therefore, the potential for future increases in their annual lifetime income. Insurers will also price their products to ensure they have sufficient revenue and capital to pay for the expenses they expect to incur related to the products they sell. That is, they will charge more for products that they deem to be more risky or on which they expect to incur greater costs. With respect to VA/GLWBs and CDAs, for example, insurers may charge higher fees for products with features that can result in a higher guaranteed benefit base.
Insurers also use hedging programs to help further manage the investment risks of the assets underlying the VA/GLWB and CDA. Hedging involves buying financial instruments, such as options, to offset the potential loss on an investment. To the extent that product design and hedging are not sufficient, insurers can also manage financial risks by modifying their products after they have been purchased by consumers to the extent permitted by their contracts, their prior disclosure, and applicable law. For the VA/GLWB and CDA products we reviewed, insurers sometimes reserve certain rights, such as the right to determine which investment funds will be covered by a GLWB rider or CDA guarantee and the conditions surrounding the allocation of a consumer’s investment assets, to change the frequency and amount of a guarantee fee, or to reject additional contributions or transfers. Some insurers have recently raised fees on VA/GLWB guarantees or stopped accepting additional contributions to existing contracts in response to changing market conditions. Insurers generally cannot change the roll-up, step-up, or withdrawal rates for existing contract holders, but can make these changes prospectively for new customers. Federal disclosure and suitability regulations apply to the offer, recommendation, and sale of securities products such as variable annuities, including variable annuities with GLWB riders. While it has long been accepted that variable annuities constitute securities under the federal laws, because CDAs are a relatively new product, analysis under the federal securities law is less developed. However, SEC officials have said that CDAs currently being offered in the retail market are being registered as securities and are therefore covered by federal securities law and state insurance regulations.
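The hedging approach described earlier in this paragraph, buying options to offset potential investment losses, can be illustrated with a stylized put-option example. The portfolio size, strike, and market move below are invented, and real hedging programs are far more complex than a single option held to expiry.

```python
def put_payoff(strike, price):
    """Payoff of a put option at expiry: max(strike - price, 0)."""
    return max(strike - price, 0.0)

# Stylized example with invented figures: an insurer with $1 million
# of equity exposure buys a put struck at the portfolio's starting
# value, so a market decline is offset by the option payoff
# (ignoring the premium paid for the option).
portfolio_start = 1_000_000.0
portfolio_end = portfolio_start * (1 - 0.25)  # hypothetical 25 percent drop
hedge_payoff = put_payoff(portfolio_start, portfolio_end)
print(portfolio_end + hedge_payoff)  # 1000000.0
```

At this strike the decline is fully offset; in practice insurers weigh the cost of such protection against the risk retained, which is one reason guarantee fees vary across products.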
In addition, NAIC has developed annuity disclosure and suitability regulations for use at the state level, but state adoption and protection levels vary, so CDA consumers may not be uniformly protected. Further, questions about the adequacy of existing annuity regulation and the applicability of state insurance guaranty funds for CDA consumers if insurers become insolvent have prompted industry oversight reviews, which were ongoing as of October 2012. Federal disclosure and suitability regulations apply to the offer, recommendation, and sale of securities products such as variable annuities, including variable annuities with GLWB riders. These requirements aim to inform consumers about products and ensure that the products themselves are reasonably appropriate for consumers before they are purchased. As previously noted, while it has long been accepted that variable annuities constitute securities under federal law, CDAs are a relatively new product and analysis under the federal securities law is less developed. To our knowledge, no court has addressed the securities law treatment of CDAs, and SEC has not provided written guidance with respect to the status of CDAs under the federal securities laws. However, according to SEC officials, existing CDAs have been registered under the Securities Act of 1933, absent specific exemptions from registration such as the exemption for contracts sold to certain types of tax-qualified retirement plans, and are therefore covered by federal securities law and state insurance regulations. The 1933 Act generally requires issuers of securities that are offered to the public to register them with SEC and make certain disclosures through a prospectus that has been filed with SEC. The Investment Company Act of 1940 (1940 Act) generally requires investment companies, including separate accounts that fund variable annuities, to be registered under that Act.
Pursuant to these requirements, SEC has issued rules and standards for prospectuses offering registered variable annuities. The purpose of these requirements is to ensure that investors receive descriptive information on basic product features, fees, benefits, and risks that can help inform their investment decisions. Disclosures made through prospectuses filed with SEC under the federal securities laws are subject to the anti-fraud provisions of the 1933 Act, which prohibit material misrepresentations or omissions and provide a general anti-fraud remedy for purchasers and sellers of securities. SEC staff has recommended consideration of rulemakings that would apply a fiduciary standard to both broker-dealers and investment advisers no less stringent than that applied currently to investment advisers. The SEC staff also identified certain areas where laws and regulations that apply to broker-dealers and investment advisers differ, and recommended the Commission consider whether these areas should be harmonized for the benefit of retail investors. See SEC, Staff Study on Investment Advisers and Broker-Dealers, as Required by Section 913 of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Washington, D.C.: January 2011). FINRA rules that apply to the sale of these products include the following: FINRA Rule 2090, which requires broker-dealers to ask about and retain essential information on each customer (“Know Your Customer”) and concerning the authority of each person acting on behalf of the customer. FINRA Rule 2111, which requires broker-dealers to have a reasonable basis to believe that a recommended securities transaction or investment strategy is suitable for the customer based on information obtained through reasonable diligence of the broker-dealer or its associated person to ascertain the customer’s investment profile.
This profile includes, but is not limited to, the customer’s other investments, financial situation and needs, tax status, investment objectives, investment experience, investment time horizon, liquidity needs, risk tolerance, and any other information the customer may disclose to the broker-dealer or associated person in connection with the recommendation. National Association of Securities Dealers (NASD) Rule 2210, which regulates broker-dealers’ communications with the public and applies to, among other things, variable annuity advertisements. Broker-dealers must file retail communications concerning variable annuities with FINRA and respond to comments provided by FINRA staff on such communications. FINRA Rule 2330, specific to deferred variable annuities, which governs member broker-dealers’ compliance and supervisory responsibilities with respect to the recommendation of the initial purchase or exchange of deferred variable annuities, and initial subaccount allocations. These requirements apply to the recommendation and sale of variable annuity products, including those with guaranteed lifetime withdrawal benefits. According to SEC officials, issuers of CDAs have been registering offerings of their CDA products with SEC, and SEC is reviewing the disclosures associated with these products. While we did not observe and were not told of any such instances, if CDAs were sold without being registered as securities, SEC could take action if it determined that securities laws were being violated. If it were determined that federal securities laws did not apply to CDA offerings, relevant state regulations would still apply. In addition, FINRA officials said that they regulate member broker-dealers that offer variable annuity products, including variable annuities with GLWB riders. If a broker-dealer offers a CDA that is registered under the federal securities laws, FINRA also regulates the sale of such a CDA.
If state insurance law issues arise in connection with a firm’s sale of a CDA, FINRA staff may refer these issues to relevant state insurance regulators. The NAIC committee responsible for life insurance and annuities issues considers CDAs to be insurance products and has developed annuity disclosure and suitability model regulations for use by state insurance regulators. NAIC has developed these model disclosure and suitability regulations for annuity products in collaboration with consumer advocates and industry experts. However, unlike federal standards that are consistently applied to variable annuities regardless of where consumers live, state adoption of these model regulations varies, so protections may be stronger in some states than in others. In 2011, NAIC developed a model disclosure regulation for annuity products that serves several functions for consumers and insurers, including: helping ensure that consumers understand annuity products by explaining basic features, benefits, and fees; suggesting that annuity products with guaranteed income or benefit provisions are intended to be longer-term investment products; providing guidance for insurers when they choose to develop product illustrations intended to help consumers better understand how a particular annuity product works. In particular, the guidance provides illustration formats and specifies what kinds of illustration disclosures must be made when an insurer chooses to develop them; and requiring that annuity customers be provided or referred to NAIC’s Annuity Buyer’s Guide that also contains general product information and provides answers to basic questions about risks and investing that consumers can use to decide whether these products are right for them. 
In addition to its model disclosure regulation, NAIC developed a model suitability regulation in 2003 to help ensure that insurers consider the financial needs and objectives of consumers and that these needs are appropriately addressed at the time annuity sales or exchanges occur. More specifically, the model suitability regulation requires insurers and insurance agents to inquire about consumers’ suitability information and ensure that they have reasonable grounds to believe that annuity products would benefit consumers before recommending purchases or exchanges. In 2006, NAIC revised the model regulation to expand its scope to consumers of all ages, and in 2010 the model regulation was again revised to further strengthen a number of annuity suitability protections. Among other changes, the 2010 revision requires insurers to be responsible for compliance with the model regulation whether or not they contract suitability functions out to a third party; maintain procedures for reviewing each investment recommendation made to consumers and helping ensure that the product is suitable for the particular purchaser and that the purchaser understands the annuity product recommended; ensure that insurance agents, or producers, complete general annuity training before selling annuity products; and provide product-specific annuity training and training materials to the agent or producer before an agent or producer can solicit the sale of a product. In addition to these changes, NAIC adopted the revised model regulation to set standards and procedures for suitable annuity recommendations and to require insurers to establish a system to supervise recommendations so that the insurance needs and financial objectives of consumers are appropriately addressed. The revised NAIC model regulation includes a “safe harbor” provision that is intended to prevent duplicative suitability standards from being applied to sales of annuities through FINRA member broker-dealers.
Annuity sales made in compliance with FINRA requirements are deemed to comply with the suitability requirements outlined in NAIC’s model regulation. Violations of state law developed from the model regulation can result in remedies for consumers and penalties for insurers and agents. Under the model regulation, state insurance commissioners can require reasonably appropriate corrective action for violations that have harmed consumers. NAIC information shows that, as of October 2012, state adoption of NAIC’s disclosure and suitability regulations varied significantly, meaning that consumers in some states may not be protected as well as those in other states. According to NAIC, many states have adopted some form of annuity disclosure regulation, although some states and the District of Columbia do not have disclosure protections in place. As of October 2012, NAIC officials were working to determine which states had adopted the revised model disclosure regulation. In terms of the suitability model regulation, 19 states plus the District of Columbia have adopted NAIC’s most recent model regulation that incorporates the added protections noted above, and another 29 states have adopted other suitability protections for annuity consumers. The remaining 2 states do not have suitability regulations in place. According to NAIC officials, model disclosure and suitability standards are not required as part of NAIC’s accreditation program. Figure 2 summarizes the state adoption of suitability regulations. Whether or not states have adopted NAIC disclosure and suitability regulation is important for CDA consumers. Unlike federal regulations that apply to the sale of VA/GLWBs nationwide, disclosure and suitability regulations for CDAs may depend on the actions of individual states and the extent to which they have implemented these protections. According to NAIC, regulatory action by states can take time to occur and depends on legislative cycles and the political environment of states.
As the information in figure 2 shows, consumers across states are subject to different suitability protection and in some cases to no protection at all. The different regulatory approaches among the sample of seven states we reviewed also show the variation in regulation of CDAs across states. Although most states in our sample have not specifically approved CDA sales, most have adopted some form of disclosure and suitability regulation for annuity products. Among these seven states, three have adopted NAIC’s model disclosure regulation and three have adopted NAIC’s most recent suitability regulation. For the states that allow CDA sales, the consumer protections found in the model regulations are critical. Of the two states that allow CDA sales—Iowa and Ohio—both have adopted NAIC’s model disclosure regulation, but only Iowa has adopted its most recent model suitability regulation. Ohio, according to NAIC information, has passed a previous version of NAIC’s model suitability regulation, and therefore has not adopted added protections found in the revised model regulation as outlined above. Consumers in states that have not adopted NAIC’s model regulations may not be benefitting from available disclosure and suitability protections. More specifically, states that have not adopted the most recent model suitability regulations may not be extending to consumers protections developed through NAIC’s 2010 suitability revision. Table 1 shows the variation in the disclosure and suitability protections across our seven sample states. Some industry participants suggest state insurance regulation and existing actuarial guidance may adequately address risks to insurers offering CDAs and to consumers. Others said that CDAs may pose solvency risks for both because insurers offer consumers an income guarantee but do not maintain the assets on which the guarantees are made. 
One major insurer has said that CDAs pose significant enough pricing and reserving challenges that it does not offer them. In addition, two consumer advocates with whom we spoke highlighted the solvency risks CDAs pose for insurers. One advocate suggested that reasonable and appropriate insurer reserving and capital requirements do not currently exist for CDAs and that considerable NAIC and state regulatory work would be needed to develop them. The same advocate said that selling CDA products before key issues concerning the regulatory framework are finalized might expose consumers to risks resulting from an insurer’s potential insolvency. The advocate concluded that the potential for insurers to increase the marketing and sale of CDAs, given the growing needs of retirees, makes having consumer and insurer protections in place important. Potential risks to CDA consumers and insurers have prompted industry oversight reviews of these products and their regulation. Although NAIC has determined that CDAs are a life insurance product, it is working with state regulators, insurers, and consumer advocates through its CDA Working Group, formed in March 2012, to build greater consensus around the classification of CDAs and to determine whether any adjustments to state regulation might be appropriate. In particular, NAIC is evaluating the adequacy of existing state annuity laws and regulations for CDA sales, including those on insurer solvency, such as capital adequacy and reserve requirements. According to NAIC officials, both NAIC and state insurance regulators recognize the complexity of CDA products for consumers and are also working to revise disclosure and suitability practices where appropriate. Another industry review, by NOLHGA, NAIC, and state insurance regulators, aims to address whether CDA consumers are protected by state insurance guaranty funds in the event of an insurer’s insolvency.
According to NOLHGA officials, variable annuities are not covered by guaranty funds because they are indistinguishable from equity products. That is, they are not supported by assets in an insurer’s general account, but by specific assets in separate accounts dedicated to the particular fund or funds chosen for the variable annuity. However, the officials said that guaranteed lifetime withdrawal benefits, which are now part of most variable annuity contracts, are distinct from the equity portion of a variable annuity and are generally covered by state guaranty funds. The officials said that while CDAs would also appear to have an equity portion and a guarantee portion, they have a committee studying the extent to which CDAs might be covered by state guaranty funds. Officials noted that even when the committee has reached a determination on guaranty fund protections, each state will have to reach its own conclusion about whether its particular guaranty fund laws allow for CDA coverage. We provided a draft of the report to SEC and NAIC and relevant excerpts to FINRA. Each provided technical comments that were incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Chairman of the Securities and Exchange Commission, the Chief Executive Officer of the National Association of Insurance Commissioners, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Alicia Puente Cackley at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.
To compare the features of variable annuities with guaranteed lifetime withdrawal benefits (VA/GLWB) and contingent deferred annuities (CDA), we analyzed specific insurance company products to obtain information on how the products function, including how investment gains and losses are treated, how withdrawal amounts are determined, and what happens when a consumer’s investment account is depleted. We also interviewed insurance company officials to verify our understanding of their products, and VA/GLWBs and CDAs in general. We judgmentally selected the companies based on criteria that included their market share of variable annuity sales and their decisions to sell or not to sell CDAs. To identify potential benefits and risks to consumers, we analyzed the product information we obtained and also interviewed insurers, consumer advocates, and state insurance regulators. To understand the potential risks these products pose to insurers and how they manage these risks, we also interviewed NAIC officials and reviewed information from insurers and stakeholder groups. We also obtained data from industry organizations on the sale of annuity products with guaranteed lifetime withdrawals. We discussed the sources and reliability of the data with officials from these organizations and found the data sufficiently reliable for the purposes of this report. To determine how VA/GLWBs and CDAs are regulated and the extent to which regulation addresses identified concerns, we identified regulations and processes used by federal and state regulators, as well as any proposed regulations, and compared them with the risks to consumers that we identified as part of the work under the previous objective. Our review of state regulation included model laws developed by the National Association of Insurance Commissioners (NAIC), specific state regulations, and Securities and Exchange Commission (SEC) and Financial Industry Regulatory Authority (FINRA) rules. 
We selected the sample of states for our analysis based on the volume of sales of VA/GLWBs and CDAs in the state, whether the state allowed the sale of CDAs, and the state’s population. We interviewed NAIC, state, SEC, and FINRA officials to determine how VA/GLWBs and CDAs are regulated. We also interviewed other stakeholder groups, such as consumer advocates and industry organizations, to gain their perspective on issues related to regulation of lifetime income products considered in our review. We conducted this performance audit from February 2012 to November 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Patrick A. Ward (Assistant Director), Emily R. Chalmers, Nima Patel Edwards, Scott E. McNulty, and Steve Ruszczyk made key contributions to this report. Also contributing to this report were Pamela Davidson, Marc Molino, Patricia Moye, and Frank Todisco.
As older Americans retire, they may face rising health care costs, inflation, and the risk of outliving their assets. Those entering retirement today typically face greater responsibility for managing their retirement savings than those who retired in the past. Lifetime income products can help older Americans ensure they have income throughout their retirement. VA/GLWBs and CDAs, two such products, may provide unique benefits to consumers. According to industry participants, while annuities with GLWBs have been sold for a number of years, CDAs are relatively new and are not widely available. GAO was asked to review issues relating to these financial products. This report (1) compares the features of VA/GLWBs and CDAs and examines potential benefits and risks to consumers and potential risks to insurers, and (2) examines the regulation of these products and the extent to which regulations address risks to consumers. GAO analyzed insurance company product information, proposed and final rules and regulations, and studies and data related to retirement and product sales. GAO also interviewed federal and state regulators and selected insurers, consumer advocates, and industry organizations. GAO provided a draft of this report to NAIC and SEC. Both provided technical comments, which have been addressed in the report, as appropriate. Annuities with guaranteed lifetime withdrawals can help older Americans ensure they do not outlive their assets, but do present some risks to consumers. Two such products, variable annuities with guaranteed lifetime withdrawal benefits (VA/GLWB) and contingent deferred annuities (CDA), share a number of features but have some important structural differences. 
For example, both provide consumers with access to investment assets and the guarantee of lifetime income, but while VA/GLWB assets are held in a separate account of the insurer for the benefit of the annuity purchaser, the assets covered by a CDA are generally held in an investment account owned by the CDA purchaser. Consumers can benefit from these products by having a steady stream of income regardless of how their investment assets perform or how long they live, while at the same time maintaining access to their assets for unexpected or other expenses. VA/GLWBs and CDAs are complex products that present some risks to consumers and require them to make multiple important decisions. For example, consumers might purchase an unsuitable product or make withdrawal decisions that could negatively affect their potential benefits. Several insurers and regulators GAO spoke to said it was important for consumers to obtain professional financial advice before purchasing these products and making key decisions. These products can also create risks for insurers that, if not addressed, could ultimately affect their ability to provide promised benefits to consumers. VA/GLWBs are considered to be both securities and insurance products, and are therefore covered by both federal securities regulations and state insurance regulations. For CDAs, the National Association of Insurance Commissioners committee responsible for life insurance and annuities products has determined them to be life insurance products subject to state law and regulation for annuities. According to SEC officials, existing CDAs have been registered as securities with SEC, and therefore are covered by both federal securities laws and regulations, and state insurance regulations. At the state level, NAIC has developed state disclosure and suitability regulations for annuity products.
However, states differ in the extent to which they have adopted these annuity regulations, and some have no such protections at all. As a result, consumers in states that have adopted different regulations may benefit from different levels of protection. NAIC and state regulators told GAO that they are currently reviewing the regulation of CDAs. In March 2012, NAIC began reviewing existing annuity regulations to determine whether any changes are needed to address the unique product design features of CDAs, including potential modifications to annuity disclosure and suitability standards. It is also reviewing what kinds of capital and reserving requirements may be needed to help insurers manage product risk. In addition, NAIC and the National Organization of Life and Health Insurance Guaranty Associations are each working to determine whether state insurance guaranty funds, which protect consumers in the event insurers become insolvent, cover CDA products. Both agree that each state will have to reach its own conclusion about whether its particular state guaranty fund laws allow for CDA coverage. Until these regulatory issues are resolved, consumers may not be fully protected.
Today the Medicare program faces a long-range and fundamental financing problem driven by known demographic trends and projected escalation of health care spending beyond general inflation. The lack of an immediate crisis in Medicare financing affects the nature of the challenge, but it does not eliminate the need for change. Within the next 10 years, the first baby boomers will begin to retire, putting increasing pressure on the federal budget. From the perspectives of the program, the federal budget, and the economy, Medicare in its present form is not sustainable. Acting sooner rather than later would allow changes to be phased in so that the individuals who are most likely to be affected, namely younger and future workers, will have time to adjust their retirement planning while helping to avoid related “expectation gaps.” Since there is considerable confusion about Medicare’s current financing arrangements, I would like to begin by describing the nature, timing, and extent of the financing problem. As you know, Medicare consists of two parts—HI and SMI. HI, which pays for inpatient hospital stays, skilled nursing care, hospice, and certain home health services, is financed by a payroll tax. Like Social Security, HI has always been largely a pay-as-you-go system. SMI, which pays for physician and outpatient hospital services, diagnostic tests, and certain other medical services, is financed by a combination of general revenues and beneficiary premiums. Beneficiary premiums pay for about one-fourth of SMI benefits, with the remainder financed by general revenues. These complex financing arrangements mean that current workers’ taxes primarily pay for current retirees’ benefits except for those financed by SMI premiums. As a result, the relative numbers of workers and beneficiaries have a major impact on Medicare’s financing. The ratio, however, is changing. In the future, relatively fewer workers will be available to shoulder Medicare’s financial burden. 
In 2002 there were 4.9 working-age persons (18 to 64 years) per elderly person, but by 2030, this ratio is projected to decline to 2.8. For the HI portion of Medicare, in 2002 there were nearly 4 covered workers per HI beneficiary. Under their intermediate 2003 estimates, the Medicare Trustees project that by 2030 there will be only 2.4 covered workers per HI beneficiary. (See fig. 1.) The demographic challenge facing the system has several causes. People are retiring early and living longer. As the baby boom generation ages, the share of the population age 65 and over will escalate rapidly. A falling fertility rate is the other principal factor underlying the growth in the elderly’s share of the population. In the 1960s, the fertility rate was an average of 3 children per woman. Today it is a little over 2, and by 2030 it is expected to fall to 1.95—a rate that is below replacement. The combination of the aging of the baby boom generation, increased longevity, and a lower fertility rate will drive the elderly as a share of total population from today’s 12 percent to almost 20 percent in 2030. Taken together, these trends threaten both the financial solvency and fiscal sustainability of this important program. Labor force growth will continue to decline and by 2025 is expected to be less than a third of what it is today. (See fig. 2.) Relatively fewer workers will be available to produce the goods and services that all will consume. Without a major increase in productivity, low labor force growth will lead to slower growth in the economy and slower growth of federal revenues. This in turn will only accentuate the overall pressure on the federal budget. This slowing labor force growth is not always recognized as part of the Medicare debate, but it is expected to affect the ability of the federal budget and the economy to sustain Medicare’s projected spending in the coming years. 
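The fiscal effect of the falling worker-to-beneficiary ratio can be made concrete with a simple back-of-the-envelope sketch of pay-as-you-go financing. Only the 2002 and projected 2030 covered-worker ratios come from the figures above; the annual cost per beneficiary is a hypothetical placeholder, not a Trustees estimate.

```python
# Illustrative only: in a pay-as-you-go program, each beneficiary's cost is
# split across the covered workers supporting that beneficiary, so a falling
# worker-to-beneficiary ratio raises the per-worker burden even if the cost
# per beneficiary stays flat.

def burden_per_worker(cost_per_beneficiary: float, workers_per_beneficiary: float) -> float:
    """Annual cost borne by each covered worker."""
    return cost_per_beneficiary / workers_per_beneficiary

COST = 10_000  # hypothetical annual HI cost per beneficiary (placeholder)

burden_2002 = burden_per_worker(COST, 4.0)  # nearly 4 covered workers per HI beneficiary in 2002
burden_2030 = burden_per_worker(COST, 2.4)  # projected 2.4 covered workers per beneficiary by 2030

print(f"2002: ${burden_2002:,.0f} per worker")
print(f"2030: ${burden_2030:,.0f} per worker")
print(f"Increase factor: {burden_2030 / burden_2002:.2f}x")
```

Holding benefits constant, the drop from 4 to 2.4 workers per beneficiary alone raises each worker's share by roughly two-thirds, before any growth in health care costs per beneficiary.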
The demographic trends I have described will affect both Medicare and Social Security, but Medicare presents a much greater, more complex, and more urgent challenge. Unlike Social Security, Medicare spending growth rates reflect not only a burgeoning beneficiary population, but also the escalation of health care costs at rates well exceeding general rates of inflation. The growth of medical technology has contributed to increases in the number and quality of health care services. Moreover, the actual costs of health care consumption are not transparent. Third-party payers largely insulate covered consumers from the cost of health care decisions. These factors and others contribute to making Medicare a greater and more complex fiscal challenge than Social Security. Current projections of future HI income and outlays illustrate the timing and severity of Medicare’s fiscal challenge. Today, the HI Trust Fund takes in more in taxes than it spends. Largely because of the known demographic trends I have described, this situation will change. Under the Trustees’ 2003 intermediate assumptions, program outlays are expected to begin to exceed program tax revenues in 2013. (See fig. 3.) To finance these cash deficits, HI will need to draw on the special-issue Treasury securities acquired during the years of cash surpluses. For HI to “redeem” its securities, the government will need to obtain cash through some combination of increased taxes, spending cuts, and/or increased borrowing from the public (or, if the unified budget is in surplus, less debt reduction than would otherwise have been the case). Neither the decline in the cash surpluses nor the cash deficits will affect the payment of benefits, but the negative cash flow will place increased pressure on the federal budget to raise the resources necessary to meet the program’s ongoing costs. 
This pressure will only increase when Social Security also experiences negative cash flow and joins HI as a net claimant on the rest of the budget. The gap between HI income and costs shows the severity of HI’s financing problem over the longer term. This gap can also be expressed relative to taxable payroll (the HI Trust Fund’s funding base) over a 75-year period. This year, under the Trustees’ 2003 intermediate estimates, the 75-year actuarial deficit is projected to be 2.40 percent of taxable payroll—a significant increase from last year’s projected deficit of 2.02 percent. This means that to bring the HI Trust Fund into balance over the 75-year period, either program outlays would have to be immediately reduced by 42 percent or program income immediately increased by 71 percent, or some combination of the two. These estimates of what it would take to achieve 75-year trust fund solvency understate the extent of the problem because the program’s financial imbalance gets worse in the 76th and subsequent years. As each year passes, we drop a positive year and add a much bigger deficit year. The projected exhaustion date of the HI Trust Fund is a commonly used indicator of HI’s financial condition. Under the Trustees’ 2003 intermediate estimates, the HI Trust Fund is projected to exhaust its assets in 2026. This solvency indicator provides information about HI’s financial condition, but it is not an adequate measure of Medicare’s sustainability for several reasons. In fact, the solvency measure can be misleading and can serve to give a false sense of security as to Medicare’s true financial condition. Specifically, HI Trust Fund balances do not provide meaningful information on the government’s fiscal capacity to pay benefits when program cash inflows fall below program outlays. As I have described, the government would need to come up with cash from other sources to pay for benefits once outlays exceeded program tax income. 
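The 42 percent and 71 percent figures above follow directly from the size of the actuarial deficit relative to the program's summarized cost and income rates. The sketch below reproduces that arithmetic; the assumed 75-year summarized income and cost rates (3.38 and 5.78 percent of taxable payroll) are illustrative values chosen to be consistent with the 2.40 percent deficit, not figures quoted in this testimony.

```python
# Illustrative arithmetic behind the 75-year HI actuarial deficit of 2.40
# percent of taxable payroll. The summarized rates below are assumptions
# consistent with that deficit, not values from this testimony.

income_rate = 3.38  # assumed 75-year summarized income rate (% of taxable payroll)
cost_rate = 5.78    # assumed 75-year summarized cost rate (% of taxable payroll)

deficit = cost_rate - income_rate     # 2.40 percent of taxable payroll
outlay_cut = deficit / cost_rate      # fraction of outlays to cut for 75-year balance
income_rise = deficit / income_rate   # proportional income increase for 75-year balance

print(f"Actuarial deficit: {deficit:.2f} percent of taxable payroll")
print(f"Immediate outlay reduction needed: {outlay_cut:.0%}")
print(f"Immediate income increase needed: {income_rise:.0%}")
```

Because the same absolute gap is a larger fraction of the smaller income base than of the larger cost base, the required income increase (71 percent) exceeds the required outlay reduction (42 percent).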
In addition, the HI Trust Fund measure provides no information on SMI. SMI’s expenditures, which currently account for about 43 percent of total Medicare spending, are projected to grow even faster than those of HI in the near future. Moreover, Medicare’s complex structure and financing arrangements mean that a shift of expenditures from HI to SMI can extend the solvency of the HI Trust Fund, creating the appearance of an improvement in the program’s financial condition. For example, the Balanced Budget Act of 1997 modified the home health benefit, which resulted in shifting a portion of home health spending from the HI Trust Fund to SMI. Although this shift extended HI Trust Fund solvency, it increased the draw on general revenues and beneficiary SMI premiums while generating little net savings. Ultimately, the critical question is not how much a trust fund has in assets, but whether the government as a whole and the economy can afford the promised benefits now and in the future and at what cost to other claims on available resources. To better monitor and communicate changes in future total program spending, new measures of Medicare’s sustainability are needed. As program changes are made, a continued need will exist for measures of program sustainability that can signal potential future fiscal imbalance. Such measures might include the percentage of program funding provided by general revenues, the percentage of total federal revenues or gross domestic product (GDP) devoted to Medicare, or program spending per enrollee. As such measures are developed, questions would need to be asked about actions to be taken if projections showed that program expenditures would exceed the chosen level. Taken together, Medicare’s HI and SMI expenditures are expected to increase dramatically, rising from about 12 percent of federal revenues in 2002 to more than one-quarter by midcentury. 
The budgetary challenge posed by the growth in Medicare becomes even more significant in combination with the expected growth in Medicaid and Social Security spending. As shown in figure 4, Medicare, Medicaid, and Social Security have already grown from 13 percent of federal spending in 1962, before Medicare and Medicaid were created, to 42 percent in 2002. This growth in spending on federal entitlements for retirees will become increasingly unsustainable over the longer term, compounding an ongoing decline in budgetary flexibility. Over the past few decades, spending on mandatory programs has consumed an ever-increasing share of the federal budget. In 1962, prior to the creation of the Medicare and Medicaid programs, spending for mandatory programs plus net interest accounted for about 32 percent of total federal spending. By 2002, this share had almost doubled to approximately 63 percent of the budget. (See fig. 5.) In much of the past decade, reductions in defense spending helped accommodate the growth in these entitlement programs. However, even before the terrorist attacks of September 11, 2001, this ceased to be a viable option. Indeed, spending on defense and homeland security will grow as we seek to combat new threats to our nation’s security. GAO prepares long-term budget simulations that seek to illustrate the likely fiscal consequences of the coming demographic tidal wave and rising health care costs. These simulations continue to show that to move into the future with no changes in federal retirement and health programs is to envision a very different role for the federal government. Assuming, for example, that the tax reductions enacted in 2001 do not sunset and discretionary spending keeps pace with the economy, by midcentury federal revenues may not even be adequate to pay Social Security and interest on the federal debt.
Spending for the current Medicare program, without any additional new benefits, is projected to account for more than one-quarter of all federal revenues. To obtain budget balance, massive spending cuts, tax increases, or some combination of the two would be necessary. (See fig. 6.) Neither slowing the growth of discretionary spending nor allowing the tax reductions to sunset eliminates the imbalance. In addition, while additional economic growth would help ease our burden, the projected fiscal gap is too great for us to grow our way out of the problem. Indeed, long-term budgetary flexibility is about more than Social Security and Medicare. While these programs dominate the long-term outlook, they are not the only federal programs or activities that bind the future. The federal government undertakes a wide range of programs, responsibilities, and activities that obligate it to future spending or create an expectation for spending. A recent GAO report describes the range and measurement of such fiscal exposures, from explicit liabilities such as environmental cleanup requirements to the more implicit obligations presented by life-cycle costs of capital acquisition or disaster assistance. Making government fit the challenges of the future will require not only dealing with the drivers, such as entitlements for the elderly, but also looking at the range of other federal activities. A fundamental review of what the federal government does and how it does it will be needed. This involves looking at the base of all major spending and tax policies to assess their appropriateness, priority, affordability, and sustainability in the years ahead. At the same time, it is important to look beyond the federal budget to the economy as a whole. Figure 7 shows the total future draw on the economy represented by Medicare, Medicaid, and Social Security.
Under the 2003 Trustees’ intermediate estimates and the Congressional Budget Office’s (CBO) most recent long-term Medicaid estimates, spending for these entitlement programs combined will grow to 14 percent of GDP in 2030 from today’s 8.4 percent. Taken together, Social Security, Medicare, and Medicaid represent an unsustainable burden on future generations. Although real incomes are projected to continue to rise, they are expected to grow more slowly than has historically been the case. At the same time, the demographic trends and projected rates of growth in health care spending I have described will mean rapid growth in entitlement spending. Taken together, these projections raise serious questions about the capacity of the relatively smaller number of future workers to absorb the rapidly escalating costs of these programs. As HI trust fund assets are redeemed to pay Medicare benefits and SMI expenditures continue to grow, the program will constitute a claim on real resources in the future. As a result, taking action now to increase the future pool of resources is important. To echo Federal Reserve Chairman Alan Greenspan, the crucial issue of saving in our economy relates to our ability to build an adequate capital stock to produce enough goods and services in the future to accommodate both retirees and workers. The most direct way the federal government can raise national saving is by increasing government saving; that is, as the economy returns to a higher growth path, a balanced fiscal policy that recognizes our long-term challenges can help provide a strong foundation for economic growth and can enhance our future budgetary flexibility. It is my hope that we will think about the unprecedented challenge facing future generations in our aging society. Putting Medicare on a sustainable path for the future would help fulfill this generation’s stewardship responsibility to succeeding generations.
It would also help to preserve some capacity for future generations to make their own choices for what role they want the federal government to play. As with Social Security, both sustainability and solvency considerations drive us to address Medicare’s fiscal challenges sooner rather than later. HI Trust Fund exhaustion may be more than 20 years away, but the squeeze on the federal budget will begin as the baby boom generation begins to retire. This will begin as early as 2008, when the leading edge of the baby boom generation becomes eligible for early retirement. CBO’s current 10-year budget and economic outlook reflects this. CBO projects that economic growth will slow from an average of 3.2 percent a year from 2005 through 2008 to 2.7 percent from 2009 through 2013, reflecting slower labor force growth. At the same time, annual rates of growth in entitlement spending will begin to rise. Annual growth in Social Security outlays is projected to accelerate from 5.2 percent in 2007 to 6.6 percent in 2013. Annual growth in Medicare enrollees is expected to accelerate from 1.1 percent today to 2.9 percent in 2013. Acting sooner rather than later is essential to ease future fiscal pressures and also provide a more reasonable planning horizon for future retirees. We are now at a critical juncture. In less than a decade, the profound demographic shift that is a certainty will have begun. Despite a common awareness of Medicare’s current and future fiscal plight, pressure has been building to address recognized gaps in Medicare coverage, especially the lack of a prescription drug benefit and protection against financially devastating medical costs. Filling these gaps could add significant expenses to an already fiscally overburdened program. Under the Trustees’ 2003 intermediate assumptions, the present value of HI’s actuarial deficit is $6.2 trillion, a 20-percent increase from the prior year. 
This difficult situation argues for tackling the greatest needs first and for making any benefit additions part of a larger structural reform effort. The Medicare benefit package, largely designed in 1965, provides virtually no outpatient drug coverage. Beneficiaries may fill this coverage gap in various ways. According to the Medicare Current Beneficiary Survey, nearly two-thirds of Medicare beneficiaries had some form of drug coverage from a supplemental insurance policy, health plan, or public program at some point during 1999. All beneficiaries have the option to purchase supplemental policies—Medigap—when they first become eligible for Medicare at age 65. Those policies that include drug coverage tend to be expensive and provide only limited benefits. Some beneficiaries have access to coverage through employer-sponsored policies or private health plans that contract to serve Medicare beneficiaries. In recent years, coverage through these sources has become more expensive and less widely available. Beneficiaries whose incomes fall below certain thresholds may qualify for Medicaid or other public programs. More than one-third may lack drug coverage altogether. In recent years, prescription drug expenditures have grown substantially, both in total and as a share of all health care outlays. Prescription drug spending grew an average of 15.9 percent per year from 1996 to 2001, more than double the 6.5 percent average growth rate for health care expenditures overall. (See table 1.) As a result, prescription drugs account for a growing share of health care spending, rising from 6.5 percent in 1996 to 9.9 percent in 2001. By 2012, prescription drug expenditures are expected to account for almost 15 percent of total health expenditures. In 2002, CBO projected that the average Medicare beneficiary would use $2,440 worth of prescription drugs in 2003.
This is a substantial amount considering that some beneficiaries lack any drug coverage and others may have less coverage than in previous years. Moreover, significant numbers of beneficiaries have drug expenses much higher than those of the average beneficiary. CBO also estimated that, in 2005, 12 percent of Medicare beneficiaries would have expenditures above $6,000. In focusing on the need for prescription drug coverage, we should not forget that Medicare does not provide complete protection from catastrophic losses. Under Medicare, beneficiaries have no limit on their out-of-pocket costs attributable to cost sharing. In 1999, the most recent year for which data are available on the distribution of these costs, the average beneficiary who obtained services had a total liability of $1,700 for Medicare-covered services, consisting of $1,154 in Medicare copayments and deductibles in addition to $546 in annual Part B premiums. For beneficiaries with extensive health care needs, the burden can be much higher. In 1999, about 1 million beneficiaries were liable for more than $5,000, and about 260,000 were liable for more than $10,000 for covered services. In contrast, employer-sponsored health plans for active workers typically limited maximum annual out-of-pocket costs for covered services to less than $2,000 per year for single coverage. Recently, several proposals have been made to add a prescription drug benefit to the Medicare program. While different in scope and detail, the proposals have certain features in common, including use of a third-party entity to administer the new drug benefit. The remainder of my remarks will focus on the lessons learned from our work regarding the private sector’s use of such an entity to manage the drug benefits of insurers’ policyholders and health plans’ enrollees.
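Before turning to those lessons, the drug spending figures reported above can be cross-checked for internal consistency: if drug spending grew 15.9 percent a year while overall health spending grew 6.5 percent a year from 1996 to 2001, the drug share of the total should rise from 6.5 percent toward the reported 9.9 percent. A quick sketch of that check:

```python
# Consistency check on the reported growth rates and spending shares:
# the drug share of total health spending evolves by the ratio of the two
# growth factors, compounded over the 1996-2001 period.

drug_growth = 0.159   # average annual growth of prescription drug spending
total_growth = 0.065  # average annual growth of overall health spending
years = 5             # 1996 -> 2001

share_1996 = 6.5  # drug share of health spending in 1996 (percent)
share_2001 = share_1996 * ((1 + drug_growth) / (1 + total_growth)) ** years

print(f"Implied 2001 drug share: {share_2001:.1f} percent")
```

The implied 2001 share comes out to about 9.9 percent, matching the reported figure, so the growth rates and shares cited above are mutually consistent.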
Most employer-sponsored health plans contract with private entities, known as pharmacy benefit managers (PBM), to administer their prescription drug benefits, and those that do not contract with PBMs may have units in their organizations that serve the same administrative purpose. Typically, on behalf of the health plans, PBMs negotiate drug prices with pharmacies, negotiate rebates with drug manufacturers, process drug claims, operate mail-order pharmacies, and employ various cost-control techniques, such as formulary management and drug utilization reviews. In 2001, nearly 200 million Americans had their prescription drug benefits administered through PBMs. This year, we reported on the use of PBMs by health plans in the Federal Employees’ Health Benefits Program (FEHBP). In considering the application of these findings to Medicare, we are reminded that Medicare’s unique role and nature may temper how the strategies and potential efficiency gains afforded by private sector PBMs may be transferred to benefit the program. PBMs use purchasing volume to leverage their negotiations with pharmacies and drug manufacturers in seeking favorable prices in the form of discounts, rebates, or other advantages. Through negotiations, PBMs create networks of participating retail pharmacies, promising the pharmacies a greater volume of customers in exchange for discounted prices. PBMs may be able to secure larger discounts by limiting the number of network pharmacies. However, smaller networks provide beneficiaries fewer choices of retailers, thereby limiting convenient access. These are trade-offs health plans must consider in deciding how extensive a pharmacy network they want their PBMs to offer beneficiaries. The health plans we reviewed in our FEHBP study generally provided broad retail pharmacy networks. 
The average discounted prices PBMs obtained for drugs from retail pharmacies were about 18 percent below the average prices cash-paying customers without drug coverage would have paid for 14 selected widely used brand-name drugs. For 4 selected generic drugs, the PBM-negotiated retail pharmacy prices were 47 percent below the price paid by cash-paying customers. PBMs also use their leverage to negotiate with drug manufacturers for rebates. Rebates generally depend on the volume of a manufacturer’s products purchased. Health plans and PBMs can add to that volume by concentrating beneficiaries’ purchases for particular types of drugs with certain manufacturers. Health plans can steer their beneficiaries’ purchases to specific drugs through the use of a formulary—that is, a list of prescription drugs that health plans encourage physicians to prescribe and beneficiaries to use. Determining whether a drug should be on the formulary involves clinical evaluations based on a drug’s safety and effectiveness, and decisions on whether several drugs are therapeutically equivalent. Restricting the formulary to fewer drugs within a therapeutic class can provide the PBMs with greater leverage in negotiating higher rebates because they can help increase the manufacturer’s market share for certain drugs. However, a restricted formulary provides beneficiaries with fewer preferred drug alternatives and makes the policies governing coverage of nonformulary drugs or the cost sharing for them critical to beneficiaries. The FEHBP plans and PBMs we reviewed provided enrollees with generally nonrestrictive drug formularies across a broad range of drugs and therapeutic categories. The manufacturer rebates that the PBMs passed through to the FEHBP plans effectively reduced plans’ annual spending on prescription drugs by a range of 3 percent to 9 percent. 
The share of rebates PBMs passed through to the FEHBP plans varied subject to contractual agreements negotiated between the plans and the PBMs. PBMs also assisted the FEHBP plans by providing a less expensive mail-order drug option. Mail-order prices for the FEHBP plans we reviewed averaged about 27 percent lower than cash-paying customers would pay for the same quantity at retail pharmacies for 14 brand-name drugs and 53 percent lower for 4 generic drugs. The FEHBP plans generally had lower cost-sharing requirements for drugs purchased through mail order, particularly for more expensive brand-name drugs or maintenance medications for chronic conditions. The claims and information processing capabilities PBMs offered also helped the FEHBP plans to manage drug costs and monitor quality of care. PBMs maintain a centralized database on each enrollee’s drug history that can be used to review for potential adverse drug interactions or potentially less expensive alternative medications. They also use claims data to monitor patterns of patient use, physician prescribing practices, and pharmacy dispensing practices. Their systems provide “real-time” claims adjudication capabilities that allow a customer’s claim for a drug purchase to be approved or denied at the time the pharmacist begins the process of filling a prescription. Two plans in our FEHBP study reported savings ranging from 6 to 9 percent of the plan’s annual drug spending; the savings were associated primarily with real-time claims denials preventing early drug refills and safety advisories cautioning pharmacists about potential adverse interactions or therapy duplications. While Medicare’s sheer size would provide it with significant leverage in negotiating with pharmacies and drug manufacturers, doing so would represent a departure from traditional Medicare. 
Medicare beneficiaries represent less than 15 percent of the population but a disproportionately higher share—about 40 percent—of prescription drug spending. However, because of Medicare’s design and obligations as a public program, its current purchasing strategies vary considerably from those of the private sector. Any willing provider. In contrast with private payers’ reliance on selective contracting with providers and suppliers, the traditional Medicare program has generally allowed any hospital, physician, or other provider willing to accept Medicare’s reimbursements and requirements to participate in the program. With respect to drug purchasing in particular, private plans determine the extent of their enrollees’ access by the choices they make about the size of their participating pharmacy network and breadth of their drug formulary. Allowing any pharmacy willing to meet Medicare’s terms to participate or allowing all therapeutically equivalent drugs equal coverage on a formulary would restrict the program’s ability to secure advantageous prices. Moreover, health plans and PBMs currently make formulary determinations privately. In contrast, Medicare’s policies have historically been open to public comment. Administrative rate-setting. Whereas private health plans typically rely on price negotiations to establish payment rates, Medicare generally establishes payment rates administratively. As discussed earlier, Medicare’s rates often exceed market prices and this is the case for some of the few outpatient prescription drugs covered by Medicare. The program’s method of paying for these drugs is prescribed in statute: In essence, Medicare pays 95 percent of a drug’s “average wholesale price” (AWP). Despite its name, however, AWP is not necessarily a price that wholesalers charge and is not based on the price of any actual sale of drugs by a manufacturer. 
AWPs are published by manufacturers in drug price compendia, and Medicare bases providers’ payments on these published AWPs. Other public and private purchasers typically use the leverage of volume and competition to secure better prices. By statute, Medicaid, the nation’s health insurance program for certain low-income Americans, is guaranteed manufacturers’ rebates based on prices charged other purchasers. Certain other public payers can pay at rates set in the federal supply schedule, which uses verifiable confidential information on the prices drug manufacturers charge their “most favored” private customers. Manufacturers agree to these prices, in part, in exchange for the right to sell drugs to the more than 40 million Medicaid beneficiaries. Low-budget program administration. Duplicating the type of controls PBMs have exercised over private-sector drug benefits would likely involve devoting a larger share of total expenditures to administration than is spent by Medicare currently. Medicare’s administrative costs historically have been extremely low, averaging about 2 percent of the cost of the services themselves. This level of expenditure may not be consistent with the level needed to review the volumes of claims data associated with prescription drugs for the elderly or acquire and maintain the on-line systems and databases PBMs use to employ such utilization controls as real-time claims adjudication. The number of prescriptions for Medicare beneficiaries could easily exceed the current number of claims for all other services combined, or over 1 billion annually. Medicare would undoubtedly need assistance from external entities to administer a drug benefit, just as it has used insurers to process claims in the traditional program and Medicare+Choice plans to go further by also managing services and assuming risk. Decisions about the roles assigned an entity or entities and the latitude allowed them in carrying out those roles would be critical. 
These decisions would undoubtedly affect the benefit’s value to beneficiaries and the efficiencies and savings secured for both beneficiaries and taxpayers. Some of these decisions parallel those made by FEHBP plans that I discussed—trade-offs about beneficiaries’ interests in broad pharmacy networks and formularies versus potential savings. Others stem from the uniqueness of Medicare, its likely disproportionate share of the drug market, and its position as a public program requiring transparency and fairness. Insurers and PBMs have been successful in securing some savings on drug purchases by leveraging their volume to move market share from one product to another. Medicare’s leverage, given that purchases by the elderly constitute about 40 percent of the drug market, could be considerable. Yet such a large market share would also be likely to attract considerable attention. The administration of a Medicare drug benefit could then be subject to the same intensity of external pressures from interested parties regarding program prices and rules that can often inhibit the program from operating efficiently today. The potential for micromanagement could undermine the very flexibility PBMs have employed in negotiating prices and selecting preferred providers in order to generate savings. An alternative would be to sacrifice some of the program’s leverage and grant flexibility to multiple PBMs or similar entities so that any one entity would be responsible for administering only a share of the market. Contracting with multiple PBMs or similar entities, however, would pose other challenges. If each had exclusive responsibility for a geographic area, beneficiaries who wanted certain drugs could be advantaged or disadvantaged merely because they lived in a particular area. 
To minimize inequities, Medicare could, like some private sector purchasers, specify core benefit characteristics or maintain clinical control over formulary decisions instead of delegating those decisions to its contractors. If multiple PBMs or similar entities operated in a designated area, beneficiaries could choose among them to administer their drug benefits. These organizations would compete for consumers directly on the basis of differences in their drug benefit offerings and administration. This contrasts with the private sector where drug benefits are typically part of an overall insurance plan, and PBMs typically compete for contracts with insurers or other purchasers. Competition could be favorable to beneficiaries if they were adequately informed about differences among competing entities offering drug benefits and shared in the savings. However, adequate oversight would need to be in place to ensure that fair and effective competition was maintained. For example, a means to ensure that beneficiaries received comprehensive user-friendly information about policy and benefit differences among competing entities would be necessary. Monitoring marketing and customer recruitment strategies and holding entities accountable for complying with federal requirements would require adequate investment. The contracting entities could need protections as well. Some mechanism would be needed to risk adjust payments for differences in beneficiaries’ health status so that those entities enrolling a disproportionate share of high-use beneficiaries would not be disadvantaged. Medicare’s financial challenge is very real and growing. The 21st century has arrived and our demographic tidal wave is on the horizon. Within 5 years, individuals in the vanguard of the baby boom generation will be eligible for Social Security and 3 years after that they will be eligible for Medicare. 
The future costs of serving the baby boomers are already becoming a factor in CBO’s short-term cost projections. Frankly, we know that incorporating a prescription drug benefit into the existing Medicare program will add hundreds of billions of dollars to program spending over just the next 10 years. For this reason, I cannot overstate the importance of adopting meaningful reforms to ensure that Medicare remains viable for future generations. Adding a drug benefit to Medicare requires serious consideration of how that benefit will affect overall program spending. If competing private entities are to be used to administer a drug benefit, it is important to understand how these entities can be used in the Medicare context to provide a benefit that balances beneficiary needs and cost containment. Medicare reform would best be undertaken with considerable lead time to phase in changes, before the changes that are needed become dramatic and disruptive. Given the size of Medicare’s financial challenge, it is only realistic to expect that reforms intended to bring down future costs will have to proceed incrementally. We should begin this now, when retirees are still a far smaller proportion of the population than they will be in the future. The sooner we get started, the less difficult the task will be.
The House Committee on Ways and Means is holding a hearing on modernizing Medicare and integrating prescription drugs into the program. There are growing concerns about gaps in the Medicare program, most notably the lack of outpatient prescription drug coverage, which may leave Medicare's most vulnerable beneficiaries with high out-of-pocket costs. At the same time, Medicare already faces a huge projected financial imbalance that has worsened significantly in the past year. This statement discusses the challenges of adding a drug benefit to Medicare in the context of the program's current and projected financial condition. It also examines program design issues to be considered with respect to administering any proposed drug benefit. Specifically, it discusses how private sector health plans have used entities called pharmacy benefit managers (PBM) to control drug benefit expenditures. The recent publication of the 2003 Medicare Trustees' annual report reminds us that Medicare in its current condition--without a prescription drug benefit--is not sustainable. The Hospital Insurance (HI) portion of Medicare faces a huge projected financial imbalance that has worsened significantly in the past year. Under the Trustees' 2003 intermediate estimates, the present value of HI's actuarial deficit is $6.2 trillion--a 20 percent increase over the prior year. Beginning in 2013, HI's program outlays are expected to begin to exceed program tax revenues, putting increased pressure on the federal budget to raise the resources necessary to meet program costs. In addition, Supplementary Medical Insurance is projected to place an increasing burden on taxpayers and beneficiaries. 
GAO's long-term budget simulations show that, absent meaningful entitlement reforms, demographic trends and rising health care spending will drive escalating federal deficits and debt. Neither slowing the growth of discretionary spending nor allowing the 2001 tax reductions to sunset will eliminate the imbalance. While additional economic growth will help ease our burden, the potential fiscal gap is too great to grow our way out of the problem. The application of basic health insurance principles to any proposed benefit could help moderate the cost for both beneficiaries and taxpayers. These include beneficiary protections against the risk of catastrophic medical expenses and premium contributions and cost-sharing arrangements that encourage beneficiaries to be cost conscious. The private sector's use of PBMs to control drug expenditures may be instructive for Medicare, but the program's unique role and nature may moderate how such entities would be used and the potential efficiency gains afforded in attempting to transfer PBM-like strategies to Medicare.
The H-1 nonimmigrant category was created under the Immigration and Nationality Act of 1952 to assist U.S. employers needing workers temporarily. The Immigration Act of 1990 amended the law by, among other things, creating the H-1B category for nonimmigrants whom employers sought to work in specialty occupations and fashion modeling. Unlike most temporary worker visa categories, H-1B workers may intend both to work temporarily and to immigrate permanently at some future time. Employed H-1B workers may stay in the United States on an H-1B visa for up to 6 years. Until 1990, there was no limit on the number of specialty occupation visas that could be granted to foreign nationals. Through the Immigration Act of 1990, Congress set a yearly cap of 65,000 on H-1B visas. In an effort to help employers access skilled foreign workers and compete internationally, the Congress passed the American Competitiveness and Workforce Improvement Act of 1998, which increased the limit to 115,000 for fiscal years 1999 and 2000. In 2000, Congress passed the American Competitiveness in the Twenty-First Century Act, which raised the limit to 195,000 for fiscal year 2001 and maintained that level through fiscal years 2002 and 2003. The limit is scheduled to revert back to 65,000 in fiscal year 2004. In order to hire H-1B employees, employers must first file a Labor Condition Application (LCA) with Labor, attesting to the fact that the employer intends to comply with a number of required labor conditions designed to protect workers. On this application, an employer must state the number of workers requested, the occupation and location(s) in which they will work, and the wages they will receive. 
The employers must attest, among other things, that: the employment of H-1B workers will not adversely affect the working conditions of other workers similarly employed in the area; the H-1B workers will be paid wages that are no less than the higher of the actual wage level paid by the employer to all others with similar experience and qualifications for the specific employment or the prevailing wage level for the occupational classification in the area of intended employment; and no strike, lockout, or work stoppage in the applicable occupational classification was underway at the time the application was prepared. H-1B dependent employers (generally those with a workforce consisting of at least 15 percent H-1B workers) and willful violators (employers who have been found in violation of the conditions of an earlier LCA) are subject to additional requirements. These employers must also attest that: before filing an LCA, the employer will make a good faith effort to recruit U.S. workers for the position, offering wages at least as great as that required to be offered to the foreign national; the employer will not displace and did not displace any similarly employed U.S. workers within 90 days prior to or after the date of filing any H-1B visa petition; and before placing the H-1B employee with another employer, the current employer will inquire whether or not the other employer has displaced or intends to displace a similarly employed U.S. worker within 90 days before or after the new placement of the H-1B worker. After Labor approves the LCA, an employer who wishes to hire an H-1B worker can file two types of petitions with BCIS to obtain approval. “Initial” petitions are those that are filed for a foreign national’s first-time employment in the United States and allow for the H-1B worker to stay in the United States for 3 years. With some exceptions, these petitions are counted against the annual cap on the number of H-1B petitions that may be approved. 
“Continuing” employment petitions are filed for: extensions of the initial petitions for another 3 years, the maximum period permissible under the law; sequential employment, which occurs, for example, when an H-1B worker changes employers within their 6-year time period; and concurrent employment, in which the H-1B worker intends to work simultaneously for a second or subsequent employer. Continuing petitions do not count against the cap. In both fiscal years 2001 and 2002, the number of initial H-1B petitions approved that applied to the cap did not reach the annual limit of 195,000 (see fig. 1). In fiscal year 2001, 163,600 petitions were approved against the cap. The number of approved petitions decreased by more than 50 percent in one year, with 79,100 petitions approved against the cap in fiscal year 2002. This recent change contrasts with the trends from fiscal years 1997 through 2000, during which time the cap was lower and the number of petitions reached or exceeded the annual limit. DHS is responsible for managing the entry and departure of nonimmigrants, including H-1B workers. To enhance DHS’s ability in this regard, legislation was enacted that required the agency to develop an automated entry/exit control system. Section 110 of the Illegal Immigration Reform and Immigrant Responsibility Act (IIRIRA) of 1996 required that this system collect departure records from every foreign national leaving the United States and match it with arrival records. The act also required that the system have the capability to assist DHS officials in identifying nonimmigrants who have been in the United States beyond their authorized period of stay. The Immigration and Naturalization Service Data Management Improvement Act of 2000 (DMIA) replaced section 110 of IIRIRA in its entirety. 
The DMIA, among other things, required that the entry/exit system integrate arrival and departure information on foreign nationals required under IIRIRA and contained in the Department of Justice (now DHS) and Department of State databases. DMIA also required that this system be fully implemented by December 31, 2005. Subsequent legislation required that the entry/exit control system must be capable of interfacing with other law enforcement agencies’ systems. In 2001, Congress passed legislation that allowed H-1B workers “visa portability” – the ability to change employers during their stay once the new employer files an H-1B petition on their behalf. According to the law, the petition for new employment must have been filed before the end of the worker’s period of authorized stay. DHS has the authority to issue regulations that further specify how visa portability will be administered. In March 2001, when the economy began to decline, U.S. employment declined as well, with 1.4 million jobs lost during the year. The unemployment rate rose to 5.8 percent at the end of 2001 and hovered between 5.5 and 6 percent throughout 2002. Although downturns tend to affect sectors throughout the economy, existing research indicates that job loss from 2001-2002 was particularly severe in IT manufacturing, a sub-sector in which many H-1B workers were employed. Concerns that the H-1B program might have unfairly impacted U.S. workers during the recent economic downturn have prompted labor groups to raise questions about the use of the H-1B program. Associations representing U.S. workers that we spoke with believe that employers abuse the program by laying off U.S. workers while retaining and hiring H-1B workers at lower wages. Such practices, according to employee associations, had the effect of displacing U.S. workers during the economic downturn. Labor representatives argue that some employers force H-1B workers to work for lower wages than U.S. 
citizen workers, knowing that continued employment is the only legal way for H-1B workers to remain in the United States. One advocate for H-1B workers said that some employers dangle the possibility of sponsorship for permanent residency in front of H-1B workers as a reward for extra work. These representatives believe that visa portability options do not actually give H-1B workers more freedom to move around in the labor market, arguing that H-1B workers are still dependent on their employers to legally remain in the United States. On the other hand, associations representing employers argue that H-1B workers were not treated differently than U.S. workers during the economic downturn, and that use of the H-1B program by employers has decreased substantially. They also argue that the real challenge to U.S. workers occurs when companies rely on workers overseas where the work can be done at a lower cost. H-1B beneficiaries were approved to fill a wide variety of occupations, and the number of H-1B petition approvals in certain occupations has generally declined with the economic downturn, along with the employment levels of U.S. citizen workers in these occupations. In contrast with patterns in 2000, most H-1B beneficiaries in 2002 were approved for positions that were not related to IT. Moreover, a comparison of H-1B beneficiaries and U.S. citizen workers in five occupations (electrical/electronic engineers, systems analysts/programmers, biological/life scientists, economists, and accountants/auditors) revealed that, in most of these occupations, H-1B beneficiaries in 2002 were younger and a higher percentage had a graduate or professional degree. 
In the three occupational groups for which there were sufficient data to compare salaries (electrical/electronic engineers, systems analysts/programmers, and accountants/auditors), salaries listed on petitions for younger H-1B beneficiaries (18-30 years old) approved in 2001 who did not have advanced degrees were higher than salaries reported by U.S. citizen workers of the same age group and education level; however, salaries listed on petitions for older H-1B beneficiaries (31-50 years old) were either similar to or lower than the salaries reported by their U.S. counterparts. Both the number of H-1B petition approvals and U.S. citizens employed in certain occupations decreased from 2001 to 2002. In 2002, H-1B beneficiaries were approved to fill over 100 occupations, but IT occupations were no longer the majority of approved occupations, as they were in 2000 (see table 1). A large proportion of approved petitions were for fields unrelated to IT, such as university education, economics, and medicine. However, IT-related occupations still constituted 40 percent of all petitions approved in 2002 for H-1B beneficiaries, most prominently, in systems analysis and programming (31 percent). Nine percent were in electrical/electronic engineering and other IT-related fields. In 2000, the pattern was different: 65 percent of all approved petitions were for IT-related positions. In most of the five occupations we examined (electrical/electronic engineers, systems analysts/programmers, biological/life scientists, economists, and accountants/auditors), H-1B beneficiaries with petitions approved in 2002 were younger and a higher percentage had an advanced degree than the population of U.S. citizen workers in 2002. H-1B beneficiaries with petitions approved in 2002 were younger than U.S. citizen workers in four of the five occupations: electrical/electronic engineers, systems analysts/programmers, economists, and accountants/auditors (see fig. 2). 
For example, the median age of H-1B beneficiaries approved for accountant/auditor positions was 32, which was substantially younger than the median age of 38 for U.S. citizen accountants/auditors. The largest difference between the median ages, about 9 years, was for U.S. citizens and H-1B beneficiaries approved for electrical/electronic engineer positions. We found no significant difference in the median ages of H-1B beneficiaries and U.S. citizens in biological/life scientist positions. In the three occupational groups (electrical/electronic engineers, systems analysts/programmers, and accountants/auditors) for which there were sufficient data to compare education levels, a higher percentage of H-1B beneficiaries with petitions approved in 2002 had earned a graduate or professional degree than U.S. citizen workers (see fig. 3). For example, 50 percent of H-1B beneficiaries approved to fill electrical/electronic engineer positions had graduate degrees, compared with 20 percent of U.S. citizen electrical/electronic engineers. Insufficient data precluded us from analyzing the education levels of U.S. citizen biological/life scientists and economists. The salaries of H-1B beneficiaries and U.S. citizen workers differed from each other when examined in relation to their education levels and age. In the three occupational groups (electrical/electronic engineers, systems analysts/programmers, and accountants/auditors) where there were sufficient data to compare salaries by age and education level, in 2001, salaries listed on petitions for H-1B beneficiaries were higher (by about $7,000 - $10,000) than salaries reported by U.S. citizen workers, for those who were 18-30 years of age and did not have graduate degrees (see fig. 4). 
In contrast, salaries listed on petitions for H-1B beneficiaries approved for either electrical/electronic engineer or systems analyst/programmer positions who were 31-50 years of age and had graduate degrees were lower (by about $11,000 - $22,000) than salaries reported by U.S. citizens with the same characteristics. In addition, salaries listed on petitions for H-1B beneficiaries approved for electrical/electronic engineer positions who were 31-50 years old and did not have graduate degrees were lower (by about $5,000) than salaries reported by their U.S. counterparts. There were no significant differences between the annual salaries of 31-50 year-olds in all other cases shown in figure 4. Insufficient data precluded us from making determinations about the relationship of age and education to the salaries of H-1B beneficiaries and U.S. citizens who were 18-30 year-olds with graduate degrees, or those who were in economist or biological/life scientist positions. (See table 7 in app. II for more details.) In addition to the factors we examined, a number of other factors can affect earnings, such as years of experience and geographic location. However, BCIS does not collect data on years of experience or geographic location for H-1B beneficiaries. Almost one-third of H-1B beneficiaries with petitions approved in 2002 were born in India, with the second highest percentage of H-1B beneficiaries born in China, followed by Canada, the Philippines, and the United Kingdom (see fig. 5). The remaining 45 percent of H-1B beneficiaries represented an array of roughly 200 other countries. After reaching a high level in 2001, the number of H-1B petition approvals has recently declined substantially. The numbers of both initial and continuing petitions approved increased from 2000 to 2001 and declined well below 2000 levels in 2002, as shown in figure 6. 
The decline in petition approvals for systems analysis/programming positions constituted 70 percent of the decline in the total number of petition approvals from 2001 to 2002. For each of the 3 years, a larger number of initial petitions were approved than continuing petitions. From 2000 to 2001, the estimated numbers of H-1B petition approvals and U.S. citizens employed in most of the five occupations we examined increased significantly (see table 2). For example, the number of petitions approved in biological sciences positions increased by 1,685 to 5,454, and employment for U.S. citizen biological/life scientists increased by 14,448 to 59,511. However, as U.S. citizen employment declined from 2001 to 2002, so did the number of H-1B petition approvals (see table 2). In particular, H-1B petition approvals and U.S. citizen employment decreased in IT occupations. For example, the number of H-1B petition approvals for systems analysis/programming positions dropped by 106,671 to 56,184, and the estimated number of U.S. citizen systems analysts/programmers employed decreased by 147,005 to 1,577,427. All 36 employers that we interviewed said they made hiring and layoff decisions about workers by selecting and retaining candidates with the skill sets needed for the job, and the majority (19) of employers said that they did not treat H-1B workers differently when making these decisions. Most of the employers who said immigration status was a factor in their decisions noted that they hired H-1B workers only when qualified U.S. workers were not available. Despite increases in unemployment among highly skilled U.S. workers, about two-thirds of employers said that finding workers with the skills needed in certain engineering and other science-related occupations remains difficult. Employers who laid off workers said that these decisions were based on whether the employee had the skills that the business needed for the future. 
While employers cited disadvantages to the H-1B program, such as cost and lengthy petition processing times, they said they would continue to use the program to meet skill needs. Some employers said that they hired H-1B workers in part because these workers would often accept lower salaries than similarly qualified U.S. workers; however, these employers said they never paid H-1B workers less than the required wage. Labor is responsible for enforcing H-1B wage agreements and has continued to find instances of employers paying H-1B workers less than the wages required by law, but the full extent to which such violations occur is unknown. Most of the information in this section is based on our interviews with employers of H-1B workers. We contacted 145 employers to discuss issues related to the H-1B program, and 36, or 25 percent, of the employers agreed to speak with us. Therefore, our results may be affected by this self-selection and cannot be viewed as representative of all H-1B employers. All employers interviewed said that finding qualified workers with the needed skill sets was the main factor in recruiting and hiring candidates, and the majority (19) of the 36 employers said that H-1B candidates were not treated differently in the recruiting and hiring process. Several employers mentioned that they were looking for experienced workers and that qualified candidates often had a minimum of 2 to 3 years of relevant work experience. These employers said their need to remain competitive prevented them from spending time to train workers who did not have the necessary skills. In addition to the need for technical skills and experience, employers that hired for consulting positions—in which workers are sent to different job locations or relocated frequently—said that flexibility was an important consideration in hiring decisions. 
These employers said that H-1B workers, having moved to the United States from another country, were very flexible in moving within the United States. Many employers told us that immigration status was a factor in their decision-making when they looked for candidates with experience in particular skill sets. Most of these employers said that they looked at available U.S. workers before considering applicants that required H-1B visa sponsorship and that they hired H-1B workers only when there were no qualified U.S. workers available. One company that hired H-1B workers primarily for product development engineering said that company policy states that H-1B workers can only be hired after managers conduct rigorous and unsuccessful searches for qualified U.S. candidates. Other companies told us that because of the costs of processing and legal fees, they hired candidates requiring H-1B sponsorship as a last resort. Six employers cited the cost of U.S. labor as another factor in employment decisions. While these employers said that they never paid H-1B workers salaries below the prevailing wage, they did acknowledge that H-1B workers were often prepared to work for less money than U.S. workers. These employers said that they could not compete with the large salaries offered to U.S. workers by the major IT and pharmaceutical companies. These employers also told us that they had to recruit overseas because U.S. workers either demanded salaries that were too high or were already employed with other companies. A number of employers interviewed acknowledged that some H-1B workers coming directly from other countries might initially have accepted an offer with lower pay, but that it would have been unwise for employers to pay these workers less than their U.S. counterparts because they would soon leave for a higher wage offered by a different employer. Half of the employers we interviewed said they did not recruit overseas for U.S. 
positions, but instead recruited workers through a variety of methods, including employee referrals, the Internet, and outreach at U.S. graduate schools. These employers said that they used the same methods to recruit H-1B candidates and U.S. workers. Employee referrals and job boards on the Internet were the most commonly cited recruiting methods. Several employers noted that many H-1B workers were hired through referrals by other workers already employed by their companies. In addition, about two-thirds of employers said that most H-1B workers hired were already in the United States attending graduate schools on student visas or working for another employer on an H-1B visa. Many of the employers interviewed said that they recruited overseas for U.S. positions before the recent economic downturn because they could not find enough qualified U.S. workers. However, most of these employers said they have not recruited overseas for these positions since the downturn. One employer cited the anticipation of Year 2000 computer problems as a major factor in recruiting overseas, claiming the company needed workers who were skilled in programming older mainframe systems, whereas available U.S. workers were experienced in more advanced technologies. Many of the employers interviewed reported that there is a greater supply of workers for certain IT positions (e.g., systems analysts and programmers) since the economic downturn, but also said they have substantially reduced their hiring and cut back on their use of the H-1B program. Of the 36 employers we interviewed, about two-thirds said that despite the increase in the number of unemployed workers since the economic downturn, finding qualified workers in some engineering and other science-related occupations remains difficult. These employers told us that they look for superior candidates or those who are in fields with a smaller pool of qualified candidates, such as chemists. 
One Internet company said that it is difficult to hire the most productive workers because such top performers are unlikely to be looking for work. Four employers said they were looking for candidates with unique skills. For example, one employer told us that foreign workers who helped develop products overseas were the most qualified to help introduce those products to the U.S. market. Thirty of the 36 employers interviewed experienced layoffs, and all 30 said that the layoffs were based on whether the employees had the skill sets that the business would need in the future, regardless of their immigration status. Seven of these 30 employers also added that employee performance was a major consideration in layoff decisions. Several companies said that layoffs were due to positions being eliminated or decisions to close offices in certain locations. However, some companies said that if they were eliminating a product line or regional office, employees—whether H-1B workers or U.S. citizens—would be transferred to another division or product line if their skills were needed. All 30 employers said that H-1B status was not a factor in these decisions, and 19 of these employers reported that they had laid off H-1B workers. According to a few employers, H-1B workers were often the last to be released because they frequently work in research and development positions that create new products or other areas of the business that generate revenue. Details about the number of workers laid off by employers were not publicly available, and most employers declined to share this information with us. Labor associations argue that U.S. workers are being displaced by H-1B workers whom employers view as a more affordable source of labor. These groups cited anecdotal accounts of employers laying off U.S. workers and then retaining or hiring H-1B workers for the same positions or outsourcing the work to companies using foreign labor. 
In the case of H-1B dependent employers, the law prohibits companies from hiring H-1B workers when it has the effect of displacing similarly employed U.S. workers in the workforce. Although Labor has found no instances of such illegal displacement by H-1B dependent employers, a few cases are currently under investigation. Nearly all employers interviewed said that the length of time required to process petitions is a major disadvantage of the H-1B program. About half of these employers said that hiring an H-1B worker could take from 2 to 6 months, but that they often pay an additional $1,000 fee for premium processing, which substantially reduces processing time. In addition, most employers interviewed said that the combination of processing fees and legal fees made the program very costly, with costs cited ranging from $2,500 to $8,000 to hire an H-1B worker. Citing their need to fill permanent positions, some employers noted that the main disadvantage of the H-1B program is its temporary provision of labor. These employers said they experience a substantial loss of intellectual capital when an H-1B visa has expired and a foreign national is forced to leave the United States. Nearly all employers interviewed said that in order to retain these foreign workers, they often sponsored H-1B workers for permanent residency either as part of their initial employment offer or after a certain period of employment. Some of these employers said that the fees associated with applications for permanent residency can raise the cost of hiring an H-1B worker substantially, with a few citing costs as high as $10,000 to $15,000. A few companies said that if their H-1B workers were unable to obtain permanent residency, they would send them to one of their foreign offices for a year and then bring them back to the United States on new H-1B visas. 
Despite the disadvantages of the H-1B program cited, 31 of the 36 employers interviewed said they would continue to use the program in the future to meet skill needs. These employers believe that once the economy recovers it will be difficult to find enough qualified U.S. workers, and that the H-1B program gives them the opportunity to access a larger pool of workers. Of the 24 employers that commented on the H-1B cap, 16 said they were concerned that a limit of 65,000 would create processing backlogs at BCIS when the economy improves, and feared that they would have to wait several months longer to hire H-1B workers, as was the case when the cap was reached in 2000. While employers said that they would continue to use the H-1B program, a few employers mentioned that they are seeking additional visa options for bringing highly skilled workers to the United States. For example, in recent years, employers have increasingly turned to the L-1 visa, an intracompany transfer visa that can be used by companies to bring their foreign professional workers to the United States on a temporary basis (see fig. 7). L-1 visas do not have an annual cap and are not subject to prevailing wage laws. Department of State statistics show that the use of L-1 visas has increased substantially since fiscal year 1998. The number of L-1 visas issued in fiscal year 1998 was 38,307 and rose to 41,739 in fiscal year 1999, peaked in fiscal year 2001 at 59,384, and decreased slightly in fiscal year 2002 to 57,721. Eight companies noted that the process to obtain an L-1 visa was less cumbersome than the H-1B visa process, and a few said that they planned to increase use of the L-1 visa in the future. In addition to using other visas, some employers said that they are now considering outsourcing work or moving their own operations offshore to remain competitive. A few employers said that if they cannot find enough highly skilled workers within the United States, they would start operating overseas. 
One offshore IT services company said its competitive advantage comes from offering U.S. clients IT services in India, which can significantly reduce costs. According to a temporary staffing agency, some companies are increasingly using contract or temporary staff as a way of cutting labor costs and avoiding the bad publicity associated with layoffs. While a number of employers acknowledged that some H-1B workers might accept lower salaries than U.S. workers, the extent to which wages are a factor in employment decisions is unknown. Labor’s Wage and Hour Division (WHD), which is responsible for ensuring that H-1B workers receive legally required wages, has continued to find instances of program abuse. As shown in table 3, the number of investigations in which violations were found doubled from fiscal year 2000 to 2002, and the amount of back wages owed to H-1B workers by employers increased from $1.6 million in fiscal year 2000 to $4.2 million in fiscal year 2002. These violations were largely due to employers bringing H-1B workers into the United States to work but not paying them any wages until jobs became available, according to WHD officials. This dramatic increase in violations and back wages owed to H-1B workers may be due to the increase in the number of H-1B workers who have entered the country over the years and does not necessarily indicate an increase in the percentage of H-1B workers affected by wage violations. The extent to which violations of the H-1B program take place is unknown and may be due in part to WHD’s limited investigative authority. WHD can initiate H-1B-related investigations only under limited circumstances. WHD may investigate (1) when a complaint is filed by an aggrieved person or organization, such as an H-1B worker, a U.S. 
worker, or the employee bargaining representative; (2) on a random basis, employers who, within the previous 5 years, have been found to have committed a willful failure to meet LCA work conditions; and (3) if it receives specific credible information from a reliable source (other than the complainant) that the employer has failed to meet certain specified work conditions. According to WHD officials, H-1B workers may be reluctant to complain, given their dependency upon their employers for continued residency in the United States. In 2000, we suggested that the Congress consider broadening Labor’s enforcement authority to improve its ability to conduct investigations under the H-1B program. In response, Labor concurred with our suggestion, indicating that it has long urged that the Congress reconsider and expand the narrow limits on its enforcement authority. Little is known about the status of H-1B workers due to the limitations of current DHS tracking systems, but new systems to provide more comprehensive information are being developed. One reason DHS is unable to determine the number of H-1B workers who are in the United States at a given time is that it has two separate tracking systems that do not share data. The Non-Immigrant Information System (NIIS) has data on entries and departures of H-1B workers and the Computer Linked Application Information Management System 3 (CLAIMS 3) has data on changes in visa status, but data from both of these systems are needed to calculate the number of H-1B workers in the United States. In addition, while DHS collects information on departures, change of visa status, and occupations performed under a new status, this information is not consistently collected and entered into current systems. DHS has recognized the need for more comprehensive immigration data and is working to develop improved tracking systems. One system, known as the U.S. 
Visitor and Immigrant Status Indicator Technology System (US-VISIT), is intended to incorporate data managed by DHS as well as other agencies to provide a foreign national’s complete immigration history. System plans also provide for capabilities to generate aggregated reports on foreign nationals. In addition to information systems issues, we also determined that DHS’s ability to provide information on H-1B workers is limited because it has not issued consistent guidance or any regulations on the legal status of unemployed H-1B workers who remain in the United States while seeking new jobs. The lack of clear guidance or any regulations on this issue has resulted in uncertainty among H-1B workers and employers about the appropriate actions needed for being in compliance with the law. DHS cannot account for all the H-1B worker entries, departures, and changes of visa status using its current tracking systems, because NIIS and CLAIMS 3 data are not integrated, and data for certain fields in each of these systems are not consistently collected and entered. As a result, DHS is not able to provide some key information needed to oversee the H-1B program and assess its effects on the U.S. workforce. This includes information on the number of H-1B workers in the United States at any time, the extent to which these workers become unemployed, the extent to which H-1B workers become long-term members of the labor force through other immigration statuses, and the occupations they fill as permanent members of the labor force. We found that obtaining better arrival and departure information on H-1B workers requires integration of change of status data from CLAIMS 3 with data from NIIS, and that such integration has proven to be challenging. 
Currently, if a foreign national enters the United States under a student visa and later becomes an H-1B worker, NIIS will not have a record that indicates this person is an H-1B worker, unless the person exits and re-enters the United States under the H-1B visa. In 2001, DHS officials attempted to obtain better information on the number of nonimmigrants in the United States and their current statuses by matching CLAIMS 3 and NIIS data using automated formulas, but found that about 60 percent of the records between these two systems still needed to be matched manually. This was mainly because the two systems do not have unique identifiers for matching records. While DHS is examining ways to improve its ability to match these records through formulas or by creating unique identifiers, arrival and departure data continue to be separated from change of status data. Although data integration could improve information on H-1B workers, DHS may continue to face challenges accounting for all departures because these data are not consistently collected. While NIIS is supposed to maintain departure records for H-1B workers, along with other nonimmigrants, data from fiscal years 1998 through 2000 indicate that departure information for foreign nationals is missing in about 20 percent of the cases. DHS cannot account for all H-1B worker departures because some nonimmigrants, especially those departing through land borders, do not submit departure forms when leaving the United States. The United States has an agreement with Canada that allows Canadian immigration officials to collect departure forms and submit them to DHS. However, Canadian officials are not required to collect these forms and, therefore, some nonimmigrant departures from the United States through Canada are not recorded. DHS also does not have immigration officials at some departure areas along the Mexican border, thereby relying on nonimmigrants to voluntarily deposit departure forms in collection boxes. 
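The matching difficulty DHS described, linking files from two systems that share no unique identifier, is a classic record-linkage problem. The sketch below is a minimal illustration only; the field names, normalization rules, and toy records are hypothetical and do not reflect the actual CLAIMS 3 or NIIS schemas. Records are matched deterministically on a composite key of quasi-identifiers, and anything without a single unambiguous exact match is set aside, roughly the situation in which records would fall to manual review:

```python
# Toy illustration of deterministic record linkage between two systems
# that lack a shared unique identifier. Field names and records are
# hypothetical, not the actual CLAIMS 3 or NIIS schemas.

def match_key(record):
    """Build a composite key from quasi-identifiers, normalized for case."""
    return (record["name"].strip().lower(),
            record["birth_date"],
            record["birth_country"].strip().lower())

def link_records(system_a, system_b):
    """Return (matched pairs, unmatched A-records needing manual review)."""
    index_b = {}
    for rec in system_b:
        index_b.setdefault(match_key(rec), []).append(rec)

    matched, manual_review = [], []
    for rec in system_a:
        candidates = index_b.get(match_key(rec), [])
        if len(candidates) == 1:   # unambiguous exact match
            matched.append((rec, candidates[0]))
        else:                      # no match, or ambiguous duplicates
            manual_review.append(rec)
    return matched, manual_review

claims = [
    {"name": "Asha Rao", "birth_date": "1975-03-02", "birth_country": "India"},
    {"name": "Li Wei",   "birth_date": "1980-07-14", "birth_country": "China"},
    {"name": "J. Smith", "birth_date": "1978-01-09", "birth_country": "Canada"},
]
niis = [
    {"name": "asha rao", "birth_date": "1975-03-02", "birth_country": "INDIA"},
    {"name": "Li Wei",   "birth_date": "1980-07-14", "birth_country": "China"},
]

matched, manual = link_records(claims, niis)
print(len(matched), len(manual))  # 2 matched, 1 left for manual review
```

In practice, agencies often layer probabilistic (fuzzy) matching on top of exact composite keys to shrink the share of records left for manual review, which in DHS's 2001 attempt was about 60 percent.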
DHS officials also told us that airlines do not consistently collect and/or return departure forms to DHS. In addition, some H-1B workers become permanent residents and, therefore, are no longer required to submit departure forms when exiting the country, leaving NIIS with no record of their departures from the United States. Moreover, DHS does not consistently enter change of status and occupation data into CLAIMS 3. As a result, it is not possible to determine either the number of H-1B workers who remained a part of the U.S. workforce by becoming permanent residents or other employment-related visa holders or the types of jobs they performed. About 50 percent of electronic records on permanent residents do not include data on residents’ prior visa status, according to a DHS official. Also, in fiscal years 2000 and 2001, about 20 to 25 percent of electronic records on permanent residents who were known to have been H-1B workers did not contain information on their occupations. In the data sets used to determine the number of nonimmigrants, such as H-1B workers, who changed to other employment-related visa statuses, the prior status data were missing in 30 percent of the cases. In addition, BCIS officials told us that occupation data for H-1B workers who changed to other employment-related visa statuses were often missing, but they were unable to tell us the extent to which this occurred. Although no formal studies have been conducted to determine why these data are missing, DHS officials believe that this is primarily due to contractors not entering prior visa status and occupation information into CLAIMS 3. One official said that some data contractors may not enter this information because CLAIMS 3 will accept records if the prior visa status and occupation fields are left blank. These data could also be missing because individuals without a prior status or occupation may leave these fields blank on their applications. 
These individuals, such as spouses of permanent residents, may be coming directly from a foreign country without having previously entered the United States under a nonimmigrant visa. DHS also maintains information in CLAIMS 3 that could indicate whether an H-1B worker is no longer employed and possibly no longer in H-1B status, but the agency has faced challenges with collecting this information. When H-1B workers become unemployed before their visas expire, employers are required to submit a letter to DHS stating that these workers are no longer employed with them. DHS uses this information to revoke the H-1B petitions, and this is indicated in CLAIMS 3. However, agency officials do not believe that all employers are submitting these letters, because the number of such notices received has not kept pace with the number of subsequent employment petitions filed for H-1B workers changing employers. Agency officials said that they are not able to better ensure the collection of these letters because they do not have the resources to proactively monitor employers. In addition, since the agency is not currently concerned about reaching the cap on H-1B petitions, a 6-month to 1-year lag exists for entering data about revoked petitions. DHS recognizes the need for a more integrated system to track information on foreign nationals and is currently developing systems to meet this need. DHS is mandated to develop an information system that will integrate arrival and departure information on foreign nationals from databases within DHS and across other government agencies, such as the Department of State and law enforcement agencies. DHS is currently working with State to develop this system, known as US-VISIT, which is mandated by Congress to be fully implemented by December 2005. 
DHS plans call for US-VISIT to have the capability to generate a single comprehensive record of an individual’s entire immigration history, from the initial request to enter the United States (e.g., H-1B worker petitions) through departure and any re-entry. DHS’s plans also call for individual records in US-VISIT to be updated almost immediately as users of the different component databases update their records. For example, if a DHS official updates a nonimmigrant’s record to reflect that a person has changed visa status, that person’s US-VISIT record should reflect this change almost immediately. Moreover, DHS plans for US-VISIT to be able to generate statistical reports on nonimmigrants. As required by law, these reports will include the number of nonimmigrants, including H-1B workers, who have entered, exited, and remained in the United States. In addition to information systems issues, DHS’s ability to provide information on the status of the H-1B population is constrained because it has not issued consistent guidance or any regulations for implementing the visa portability provision of the American Competitiveness in the Twenty-First Century Act of 2000 (AC21). This has resulted in uncertainty about the extent to which unemployed H-1B workers can legally remain in the United States while seeking new jobs. Regulations have been in development for over 2 years, and interim guidance has not clarified this issue. For example, 1999 guidance stated that unemployed H-1B workers are out of status and should leave the United States or seek a change in status. However, in 2001, DHS issued guidance stating that AC21’s visa portability provisions appear to include unemployed individuals and that it expected to issue regulations addressing their status. Currently, BCIS officials are addressing this issue on a case-by-case basis, and decisions have been inconsistent, according to a few employers. 
These employers told us that in some cases, H-1B workers who were unemployed for more than 3 months were required to exit and re-enter the United States before beginning work with a new employer because they were considered out of legal status. Yet, overall, BCIS officials have not offered these employers clear directions about allowable timeframes for H-1B workers to be unemployed and remain in the country. This lack of clear guidance or any regulations can contribute to uncertainties in the circumstances facing these workers. Moreover, employers told us that this situation makes planning a worker’s starting date for a new job difficult. In addition, if employers pay for the cost of re-entry, this process can impose an unexpected cost of hiring an unemployed H-1B worker. The agency has been working to develop regulations related to visa portability since October 2000, but internal debates have prevented regulations from being issued sooner, according to a BCIS official. For example, the agency official told us that BCIS is concerned about immigration enforcement issues that may arise by allowing unemployed H-1B workers to remain in the United States. Labor officials said that they were concerned about how unemployed H-1B workers in the United States might impact government programs for the unemployed if, for example, unemployed H-1B workers attempted to collect Unemployment Insurance. In addition, a U.S. labor representative said that another implication of allowing unemployed H-1B workers to remain in the United States is that they will be competing with unemployed U.S. workers for highly skilled positions. Much of the information policymakers need to effectively oversee the H-1B program is not available because of limitations of DHS’s current tracking systems. 
Without this information, policymakers cannot determine whether this program is meeting the need for highly skilled temporary workers in the current economic climate and how to adjust policies that may affect workforce conditions over time, such as the H-1B visa cap, accordingly. Examples of needed information include the total number of H-1B workers in the United States at a given time and the numbers of H-1B workers employed in various occupations, the extent to which H-1B workers become long-term members of the labor force through permanent residency or other immigration statuses, and the occupations they fill as long-term members of the labor force. Such information could also assist policymakers in better determining program effects on workforce conditions such as wages and the proportion of jobs filled by H-1B workers. While DHS has long-term plans for providing better information on H-1B workers, policymakers in the interim need data to inform discussions of program changes. Employers also have expressed concern about how BCIS determines the legal status of unemployed H-1B workers. BCIS determines on a case-by-case basis whether an unemployed H-1B worker is allowed to stay in the United States while looking for another job. However, H-1B workers and employers are unsure about whether these workers can be hired for new positions without first having to exit and re-enter the country, which would be required if the workers’ legal immigration status was determined to have expired. While this issue is no doubt a concern for H-1B workers who have become unemployed, it is also a growing concern to employers who may wish to hire these workers. 
To provide better information on H-1B workers and their status changes, we recommend that the Secretary of DHS take actions to ensure that information on prior visa status and occupations for permanent residents and other employment-related visa holders is consistently entered into their current tracking systems, and that such information becomes integrated with entry and departure information when planned tracking systems are complete. In order to improve program management, we also recommend that the Secretary of DHS issue regulations that address the extent to which unemployed H-1B workers are allowed to remain in the United States while seeking other employment. We provided a draft of this report to DHS and Labor for their review. DHS concurred with our recommendations and acknowledged the need for an improved tracking system to link information related to H-1B nonimmigrants among the State Department, Labor, and DHS. DHS also said that it is in the planning stages to make changes to CLAIMS 3, which will ensure that information on prior visa status and occupations for permanent residents and other employment-related visa holders is consistently entered. In addition, DHS said that issuing regulations is a priority and that the final rule for implementing the law authorizing visa portability for H-1B workers is undergoing revisions based on intra-agency comments. DHS’s comments are reprinted in appendix III. Labor had no formal comments. DHS and Labor also provided technical comments that we incorporated as appropriate. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to the Secretary of Homeland Security, the Secretary of Labor, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-7215. Other contacts and staff acknowledgments are listed in appendix IV. To obtain information on the occupations H-1B beneficiaries were approved to fill and demographic information and wage characteristics for H-1B beneficiaries and U.S. citizens, we examined the Bureau of Citizenship and Immigration Services’ (BCIS) 2000-2002 H-1B petition approval data for five key occupations: systems analysis and programming; electrical/electronic engineering; economics; accountants, auditors, and related occupations; and biological sciences. In addition, we examined 2000-2002 Current Population Survey (CPS) data on U.S. citizen employment in similar occupations. To obtain information on the occupations H-1B beneficiaries were approved to fill, we examined 2000-2002 H-1B petition approval data from BCIS’s Computer Linked Application Information Management System Local Area Network (CLAIMS 3 LAN). These data provided a variety of information on the H-1B beneficiaries in each year, such as the age, education level, and annual salary expected for each beneficiary at the time the petition was filed. However, neither the CLAIMS 3 LAN data nor BCIS itself can provide information on how many H-1B beneficiaries approved for employment in a year are actually working in the United States in any particular year. The CLAIMS 3 LAN data may be informative about H-1B petitions approved in a given year and about some characteristics of those beneficiaries. However, these characteristics may not be indicative of the characteristics of all H-1B workers in a given year. For example: Of the H-1B beneficiaries approved in 2001, we do not know the proportion that began work in 2001. Some may not have started work until 2002; others may not have started work at all. An individual H-1B worker could be represented in multiple petitions filed by different employers in the same year. 
We do not know the proportion of H-1B workers in 2001 who obtained their H-1B petition approvals in 2001, 2000, 1999, or 1998. Characteristics of H-1B beneficiaries approved in 2001 and working in 2001 may differ from characteristics of the H-1B workforce working in 2001 who received their approval in 1998-2000. For example, H-1B workers approved in 1998-2000 could, on average, be older in 2001 than those workers approved in 2001. Because of these uncertainties, we do not know how well the characteristics of the H-1B beneficiaries in any year would approximate the characteristics of the population of H-1B workers actually employed in that year. To obtain demographic information for U.S. citizens working in the five occupations we examined, we used the monthly CPS from 2002. The CPS is a monthly survey of about 50,000 households that is conducted by the Bureau of the Census for the Bureau of Labor Statistics (BLS). The CPS provides a comprehensive body of information on the employment and unemployment experience of the nation’s population. A more complete description of the survey, including sample design, estimation, and other methodology can be found in the CPS documentation prepared by Census and BLS. We used the 2002 CPS data to produce estimates of longest held job in the previous year, highest degree attained, citizenship, and age. We used the March 2002 Supplement of the Current Population Survey for all estimates of median wages of U.S. citizens working for private employers. This March Supplement (the Annual Demographic Supplement) is specifically designed to estimate family characteristics, including income from all sources and occupation and industry classification of the job held longest during the previous year. 
It is conducted during the month of March each year because it is believed that since March is the month before the deadline for filing federal income tax returns, respondents would be more likely to report income more accurately than at any other point during the year. Because the CPS is a probability sample based on random selections, the sample is only one of a large number of samples that might have been drawn. Since each sample could have provided different estimates, confidence in the precision of the particular sample’s results is expressed as a 95-percent confidence interval (e.g., plus or minus 4 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples that could have been drawn. As a result, we are 95-percent confident that each of the confidence intervals in this report will include the true values in the study population. We use the CPS general variance methodology to estimate this sampling error and report it as confidence intervals. Percentage estimates we produce from the CPS data have 95-percent confidence intervals of +/- 10 percentage points or less. Estimates other than percentages have 95-percent confidence intervals of no more than +/- 10 percent of the estimate itself. Consistent with the CPS documentation guidelines, we do not produce annual estimates from the monthly CPS data files for populations of less than 35,000, or estimates based on the March Supplement data for populations of less than 75,000. The blank cells in table 4 identify the estimates that we do not produce because they are for small populations. We compared CPS estimates of the number of U.S. citizen workers, age distribution, and highest degree attained to comparable categories of H-1B beneficiary approvals for the five occupation categories we examined. While we attempted to produce CPS estimates of U.S. 
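The interval arithmetic described above can be sketched in a few lines. The sample size and proportion below are hypothetical, and the CPS itself uses generalized variance functions to estimate sampling error rather than this simple normal approximation; the sketch only illustrates what a "plus or minus 4 percentage points" interval means.

```python
import math

def proportion_ci_95(p_hat, n):
    """95-percent confidence interval for a survey proportion using
    the normal approximation (illustrative only; the CPS estimates
    sampling error with its own generalized variance methodology)."""
    z = 1.96  # critical value for 95-percent confidence
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - half_width, p_hat + half_width)

# Hypothetical example: 40 percent of a sample of 600 respondents.
low, high = proportion_ci_95(0.40, 600)
print(f"95% CI: {low:.3f} to {high:.3f}")  # roughly 0.361 to 0.439
```

Under this reading, "we are 95-percent confident" means that intervals constructed this way would contain the true population value for 95 percent of the samples that could have been drawn.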
citizens for a population that would be similar to H-1B workers, we could only make comparisons to H-1B beneficiaries with petitions approved in a particular year. In order to compare the H-1B beneficiary occupations to CPS U.S. workforce occupations, we combined some occupational categories in the CPS to better match those of the BCIS data, as shown in table 5. In order to verify our estimates of the numbers of U.S. citizens in the key occupations and their average annual salaries, we compared the March Supplement employment statistics for 2001 to those reported in the Occupational Employment Statistics (OES) 2001 survey. We did not use the OES for our analysis because the survey collects data from employers and does not provide information about individual workers, such as age and education. We compared the CPS median salary estimates for 2001 to median salary figures reported for the 2001 H-1B beneficiaries for several occupations, and for four age by education categories. For two of the occupations (biological/life scientists and economists), we did not produce CPS estimates due to insufficient data (see table 7). Although several of the comparisons we were able to make did show a statistically significant difference between the CLAIMS 3 H-1B beneficiary median salary and the “comparable” CPS estimate, it is difficult to interpret this result in terms of actual H-1B workers in 2001. There are several limitations that lead to uncertainty in the interpretation of these results: Although reporting problems are an issue with any measure of income, we have additional concerns about the validity of the H-1B beneficiary salaries, because the frequency distributions of the salaries of H-1B beneficiaries in the five key occupations showed that employers reported a number of very low and very high salaries for the “annual rate of pay” on the petition application. 
We had no basis for determining whether the high and low salaries were data entry errors, estimated payments for an employment period of more or less than a year, or were very high or low for some other reason. The measures of median annual salaries for U.S. citizens could include bonuses, but the median annual salaries listed on H-1B beneficiary petition approvals most likely do not. Neither median salary includes noncash benefits such as health insurance or pensions. CPS salary reported in March 2002 was for the longest held position actually worked in 2001, and reported by the worker himself (or a knowledgeable member of the household). In contrast, salaries reported in the CLAIMS 3 database for H-1B beneficiaries are provided by the employer requesting the petition approval in possibly 2000 or 2001 for an H-1B beneficiary likely beginning work in 2001 or 2002. The 2001 H-1B workforce includes not only a portion of those H-1B beneficiaries approved in 2001, but also those approved in prior years and beginning to work in the United States in 1999, 2000, or 2001. In 2001, the more experienced H-1B workers may have salary patterns that differ from new recipients in 2001. The definition of education level used to create our four age categories by education level cells is somewhat different for the H-1B beneficiaries as compared to the CPS U.S. workforce estimates. H-1B beneficiary status requires the attainment of a bachelor’s degree or higher (or its equivalent) in the field of specialty. In contrast, the education level recorded in the CPS is the highest degree attained – not necessarily related to any particular occupation. In light of these limitations, caution should be used in interpreting differences found in comparing CPS 2001 median salary estimates and 2001 H-1B beneficiary salaries. 
To obtain information about the factors affecting employer decisions about the employment of H-1B workers, we conducted site visits and telephone interviews with 36 H-1B employers in 6 of the 12 states with the largest number of H-1B petitioners—California, Maryland, New Jersey, New York, Texas, and Virginia—selected for their geographic dispersion. Employers were selected based on their number of H-1B petition approvals and occupations for which they requested H-1B workers in fiscal year 2000. Specifically, we selected a variety of large (100 or more H-1B workers), medium (30-99 H-1B workers), and small (29 or fewer H-1B workers) employers to participate in the study. To obtain a range of occupations for which employers hired H-1B workers, we also selected employers based on whether they hired H-1B workers for either IT-related or non-IT-related positions, such as those in accounting or life sciences. We used fiscal year 2000 BCIS data to select employers because we wanted to capture any changes in H-1B worker staff since the economic downturn. Through interviews with these employers, we collected qualitative information on the factors affecting employers' decisions in recruiting, hiring, and laying off both H-1B workers and U.S. citizen employees. Employer participation in this study was voluntary. We contacted 145 employers, and 25 percent, or 36, of these employers chose to participate; consequently, our results may be biased by this self-selection. In order to provide a broader perspective, we interviewed associations representing highly skilled workers and associations representing employers to obtain their views on how employers make decisions about their U.S. and H-1B workers. We also interviewed Labor WHD officials about the agency's enforcement authority and employer violations of the H-1B program requirements.
To obtain information available on H-1B workers' entries, departures, and changes in visa status, we examined DHS data from current tracking systems. However, we determined that these data had limitations that precluded them from meeting our reliability standards. As a result, we did not include them in our report. For example, we obtained data from DHS on the total arrivals and departures of H-1B workers for fiscal year 2000 and the number of permanent residents who reported previously being H-1B workers immediately before changing status in fiscal years 2000 and 2001. According to DHS officials, these were the most recent automated data available. We also obtained data on the number of H-1B workers who changed from H-1B to other employment-related visa statuses from January 1, 2000 to December 31, 2002. In addition, we spoke with DHS officials about the limitations of these data, data on the occupations of employment-related visa holders, and current tracking systems. We also obtained and reviewed reports on DHS's planned tracking systems. Among the documents we reviewed were the concept of operations for US-VISIT (formerly known as the entry/exit system), a report on system requirements for US-VISIT, the Data Management and Improvement Act Task Force's first annual report, and a report on the case management system that is planned to replace CLAIMS 3. We also interviewed DHS officials who are developing the new systems to learn more about the planned system capabilities. Tables 6 and 7 provide information on the age distribution and salaries of H-1B beneficiaries and U.S. citizen workers. In addition to the above contacts, Danielle Giese and Emily Leventhal made significant contributions to this report. Also, Shana Wallace assisted in the study design and analysis; Mark Ramage assisted in the statistical analysis; Julian Klazkin provided legal support; and Patrick DiBattista assisted in the message and report development.
Information Technology: Homeland Security Needs to Improve Entry Exit System Expenditure Planning. GAO-03-563. Washington, D.C.: June 9, 2003. High-Skill Training: Grants from H-1B Visa Fees Meet Specific Workforce Needs, but at Varying Skill Levels. GAO-02-881. Washington, D.C.: September 20, 2002. Immigration Benefits: Several Factors Impede Timeliness of Applications Processing. GAO-01-488. Washington, D.C.: May 4, 2001. H-1B Foreign Workers: Better Controls Needed to Help Employers and Protect Workers. GAO/HEHS-00-157. Washington, D.C.: September 7, 2000. Immigration and the Labor Market: Nonimmigrant Alien Workers in the United States. GAO/PEMD-92-17. Washington, D.C.: April 28, 1992.
The continuing use of H-1B visas, which allow employers to fill specialty occupations with highly skilled foreign workers, has been a contentious issue between U.S. workers and employers during the recent economic downturn. The H-1B program is of particular concern to these groups because employment has substantially decreased within information technology occupations, for which employers often requested H-1B workers. In light of these concerns, GAO sought to determine (1) what major occupational categories H-1B beneficiaries were approved to fill and what is known about H-1B petition approvals and U.S. citizen employment from 2000-2002; (2) what factors affect employers' decisions about the employment of H-1B workers and U.S. workers; and (3) what is known about H-1B workers' entries, departures, and changes in visa status. H-1B beneficiaries were approved to fill a variety of positions in 2002, and the number of approved petitions (i.e., employer requests to hire H-1B beneficiaries) in certain occupations has generally declined along with the economic downturn, as have U.S. citizen employment levels in these occupations. In contrast with 2000, most H-1B beneficiaries in 2002 were approved to fill positions in fields not directly related to information technology, such as economics, accounting, and biology. Both the number of H-1B petition approvals and U.S. citizens employed in certain occupations, such as systems analysts and electrical engineers, decreased from 2001 to 2002. GAO contacted 145 H-1B employers, and the majority of the 36 employers that agreed to speak with GAO said that they recruited, hired, and retained workers based on the skills needed, rather than the applicant's citizenship or visa status. Despite increases in unemployment, most employers said that finding workers with the skills needed in certain science-related occupations remains difficult. Although some employers acknowledged that H-1B workers might work for lower wages than their U.S.
counterparts, the extent to which wage is a factor in employment decisions is unknown. The Department of Homeland Security (DHS) has incomplete information on H-1B worker entries, departures, and changes in visa status. As a result, DHS is not able to provide key information needed to oversee the H-1B program and its effects on the U.S. workforce, including data on the number of H-1B workers in the United States at any time. GAO also found that DHS's ability to provide information on H-1B workers is limited because it has not issued consistent guidance or any regulations on the legal status of unemployed H-1B workers seeking new jobs. Allowing unemployed H-1B workers to remain in the United States may have implications for the labor force competition faced by U.S. workers. While DHS has long-term plans for providing better information on H-1B workers, policymakers in the interim need data to inform discussions on program changes.
In essence, evaluating and managing the risk from exposure to pesticides involve determining the maximum safe level of exposure to a pesticide and assessing whether expected actual exposure is below this maximum level. Figure 1 shows how these two steps relate to each other. As long as the expected exposure remains lower than the maximum safe exposure, the risk created by use of the particular pesticide is within acceptable limits and usually no action is required. However, once expected actual exposure levels exceed the maximum safe amount, EPA must determine the best ways to reduce exposure below the safe level to mitigate the risk. Many possible risk mitigation actions may be applied, ranging from prohibiting an agricultural or residential use of the pesticide to changing directions for its use (such as spraying less often). These mitigation steps are intended to reduce overall exposure from all sources, including exposure through pesticide residues on foods. FQPA made several fundamental changes in how EPA assesses and manages pesticide exposure risks to humans. Under FQPA, EPA must reevaluate existing tolerances for pesticide residues in foods within 10 years. In doing so, EPA is required to (1) apply an additional 10-fold safety factor in setting tolerances to ensure the safety of foods for children, unless reliable data support a different factor; (2) ensure that there is reasonable certainty that no harm will result to children from aggregate exposure to a pesticide from food, drinking water, and residential sources; and (3) consider available information concerning the cumulative effects on children of pesticides that act in a similar harmful way (known as a common mechanism of toxicity). These FQPA requirements also apply in setting tolerances for new pesticides that are being registered and for new uses of existing pesticides. In reassessing existing tolerances, EPA must give priority to pesticides that appear to pose the greatest risk to public health.
Table 1 provides a brief overview of these requirements. Among the difficulties EPA has faced in implementing FQPA requirements is the fact that the scientific knowledge necessary to accomplish some of these new mandates did not exist in 1996. EPA's pesticide regulatory process has traditionally focused on exposures from food and considered each pesticide separately. Under FQPA the agency has been required to develop the methods and data to perform the new aggregate exposure and cumulative effects assessments, which incorporate exposures from nonfood sources and the combined effects of entire classes of pesticides that were previously considered individually. The first class of pesticides to be affected by cumulative assessment under FQPA will likely be the organophosphates. EPA has identified the organophosphates as a class of pesticides requiring cumulative assessment because they can impair nervous system function by inhibiting the enzyme cholinesterase. The organophosphates are older pesticides, and EPA considers some of them to be more hazardous (although not all older pesticides are necessarily more hazardous). They are of special concern because of their toxicity and widespread use both in agriculture and in homes and gardens, according to the National Research Council's 1993 report, Pesticides in the Diets of Infants and Children. One of them is the single most widely used household pesticide in the United States—chlorpyrifos, which is sold under such names as Dursban and Lorsban. EPA's Office of Pesticide Programs (OPP) has the lead responsibility for implementing the new FQPA requirements within its existing system of pesticide regulation.
This system includes registering or licensing new pesticide products for use in the United States and reevaluating older pesticides to ensure that they meet current health standards and that their risks are adequately mitigated (a process required to reregister the pesticide for continued use). Nearly 900 people organized into nine OPP divisions carry out these activities. Two divisions have the main responsibility for managing pesticide risk assessments: the Registration Division (for assessing new chemicals and new uses of existing chemicals) and the Special Review and Reregistration Division (for assessing most conventional chemical pesticides for reregistration and for reassessing tolerances as required by FQPA). To help conduct these risk assessments, the two divisions use analyses provided primarily by another division, the Health Effects Division. Scientists in the division examine the substantial body of studies and data reports that under regulation are required to be submitted by the pesticide's registrant (that is, the applicant for registration, usually the manufacturer), along with other available data, to ensure the reliability of the studies, assess the toxicity of the pesticide under review, and estimate the risks of exposure. Sources of possible exposure that are considered include water and residential contamination, in addition to the traditional focus on food exposures. The risk assessments are subject to internal peer review by the Health Effects Division staff. Soon after FQPA became law in 1996, EPA began to include consideration of the additional safety factor for children in its pesticide risk assessments, as required.
EPA developed interim guidelines for determining whether this additional safety factor should be applied, and these procedures have evolved over time. Under this approach, an internal review committee of scientists, managers, and other experts within OPP—the FQPA Safety Factor Committee—takes lead responsibility for recommending whether the additional safety factor should be applied, with OPP management making the final decisions. By March 2000, this committee had reviewed and prepared safety factor recommendations for 150 pesticides, and OPP management had made final safety factor decisions for 105 of them. EPA began to consider the additional safety factor in its pesticide reviews and tolerance reassessments soon after FQPA was passed. By October 1996, EPA had drafted an initial version of its approach to applying the additional safety factor, and by January 1997 it had issued a notice providing detailed guidelines for manufacturers on how pesticide reviews for registration and reregistration would proceed, taking the new FQPA requirements into account. In March 1997, EPA published an implementation plan for FQPA that addressed how it would consider the additional safety factor for children. The implementation plan called for applying the following approach: EPA would require the additional 10-fold safety factor for children if the agency lacked complete and reliable data to assess pre- or postnatal toxicity relating to infants and children or if the data indicated pre- or postnatal effects of concern. If data were incomplete, an additional safety factor between 3 and 10 would be applied, with the size of the factor depending on how much information was incomplete and the seriousness of any concerns about effects. If data were sufficient to demonstrate no potential pre- or postnatal effects of concern, no additional safety factor would be applied.
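As a rough illustration only, the implementation plan's decision rules can be paraphrased as a small decision function. The inputs, the missing_fraction score, and the linear interpolation between 3 and 10 are our own simplifications, not EPA's method; the agency's actual decisions rest on case-by-case expert judgment, not a formula.

```python
def additional_safety_factor(data_complete: bool,
                             effects_of_concern: bool,
                             missing_fraction: float = 0.0) -> float:
    """Sketch of the 1997 implementation-plan rules for the extra
    FQPA children's safety factor. Hypothetical simplification:
    missing_fraction is an illustrative 0-1 score of how much
    required toxicity data are absent."""
    if effects_of_concern:
        return 10.0  # pre- or postnatal effects of concern: full 10-fold
    if not data_complete:
        # incomplete data: a factor between 3 and 10, scaled here
        # (arbitrarily) by how much information is missing
        return 3.0 + 7.0 * min(max(missing_fraction, 0.0), 1.0)
    return 1.0  # data sufficient, no effects of concern: no extra factor

print(additional_safety_factor(False, False, 0.5))  # → 6.5
```

The point of the sketch is the branching structure: effects of concern drive the full factor, data gaps drive an intermediate factor, and complete reassuring data remove the factor entirely.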
To make recommendations to OPP management about applying the safety factor in individual pesticide risk assessments, OPP established a Safety Factor Committee in February 1998. This internal peer review group is composed of risk assessors (including toxicologists and exposure experts) from OPP science divisions and risk managers (staff responsible for risk mitigation activities) from the divisions that regulate most of the chemicals. The committee's procedures call for systematically reviewing both toxicology and exposure data for each chemical, focusing on two overriding concerns: (1) uncertainties in the data used for the toxicology and exposure assessments (data gaps) and (2) evidence of increased susceptibilities in infants and children (the potential for pre- and postnatal toxicity). Examples of the types of questions considered in these subject areas are presented in table 2. Committee members are encouraged to apply scientific judgment as well as qualitative and quantitative data in reaching consensus on whether to apply, reduce, or remove the safety factor. The committee considers written reports and oral presentations and seeks to reach a consensus in each case on the FQPA safety factor it will recommend to OPP managers. As of March 2000 the committee had reviewed 150 pesticides and submitted safety factor recommendations to OPP managers. In reviewing the committee's justifications for its recommendations, we found that when the committee identified both toxicology data gaps and evidence of increased susceptibility in children, the pesticides were most likely to receive a recommendation for a 10-fold safety factor. When there was no evidence of increased susceptibility, but incomplete data, a safety factor was also recommended, but it was less than 10 when the data suggested that a lower safety factor was sufficient. Pesticides with neither increased susceptibility nor data gaps usually received a recommendation for no additional safety factor.
While EPA has incorporated consideration of the new safety factor in its pesticide reviews, it has continued efforts to refine its policies on applying the safety factor. A formal policy document on the safety factor has been developed (it is not a regulation), which discusses in detail the legal framework, overall approach, and related toxicology and exposure issues. It is much more extensive than the guidelines under which the Safety Factor Committee has been operating, but is consistent with them. An EPA official explained that the two documents serve somewhat different purposes, with the policy document providing comprehensive discussion of the issues and the operating procedures translating those policies into practical guidelines. The safety factor policy document was released for public comment in 1999. As of July 2000, an agency official told us that EPA was still assessing the comments it had received, and the document had not yet been issued in final form. OPP senior managers make the final decisions about whether to apply the additional safety factor for children, based on the Safety Factor Committee’s recommendations and other considerations. As of the end of March 2000, OPP had made safety factor decisions for 105 of the 150 pesticides the Safety Factor Committee had reviewed. OPP determined that a safety factor to protect children, in addition to the routinely applied 100-fold safety factor, was necessary in 49 cases and that available evidence was sufficient to show that an additional safety factor was not required in 56 cases (see table 3). For the organophosphate pesticides, OPP decided to apply the additional safety factor in 24 cases and not to apply it in 15. In most cases, OPP managers adopted the committee’s recommendations for the level of safety factor to protect children, but in some cases they increased those levels. In 19 of the 105 decisions, the factor was increased (made more protective) to account for other types of uncertainties. 
OPP officials said these uncertainties most often related to serious data gaps or special concerns about the severity of a pesticide’s health effects. In 5 cases, OPP increased the combined safety and uncertainty factors to greater than 10-fold. To provide an indication of whether EPA was following its procedures, we selected three high-risk pesticides of different types (including one organophosphate) that had gone through the safety factor review process and asked a consultant with expertise in environmental toxicology to review the process and results. These three examples are described in appendix II. The consultant concluded that in all three cases EPA’s actions were thorough and its conclusions reasonable. EPA has interim procedures in place for considering aggregate exposure in its pesticide reviews and tolerance reassessments. These procedures incorporate available data on exposures from drinking water and residential uses, along with food exposures. Efforts are being made to improve available data on nonfood exposures and the methods for estimating combined exposures to individual pesticides from all sources. Efforts to consider the cumulative effects of exposure to groups of similar pesticides have not progressed as far as those for aggregate exposure. EPA has adopted policies for identifying classes of pesticides that have a common mechanism of toxicity, but methods for conducting cumulative assessments for these classes of pesticides are still under development. As a result, EPA has not yet considered cumulative effects in its pesticide risk assessments. In the case of the organophosphates—and chlorpyrifos in particular—the potential effects of aggregate exposure and cumulative assessments, in terms of needed mitigation steps, are beginning to emerge. 
Although formal policy guidance for performing aggregate exposure assessments has not yet been issued in final form, EPA has interim procedures in place for considering aggregate exposure using available data and methods. Traditionally, EPA has assessed the risk of food use pesticides on the basis of estimated exposure from all foods containing residues of the pesticide. Under FQPA, EPA must also take into account the amount of exposure to each pesticide that is likely to occur from drinking water and from uses in and around the home. Common residential uses include lawn and gardening uses, pet applications, and roach and termite treatments. Not all pesticides have residential uses, but for those that do, adding those types of exposures to food and water exposures might push the total beyond the maximum safe level of exposure, leading to a need for mitigation steps and possible changes in tolerances. Because of its traditional focus on pesticide exposures from foods, EPA's data and methods for estimating food exposures are relatively highly developed, but for most pesticides the agency has lacked the data and methods to estimate nonfood exposures from drinking water and residential uses. Moreover, EPA has lacked a method for combining exposures from these sources to estimate aggregate exposure. While such data and methods are being developed, the agency is using an interim approach that relies on available data and conservative scientific judgments to protect health; that is, in most cases the high-end estimates for drinking water and residential exposures are added to the estimated food exposure. Estimating exposures from pesticides used in and around the home has been a particular challenge for EPA. In a working paper on assessing these residential exposures, EPA stated that it relies primarily on the scientific literature and industry sources because it lacks data for most pesticides to characterize exposures from nonfood sources.
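The interim approach described above — adding high-end estimates for each exposure route and comparing the total to the maximum safe level — can be sketched as follows. All of the numbers are hypothetical, not actual EPA figures, and expressing exposures in mg/kg/day is only a conventional choice for the illustration.

```python
def aggregate_exposure_check(food, water, residential, max_safe_level):
    """Interim-style aggregate screen: sum high-end estimates for each
    exposure route (all in the same units, here mg/kg/day) and flag
    whether the total exceeds the maximum safe exposure level,
    which would trigger consideration of mitigation steps.
    Values used below are hypothetical."""
    total = food + water + residential
    return total, total > max_safe_level

total, needs_mitigation = aggregate_exposure_check(
    food=0.0006, water=0.0001, residential=0.0008, max_safe_level=0.0010)
print(f"total = {total:.4f} mg/kg/day; mitigation needed: {needs_mitigation}")
```

In this hypothetical case the residential route alone nearly reaches the limit, which mirrors why adding nonfood exposures to an existing food-only assessment can push a pesticide over the maximum safe level even when each route looks acceptable in isolation.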
Many types of needed data still are not available. For example, an official told us that results from an effort initiated in 1995 to collect data on outdoor residential exposures (mainly from lawn chemicals) are only now coming in, and other efforts to collect data on indoor residential exposures and commercial pesticide applications are still under way. Methods for using such data to estimate residential exposures are being developed, which, when approved by EPA's Scientific Advisory Panel, will apply to both aggregate exposure and cumulative assessments. EPA intends that ongoing development and refinement will follow, as the agency gains experience with the methods. Developing ways to assess both aggregate exposure and cumulative effects has been more difficult and time-consuming than EPA anticipated, but developing approaches to cumulative effects assessment has proved particularly difficult. Experts in toxicology, exposure assessment, and risk assessment methodologies have indicated that the science necessary to successfully factor in these types of exposures, especially for cumulative effects, is a work in progress. Beginning in 1997, EPA contracted with the International Life Sciences Institute (ILSI) to convene workshops that EPA hoped would bring together representatives of industry and academia and other interested parties to participate in developing the new exposure assessment policies required under FQPA. An ILSI group worked on aggregate exposure assessment methods in 1997 and 1998, and another workshop reported on a framework for cumulative risk assessment in 1999. Because of the complexity of the scientific issues involved, EPA has included considerable review by experts both inside EPA (including staff and advisory committees) and in the academic and research communities in its development of ways to measure aggregate exposure and cumulative effects.
According to one EPA official, while this peer review likely provided benefits, it also slowed the process. The review has come from such groups as EPA’s Scientific Advisory Panel and the Tolerance Reassessment Advisory Committee, which also have provided review of policies related to other aspects of implementing FQPA. The Scientific Advisory Panel, which has been the main source of ongoing peer review, now meets about every 2 months for 4 to 5 days, the official told us. All panel meetings are open to the public, industry, and environmental groups. Obtaining review from the Tolerance Reassessment Advisory Committee required substantial time and resources, according to the official, because many background and policy documents had to be prepared. To put cumulative assessment in place, EPA first needed methods to identify groups of pesticides that act on the body in similar ways to cause adverse health effects. A January 1999 document laid out the principles EPA applies to determine if a group of pesticides acts through a common mechanism of toxicity. Using these principles, EPA has identified the organophosphates as one such group of pesticides because they impair nervous system function by inhibiting the enzyme cholinesterase. The next step, developing the methods for actually conducting cumulative assessments, has been more difficult and time-consuming, and the first draft of the methodology was not released for public comment until June 2000. EPA has not yet conducted a cumulative assessment. In addition to the need for an acceptable methodology, a cumulative assessment requires aggregate exposure assessments for each of the individual pesticides in the cumulative effects group. While aggregate exposure assessments are in process for the 39 organophosphates, they are not all complete. 
Nonetheless, EPA staff expect to present a pilot test of the proposed cumulative assessment methodology to the Scientific Advisory Panel in September 2000, using a case study of 25 organophosphate pesticides (including 7 chemicals with residential uses and 2 with water residues). The pesticides will not be named, to encourage focus on the assessment process, but the data will be real and fairly complete. EPA currently lacks the methods to consider cumulative effects associated with classes of pesticides that have a common mechanism of toxicity, such as the organophosphates, but the potential effects of aggregate exposure assessment can be seen in the example of chlorpyrifos (sold under such names as Dursban and Lorsban), a major organophosphate pesticide that has many food and residential uses. Chlorpyrifos is found in many insect sprays and is the single most widely used household pesticide in the United States. It is also used by many commercial growers. In this instance, EPA has applied an additional 10-fold safety factor to protect children and has assembled considerable data about aggregate risk from the many sources of possible exposure to chlorpyrifos. At a technical briefing on June 8, 2000, EPA announced agreement with the pesticide’s manufacturer to eliminate all home, lawn, and garden uses of the pesticide, to eliminate the majority of termite control uses, and to significantly lower allowable pesticide residues on several foods regularly eaten by children, such as apples, grapes, and tomatoes. These mitigation steps are intended to reduce expected aggregate exposure below the maximum safe level. Whether actions similar to those for chlorpyrifos will result from considering aggregate exposures to other less widely used organophosphate pesticides is unknown. 
However, when EPA conducts a cumulative assessment, combining aggregate exposures for all the pesticides in this group, additional mitigation steps may be necessary to protect children’s health. EPA has made some progress in reassessing existing tolerances, as required by FQPA, but relatively few of these allowable limits for pesticide residues have changed as a result of considering the law’s new requirements. As of April 2000, EPA reported that it had reassessed nearly 3,500 tolerances for about 300 pesticides. However, nearly half of these tolerance reassessments did not require consideration of the additional safety factor for children or aggregate exposure, because the manufacturer agreed with EPA to voluntarily eliminate the tolerances and withdraw the pesticides from those uses. Most of the other reassessed tolerances were unchanged. Although EPA has given priority to reassessing tolerances for high-risk pesticides, reassessments for the high-risk organophosphate pesticides cannot be completed until a cumulative assessment has been done for the group. FQPA requires EPA to reassess all food use tolerances that were in effect prior to passage of the law in August 1996—9,721 tolerances—to ensure that the maximum residue levels they allow reflect any changes that might result from the act’s new protections. EPA must complete these tolerance reassessments within 10 years of FQPA’s enactment on a specific schedule: one-third by August 1999, two-thirds by August 2002, and the rest by August 2006. According to EPA, 3,290 tolerances (34 percent of the total) had been reassessed by August 1999, the date of its first deadline. EPA also announced that 2,178 (66 percent) of those reassessed tolerances were for pesticides in the highest risk group. By April 2000, when we conducted our analysis, the number of tolerance reassessments stood at 3,471, or 36 percent of the total. 
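The reported progress can be checked against FQPA's statutory schedule with simple arithmetic; all counts below come from the figures above, and only the calculations are new.

```python
# Checking reported reassessment progress against FQPA's schedule
# (one-third of the 9,721 pre-1996 tolerances due by August 1999).
total_tolerances = 9721      # tolerances in effect before FQPA's August 1996 enactment
by_aug_1999 = 3290           # reassessed by the first deadline
high_risk_among_those = 2178 # of those, in the highest risk group
by_apr_2000 = 3471           # reassessed by the time of GAO's analysis

print(round(100 * by_aug_1999 / total_tolerances))       # 34 (meets the one-third goal)
print(round(100 * high_risk_among_those / by_aug_1999))  # 66
print(round(100 * by_apr_2000 / total_tolerances))       # 36
```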
Our analysis of the 3,471 tolerances that EPA counted as reassessed in April 2000 showed that nearly half of them—1,638 tolerances, or 47 percent—did not involve consideration of the new FQPA requirements. Most of these (1,257) were eliminated or canceled by EPA, with the manufacturer’s agreement, before risk assessments for the associated pesticides were completed. Tolerance reassessments considered to be voluntary removals or cancellations generally fell into two categories: Tolerance no longer needed. When a particular use (for example, on apples) has been removed from the list of registered uses for a pesticide, a tolerance is no longer needed for that use. There were cases in which tolerances for previously removed uses had not yet been canceled, and if they were not needed for imported foods, EPA completed the cancellation process. Manufacturer withdraws support. Manufacturers may withdraw support for certain tolerances for a variety of reasons. For example, they may determine that the costs of continued registration of the pesticide for that use—including the costs of additional testing and registration fees—are not justified by market conditions. An EPA official told us that in a number of these cases, risk concerns that the agency expressed about the associated pesticide contributed to the manufacturer’s decision to drop the tolerance. Fifty-three percent of the tolerance reassessments (1,833) were based on pesticide risk assessments that considered aggregate exposure and the additional safety factor for children. Most of these tolerance reassessments—1,421 tolerances, or 77.5 percent—resulted in no change (see table 4). The remainder of the tolerances were revoked (eliminating the use), lowered (allowing less residue), or raised (allowing more residue). 
EPA officials indicated that only a small percentage of tolerances were lowered, even after the additional safety factor and aggregate exposures were considered, because historically the agency has set tolerances conservatively. As a result, they said, many tolerances were already at levels that would pass FQPA’s more stringent requirements. Likewise, EPA officials told us that their decisions to raise 175 tolerances (that is, to allow an increased concentration of pesticide residue to remain on the food) do not represent an unacceptable risk to children or the general population. Instead, the raised tolerances reflect new data from additional studies or field trials that allowed EPA to perform more refined analyses of pesticide exposure and risk. FQPA required EPA to give priority to reassessing tolerances for high-risk chemicals, and in August 1997 the agency published a Federal Register notice that divided the pesticides with tolerances requiring reassessment into three priority groups by level of risk. The highest priority group, Group 1, which EPA considers to be the highest risk, included the organophosphates, probable cancer-causing chemicals, and other pesticides of particular concern. This group accounts for the largest proportion, about 57 percent, of all tolerances that need to be reassessed. Of the 3,471 tolerances that EPA has counted as reassessed through April 2000, two-thirds (2,286, or 66 percent) were for pesticides in Group 1. This represents reassessment of 41 percent of all tolerances for the high-risk pesticides. Less than 30 percent (483 of 1,691) of tolerances for the high-risk organophosphate pesticides were counted as reassessed, and most of these were canceled voluntarily. 
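The percentage breakdown of the tolerances counted as reassessed through April 2000 can be reproduced from the reported counts:

```python
# Reproducing the percentage breakdown of the 3,471 tolerances counted as
# reassessed through April 2000. All counts come from the report text.
total_reassessed = 3471
no_fqpa_factors = 1638    # eliminated/canceled without the new FQPA considerations
with_fqpa_factors = 1833  # based on risk assessments with the safety factor and aggregate exposure
unchanged = 1421          # reassessments that resulted in no change
group1 = 2286             # reassessed tolerances in the highest-risk group (Group 1)
all_tolerances = 9721
group1_share = 0.57       # Group 1 is about 57 percent of all tolerances

print(round(100 * no_fqpa_factors / total_reassessed))        # 47
print(round(100 * with_fqpa_factors / total_reassessed))      # 53
print(round(100 * unchanged / with_fqpa_factors, 1))          # 77.5
print(round(100 * group1 / total_reassessed))                 # 66
print(round(100 * group1 / (group1_share * all_tolerances)))  # 41
```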
Even though safety factor decisions have been made for 39 organophosphates and risk assessments including aggregate exposure are in process, EPA has been unable to finalize the pesticide risk assessments and their associated tolerance reassessments, because the individual reviews must be combined in a cumulative assessment for all of the organophosphates. FQPA brought substantial changes to EPA’s pesticide regulatory process, and these changes are still works in progress. Some of the tools needed by EPA to implement FQPA were not available when the law was enacted. EPA set about developing the necessary procedures, methodologies, and data almost immediately. The agency has adopted a series of interim approaches while specifying, with the assistance of peer reviewers, more refined permanent methods, which are now nearing completion. EPA has made progress in reviewing pesticides and reassessing tolerances since 1996, but so far relatively few tolerances have changed as a result of considering the new FQPA requirements. While it is too early to tell what the future effects of FQPA may be, the next few years could bring substantial changes, as the organophosphates and other groups of high-risk pesticides are reconsidered in the light of their aggregate exposures and cumulative effects. It appears that EPA’s recent decision on chlorpyrifos, for example, will result in major changes in the uses of that pesticide that are intended to protect people, and children in particular, from potentially adverse health effects. We provided a draft of this report for comment to EPA, which supplied technical comments that we incorporated where appropriate. We will send copies of this report to the Honorable Carol Browner, EPA Administrator, appropriate congressional committees, and other interested parties. We will also make copies available to others on request. If you or your staffs have any questions about this report, please call me at (202) 512-7119. 
Major contributors to this report are listed in appendix III. To examine how EPA is making decisions about applying the new safety factor for children, we obtained documentation for each FQPA safety factor determination. This consisted of three documents: (1) a summary log of recommendations made by the Safety Factor Committee as of March 21, 2000, with justifications for these recommendations, (2) a list of final decisions made by OPP managers for regulatory purposes, and (3) a “Safety Factor Report” dated March 22, 2000, which explains the differences between the Safety Factor Committee recommendations and the final safety factor decisions. We synthesized information from these lists with other documentary evidence obtained from EPA, as well as information from EPA’s Tolerance Reassessment Tracking System (discussed below), to summarize the results of FQPA safety factor decisions made to date. In addition, we obtained detailed documentation and support material for three pesticides that were reviewed by the Safety Factor Committee and asked a consultant with expertise in environmental toxicology, H.B. Matthews, Ph.D., Society of Toxicology Congressional Fellow, to review those cases in depth. We did this to provide specific examples of EPA processes and their results and to determine whether EPA’s processes for assigning FQPA safety factors seemed reasonable. To determine what progress EPA has made in considering aggregate exposure and cumulative effects, we obtained documents showing the development of policies and procedures, including numerous interim drafts, and interviewed EPA officials regarding the history behind the development of these policies. We also held numerous discussions with EPA officials to determine the extent to which EPA has assessed aggregate exposure and cumulative effects in the pesticides reviewed since passage of FQPA, and the schedule for the completion and implementation of the policies. 
Information on EPA’s plans, schedule, and progress in reassessing the group of organophosphate pesticides is available on the OPP Web site at www.epa.gov/pesticides/op/status.htm. Finally, to identify what progress has been made in reassessing tolerances, we obtained EPA’s Tolerance Reassessment Tracking System database current to April 11, 2000. EPA created this database to track the agency’s progress in meeting the deadlines associated with FQPA’s requirement to reassess all tolerances. The tracking system contains extensive information on all permanent pesticide tolerances registered as of August 1996, as well as data on each pesticide associated with the tolerances. In order to examine the agency’s use of the FQPA safety factor in assessing risk for each pesticide, we added to the database a field containing the specific safety factor decisions for each chemical. Because most pesticides in the tracking system have more than one tolerance, we associated each pesticide’s safety factor with all tolerances for that chemical. We used the tracking system to identify groups of tolerances and pesticides with specific attributes by creating a series of database filters. For example, by selecting for tolerances EPA counted as reassessed, which had an FQPA safety factor decision, but were not reassessed administratively through notices in the Federal Register, we identified those tolerances that were reassessed as the result of a complete pesticide risk assessment, including an FQPA safety factor decision and consideration of aggregate exposures. Similarly, we used various criteria to determine other attributes of the tolerances EPA has counted as reassessed, such as pesticide type (organophosphate, carbamate, organochlorine, and so on), risk priority group (Group 1, 2, or 3), and the resulting tolerance reassessment actions (raise, lower, same, or revoke). We asked a consultant with expertise in environmental toxicology, H.B. 
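The database-filter approach described above can be sketched with a simplified, in-memory stand-in for the tracking system. The field names and records below are illustrative assumptions, not the actual schema of EPA's Tolerance Reassessment Tracking System.

```python
# Hypothetical sketch of the filtering methodology described in the text.
# Each record stands in for one tolerance in the tracking system.
tolerances = [
    {"pesticide": "A", "reassessed": True, "has_sf_decision": True,
     "admin_only": False, "type": "organophosphate", "group": 1, "action": "same"},
    {"pesticide": "B", "reassessed": True, "has_sf_decision": False,
     "admin_only": True, "type": "carbamate", "group": 1, "action": "revoke"},
    {"pesticide": "C", "reassessed": False, "has_sf_decision": False,
     "admin_only": False, "type": "organochlorine", "group": 2, "action": None},
]

# Filter analogous to the example in the text: counted as reassessed, with an
# FQPA safety factor decision, and not reassessed administratively through
# Federal Register notices -- i.e., reassessed via a complete risk assessment.
full_risk_assessment = [t for t in tolerances
                        if t["reassessed"] and t["has_sf_decision"]
                        and not t["admin_only"]]
print([t["pesticide"] for t in full_risk_assessment])  # ['A']
```

Further filters on `type`, `group`, and `action` would partition the reassessed tolerances by pesticide class, risk priority group, and reassessment outcome in the same way.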
Matthews, Ph.D., Society of Toxicology Congressional Fellow, to help us review three pesticide cases to provide detailed examples of how OPP considers the new FQPA requirements in its pesticide risk assessments. In these examples, we focused primarily on the work of the FQPA Safety Factor Committee, but we considered other aspects of OPP’s review and decision-making process as well. We selected three pesticides for review, using the following criteria: the pesticides selected must have received a final safety factor decision; must have tolerances to be reassessed under FQPA; and must be high-risk Group 1 pesticides of different types, including an organophosphate; preferably should have multiple uses (tolerances), including nonfood residential uses requiring aggregate exposure assessment; and must be likely to affect children through food and other exposures. The pesticides we selected were dicofol, an organochlorine; methomyl, a carbamate; and phosmet, an organophosphate. We obtained documents to describe the OPP review process for these three pesticides from the OPP officials who manage the FQPA Safety Factor Committee. We provided these documents to our consultant for his review, which focused on such basic questions as the following: As an overall conclusion, based on the input reports and committee deliberations, does the Safety Factor Committee’s recommended safety factor appear to be reasonable and reasonably well justified? Did the committee follow its own review criteria in a systematic way? Did the committee adequately justify its decisions on (1) data completeness and reliability (data gaps) and (2) evidence of increased susceptibility in children? How did the committee consider aggregate exposures? Was cumulative assessment addressed in any way? Our expert reviewer prepared comments addressing these questions for each of the pesticide examples, and in some cases provided additional information and opinion based on his own knowledge and experience. 
Those comments have been combined with a description of the pesticide and a summary of its review history in the sections below. Dicofol is an organochlorine pesticide in EPA’s high-risk Group 1. It is used primarily on cotton, apples, and citrus crops and has nonresidential uses on lawns and ornamental shrubs (for example, it may be used by professional applicators on golf courses and landscaping, but it may not be used by homeowners). Dicofol was first registered as a pesticide in the United States in 1957. There are 50 current food use tolerances registered for dicofol, all of which have been reassessed under FQPA requirements. Dicofol was reviewed at one of the first meetings of the FQPA Safety Factor Committee, on March 30, 1998. Information was excerpted from the pesticide’s reregistration document, which was nearly complete, under the heading “Toxicological Considerations for FQPA Safety Factor Selection.” The reregistration document’s “FQPA considerations” section presented the following conclusions: (1) the data provided no indication of increased sensitivity in young animals, but (2) a developmental neurotoxicity study was required, but not available (a data gap), because dicofol is an organochlorine, is structurally related to DDT (which is neurotoxic), and is considered an endocrine disruptor. A safety factor of 3 was recommended. The main concern in terms of exposure was for occupational users. However, at that time there were two homeowner uses, and no data were available to assess residential exposure. Because of this lack of data, the reregistration document recommended that residential use of dicofol be discontinued. The document stated that EPA did not have the methods or the data to consider potential cumulative effects from dicofol and other members of the organochlorine class of pesticides. 
The Safety Factor Committee reviewed the information from the reregistration document for dicofol in March 1998 but was unable to reach a consensus decision because it was concerned that its recommendation could set a precedent for other endocrine disruptors. After seeking guidance from OPP division directors, the committee recommended a 3-fold safety factor. OPP managers accepted this recommendation, and the revised (July 2, 1998) and final (November 1998) dicofol reregistration documents issued by EPA included the 3-fold FQPA safety factor. Reasons given were as follows: (1) aggregate exposure concerns were reduced because the two residential uses were dropped, (2) cumulative effects from dicofol and other organochlorines could not be considered, (3) strong concerns regarding occupational exposure remained, and (4) significant risk mitigation actions were required, including voluntary cancellation of some uses by the registrant. Eligibility for reregistration was contingent on the results of a dermal toxicity study and a dislodgeable foliar residue study (to be submitted). Our reviewer noted that dicofol is an old pesticide, which is only moderately toxic and not extremely persistent in the environment, with no remaining residential uses. It raises concerns because of its structural relation to DDT. Having considered the documents described above, he reported that EPA’s review of dicofol was thorough, although this was one of the first pesticides the Safety Factor Committee reviewed and its procedures were not as explicit as they later became. In the reviewer’s judgment, data for dicofol appeared to be complete, with the only identified data gap being the need for a developmental neurotoxicity study; and the decision that there was no evidence of increased susceptibility in children was well justified. He concluded that EPA and the Safety Factor Committee responded appropriately to FQPA requirements to consider aggregate exposures for dicofol. 
EPA was not prepared to consider cumulative effects at that time. The reviewer noted that because pesticides related to dicofol have been removed from the market, cumulative effects from dicofol and other organochlorines would be effectively limited. Methomyl is a carbamate insecticide, also in EPA’s high-risk Group 1. It has a wide variety of registered uses on field, vegetable, and orchard crops, turf farms, livestock quarters, and commercial premises and refuse containers. It was first registered in the United States in 1968. There are 80 current food use tolerances for methomyl but no homeowner residential uses. Ornamental and greenhouse uses were canceled voluntarily during the course of the pesticide’s review in 1998. Tolerance reassessment under FQPA has been completed. Methomyl was reviewed by the Safety Factor Committee on April 6, 1998. The committee received a report from the internal toxicology review committee (known as the Hazard Identification Assessment Review Committee), along with a document known as the “FQPA Responses,” which was prepared to address the committee’s specific questions in its standard operating procedures. The toxicology review committee’s report reviewed the toxicology database, including a new study submitted by the registrant (21-day dermal toxicity in rabbits). Data gaps were identified: acute and subchronic neurotoxicity studies were required but not available. Consequently, data on cholinesterase inhibition, behavioral effects, and nervous system effects by the pesticide were missing. The neurotoxicity studies were required because methomyl is a carbamate (a class of pesticides with known neurotoxic effects) and neurotoxic effects from methomyl were seen in dogs and rabbits. The requirement for a developmental neurotoxicity study was noted as “reserved,” pending the results of the acute and subchronic studies. 
The Safety Factor Committee decided to recommend a 3-fold safety factor for methomyl, based on (1) no indication of increased susceptibility in children and (2) data gaps—specifically, the lack of acute and subchronic neurotoxicity studies. The committee reviewed exposure data for food and drinking water, but not for residential exposures because there were no residential uses. Data quality was considered generally high, and realistic assumptions (conservative models) were used. The final reregistration document for methomyl, dated December 1998, reflected the 3-fold FQPA safety factor decision. Because there were no homeowner uses, the aggregate exposure assessment did not consider residential exposures. However, because methomyl is produced when another pesticide, thiodicarb, degrades, the aggregate risk assessment considered methomyl residues from applications of both methomyl and thiodicarb. The document also states: “The Agency does not have, at this time, available data to determine whether methomyl has a common mechanism of toxicity with other substances or how to include this pesticide in a cumulative risk assessment. For the purposes of this assessment, therefore, the Agency has not assumed that methomyl has a common mechanism of toxicity with other substances.” On the basis of his review of the EPA documents, our reviewer felt that a 3-fold safety factor was appropriately conservative for methomyl. Input data were provided to address the Safety Factor Committee’s review questions, and the committee’s report indicated a thorough and careful review. That report provided justification for reducing the 10-fold safety factor to 3-fold by referring to sections of the source reports. Regarding the data gap for acute and subchronic neurotoxicity studies in rats that was the reason for the 3-fold safety factor, the reviewer noted that the gap did not seem to be a pressing need, because similar data were available for two other species, dogs and rabbits. 
His opinion was that if EPA had been willing to accept the data for dogs and rabbits as an alternative to the rat data, the safety factor might have been removed. The lack of a cumulative risk assessment in this case was not likely a problem, the reviewer said, because of the short half-life of the chemical, meaning that it would dissipate rapidly. Phosmet is a member of the largest class of insecticides, the organophosphates. It is a broad-spectrum insecticide that causes systemic toxicity by inhibiting cholinesterase, and there also have been concerns about carcinogenicity. Phosmet is marketed for agricultural uses, nonagricultural occupational uses, and homeowner uses to control pests, including moths, beetles, weevils, lice, flies, fleas, and ticks. It is used on a variety of fruit and vegetable crops (especially apples and peaches), tree crops, nut trees, cotton, and ornamentals and in forestry. In addition, phosmet is used for direct animal treatments on cattle, swine, and dogs (flea and tick treatments). There are 43 current food use tolerances for phosmet, of which 1 has been revoked voluntarily during the course of the review. The remainder have yet to be reassessed, pending the required cumulative assessment for the organophosphates. The Safety Factor Committee has reviewed phosmet twice since the law passed in 1996. The first review was through what OPP calls its “OP Marathon Meetings,” in which all of the organophosphate pesticides, including phosmet, were reviewed together. The Safety Factor Committee held its marathon meeting on June 15 and 16, 1998. 
In the case of phosmet, it concurred with the findings of the May 1998 toxicology review committee marathon meeting and recommended a 3-fold FQPA safety factor, based on data gaps noted by the toxicology group. Specifically, the marathon meetings found that there was no evidence of increased susceptibility to phosmet in young animals, but there were data gaps for two types of neurotoxicity studies, and the requirement for a developmental neurotoxicity study would depend on the results of those other studies. Because no data were available to assess neurotoxicity, cholinesterase inhibition, behavioral effects, or neuropathology for phosmet, the 3-fold safety factor was recommended. The second review of phosmet took place in summer 1999 after the registrant submitted new data, including the acute and subchronic neurotoxicity studies. On the basis of the results of these studies and additional information from the registrant, the toxicology review committee determined that the developmental neurotoxicity study was not required. The Safety Factor Committee then concluded that there were no remaining data gaps and no evidence of increased susceptibility. The committee reported that adequate actual data, surrogate data, and/or modeling outputs were available to satisfactorily assess dietary food and residential exposures and to provide a screening-level drinking water exposure assessment. Consequently, the committee recommended that the FQPA safety factor be removed. OPP’s regulatory decision agreed with the Safety Factor Committee that no additional FQPA safety factor was needed for phosmet, and the reregistration document was revised to reflect the new data and decisions (see the version of October 7, 1999, and the February 9, 2000, public release version). 
The risk assessment for phosmet concluded that (1) dietary food and water risks are not a concern, even when combined, from either acute or chronic exposure, (2) risks from residential exposure to treated dogs and garden uses are a concern, especially for toddlers exposed to treated dogs, and (3) there also are concerns for workers handling the pesticide and regarding some ecological hazards (to birds, water, and honey bees). This latest revised risk assessment describes the risks associated with use of phosmet alone, and it may be revised again when EPA has conducted the necessary cumulative assessment for the organophosphates. Our reviewer concluded that through repeated reviews and reports, EPA’s consideration of the data on phosmet has been very thorough. The Safety Factor Committee reviews followed the standard operating procedures systematically, and decisions on the completeness and reliability of the data and the lack of data gaps were adequately justified. The committee’s conclusion that there was no apparent developmental or reproductive toxicity also was adequately justified and supported removal of the 10-fold safety factor. Our reviewer stated that phosmet is rapidly metabolized and degrades quickly under most environmental conditions; therefore it is seldom detected in food or water, and exposures usually are very low. This helps explain why aggregate exposure to phosmet is low. In our reviewer’s opinion, calculated exposure of children resulting from contact with treated dogs appeared to be conservative. Regarding the postponed cumulative risk assessment, he noted: “Having worked on the problem of cumulative risk assessment, I realize that issues relating to cumulative risks present a very complex and controversial problem—a problem that is not going to be easily solved. 
And the solution, when it comes, is not likely to be to anybody’s complete satisfaction.” Incidents of human poisoning from phosmet apparently are relatively common, because our reviewer said that phosmet accounts for the largest number of residential exposures to pesticides that result in treatment in a health care facility. But he observed that these incidents are not an indication of unusual toxicity as much as they are a result of incorrect use. Almost all of the poisoning incidents resulted from failure to properly dilute a concentrated formulation of phosmet for use as flea dip treatment for dogs. Regarding the question of whether phosmet is likely to cause cancer, our reviewer noted that the evidence is limited to reports of increased incidences of tumors in mouse livers (which were marginally statistically significant) and that there are differences of opinion as to the relevance of increased mouse liver tumors to human health risks. The following staff made key contributions to this work: Ellen M. Smith, Matthew W. Byer, Stanley G. Stenersen, and Katherine M. Iritani. 
Pursuant to a congressional request, GAO provided information on the Environmental Protection Agency's (EPA) efforts to reduce children's exposure to pesticides by implementing the requirements of the Food Quality Protection Act (FQPA), focusing on the: (1) approach EPA has developed for making decisions about applying the new safety factor; (2) progress that has been made in considering aggregate exposure and cumulative effects; and (3) progress that has been made in reassessing tolerances for pesticide residues. GAO noted that: (1) when FQPA became law in 1996, EPA immediately began efforts to consider the additional safety factors for children, using available methods and data in an interim approach that has evolved over time; (2) an internal committee now recommends whether to apply the additional safety factor in pesticide reviews, based on data completeness and evidence of increased susceptibility in children; (3) using this approach, EPA has made decisions about applying the additional safety factor for 105 of the more than 450 pesticides to be reassessed; (4) it determined that an additional safety factor was necessary in 49 cases and not necessary in the remaining 56 cases; (5) EPA also had interim procedures in place for considering aggregate exposure, which incorporate available data on exposures from food, drinking water, and residential uses; (6) data on nonfood exposures have been lacking for most pesticides, however, and methods for estimating and combining such exposures are still being developed; (7) EPA has not yet begun to consider cumulative effects in the regulatory process; (8) it has determined that one group of pesticides that is considered to be high-risk, called the organophosphates, will need to be assessed for cumulative effect, but methods for doing so are still under development; (9) potential effects of considering aggregate exposure and cumulative effects are beginning to emerge; (10) on June 8, 2000, after applying the additional 
safety factor and conducting an aggregate exposure assessment for chlorpyrifos, EPA announced a need to substantially reduce children's exposure to this pesticide by reducing its use on foods frequently eaten by children and by eliminating nearly all household uses; (11) EPA has reported progress in reassessing existing tolerances for pesticide residues on foods, but relatively few of these allowable limits have changed as a result of considering FQPA's new requirements; (12) FQPA called for reassessing one-third of all existing tolerances by August 1999--a goal EPA met; (13) GAO analyzed a larger group, those counted as reassessed through April 2000; (14) for about 47 percent of these tolerances, the manufacturer agreed with EPA to eliminate the tolerances and withdraw the pesticides from those uses, before the additional safety factor or aggregate exposures were considered; (15) in most of these cases the pesticide was no longer being used on a particular food crop or the manufacturer decided not to maintain the ability to use it on a particular food crop; (16) most of the remaining reassessments in the group GAO analyzed resulted in no change; and (17) in reassessing tolerances, EPA has given priority to high-risk chemicals.
For years it has been widely recognized that the federal hiring process all too often does not meet the needs of (1) agencies in achieving their missions; (2) managers in filling positions with the right talent; and (3) applicants for a timely, efficient, transparent, and merit-based process. In short, the federal hiring process is often an impediment to the very customers it is designed to serve in that it makes it difficult for agencies and managers to obtain the right people with the right skills, and applicants can be dissuaded from public service because of the complex and lengthy procedures. Numerous studies over the past decade by OPM, the Merit Systems Protection Board (MSPB), the National Academy of Public Administration, the Partnership for Public Service, the National Commission on the Public Service, and GAO have identified a range of problems and challenges with recruitment and hiring in the federal government, including the following:

- Passive recruitment strategies.
- Poor and insufficient workforce planning.
- Unclear job vacancy announcements.
- Time-consuming and paperwork-intensive manual processes.
- Imprecise candidate assessment tools.
- Ineffective use of existing hiring flexibilities.

These problems put the federal government at a serious competitive disadvantage in acquiring talent. For example, passive recruitment strategies, such as infrequent or no outreach to college campuses, miss opportunities to expose potential employees to information about federal jobs. Unclear and unfriendly vacancy announcements can cause confusion for applicants, delay hiring, and serve as poor recruiting tools. Weak candidate assessment tools can inadequately predict future job performance and result in the hiring of individuals who do not fully possess the appropriate skills for the job.
As evidence of these and other problems, MSPB’s most recently published Merit Principles Survey results found that only 5 percent of federal managers and supervisors said that they faced no significant barriers to hiring employees for their agencies. In recent years, Congress, OPM, and agencies have taken a series of important actions to improve recruiting and hiring in the federal sector. For example, Congress has provided agencies with hiring flexibilities that could help agencies streamline their hiring processes and give agency managers more latitude in selecting among qualified job candidates. Congress has also provided several agencies with exemptions from the pay and classification restrictions of the General Schedule. Other examples of congressional action related to recruitment and hiring follow.

- Dual compensation waivers to rehire federal retirees. OPM may grant waivers allowing agencies to fill positions with rehired federal annuitants without offsetting the salaries by the amount of the annuities. Agencies can request waivers on a case-by-case basis for positions that are extremely difficult to fill or for emergencies or other unusual circumstances. Agencies can also request from OPM a delegation of authority to grant waivers for emergencies or other unusual circumstances.
- Special authority to hire for positions in contracting. Agencies can rehire federal annuitants to fill positions in contracting without being required to offset the salaries. Agencies are required only to notify and submit their hiring plans to OPM.
- Enhanced annual leave computation. Agencies may credit relevant private sector experience when computing annual leave amounts.

As the federal government’s central personnel management agency, OPM has a key role in helping agencies acquire, develop, retain, and manage their human capital. In the areas of recruiting and hiring, OPM has, for example, done the following.
- Sponsored job fairs across the country and produced television commercials to make the public more aware of the work that federal employees do.
- Developed a 45-day hiring model to help agencies identify the steps in their processes that tend to bog them down, and created a detailed checklist to assist agencies in undertaking a full-scale makeover of their hiring process from beginning to end.
- Developed a Hiring Tool Kit on its Web site that is to aid agencies in improving and refining their hiring processes and that includes a tool to assist agency officials in determining the appropriate hiring flexibilities to use given their specific situations.
- Updated and expanded its report Human Resources Flexibilities and Authorities in the Federal Government, which serves as a handbook for agencies in identifying current flexibilities and authorities and how they can be used to address human capital challenges.
- Established standardized vacancy announcement templates for common occupations, such as secretarial, accounting, and accounting technician positions, into which agencies can insert summary information concerning their specific jobs prior to posting for public announcement.

Individual federal agencies have also taken actions to meet their specific recruitment and hiring needs. For example, the National Aeronautics and Space Administration (NASA) has used a combination of techniques to recruit workers with critical skills, including targeted recruitment activities, educational outreach programs, improved compensation and benefits packages, professional development programs, and streamlined hiring authorities. Many of NASA’s external hires have been for entry-level positions through the Cooperative Education Program, which provides NASA centers with the opportunity to develop and train future employees and assess the abilities of potential employees before making them permanent job offers.
The Nuclear Regulatory Commission (NRC) has endeavored to align its human capital planning framework with its strategic goals and has identified the activities needed to achieve a diverse, skilled workforce and an infrastructure that supports the agency’s mission and goals. NRC has used various flexibilities in recruiting and hiring new employees, and it has tracked the frequency and cost associated with the use of some flexibilities. While there was room for further improvement, NRC has been effective in recruiting, developing, and retaining a critically skilled workforce. While these actions are all steps in the right direction, our past work has found that additional efforts are needed in the areas of strategic human capital planning, diversity management, and the use of existing flexibilities. In addressing these areas, agency managers need to be held accountable for maximizing the efficiency and effectiveness of their recruiting efforts and hiring processes. In addition, OPM, working with and through the Chief Human Capital Officers Council, must use its leadership position to vigorously and convincingly encourage continuous improvement in agencies and provide appropriate assistance to support agencies’ recruitment and hiring efforts. In carrying out its important role, OPM will need to ensure that it has the internal capacity to assist and guide agencies’ readiness to implement needed improvements. I will discuss each one of these areas in turn. First and foremost, federal agencies will have to bolster their efforts in strategic human capital planning to ensure that they are prepared to meet their current and emerging hiring needs. To build effective recruiting and hiring programs, agencies must determine the critical skills and competencies necessary to achieve programmatic goals and develop strategies that are tailored to address any identified gaps. 
For example, an agency’s strategic human capital plan should address the demographic trends that the agency faces with its workforce, especially pending retirements. We have found that leading organizations go beyond a succession planning approach that focuses on simply replacing individuals; instead, agencies should consider their future mission requirements and the knowledge, skills, and abilities needed to meet those requirements. Recruiting and hiring for the acquisition workforce is a prime example of the government’s strategic human capital planning challenges. Acquisition of products and services from contractors consumes about a quarter of discretionary spending governmentwide and is a key function in many federal agencies. We have reported that many acquisition professionals need to acquire a new set of skills focusing on business management because of a more sophisticated business environment. At a GAO-sponsored forum in July 2006, acquisition experts reported that agency leaders had not recognized or elevated the importance of the acquisition profession within their organizations, and a strategic approach had not been taken across government or within agencies to focus on workforce challenges, such as creating a positive image essential to successfully recruiting and retaining a new generation of talented acquisition professionals. Developing and maintaining workforces that reflect all segments of society and our nation’s diversity is another significant aspect of agencies’ recruitment challenges. As we have previously reported, recruitment is a key first step toward establishing a diverse workforce.
To ensure that they are reaching out to diverse pools of talent, agencies must consider active recruitment strategies, such as the following:

- Widening the selection of schools from which they recruit to include, for example, historically Black colleges and universities, Hispanic-serving institutions, women’s colleges, and schools with international programs.
- Building formal relationships with targeted schools and colleges to ensure the cultivation of talent for future applicant pools.
- Partnering with multicultural professional organizations and speaking at their conferences to communicate their commitment to diversity to external audiences and strengthen and maintain relationships.

For these types of recruitment strategies, agencies can calculate the cost of recruiting channels and cross-reference those costs with the volume and quality of candidates yielded in order to reallocate funds to the most effective recruiting channels. Several agencies have taken steps toward developing and implementing active recruitment strategies that take into account a diverse pool of job candidates. For example, NASA developed a strategy for recruiting Hispanics that focuses on increasing educational attainment, beginning in kindergarten and continuing into college and graduate school, with the goal of attracting students into the NASA workforce and aerospace community. NASA said it must compete with the private sector for the pool of Hispanics qualified for aerospace engineering positions, a pool that is often drawn to more lucrative private sector opportunities in more desirable locations. NASA centers sponsored, and its employees participated in, mentoring, tutoring, and other programs to encourage Hispanic and other students to pursue careers in science, engineering, technology, and mathematics.
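The recruiting channel cost comparison described above is straightforward arithmetic; a minimal sketch in Python, with channel names and dollar figures that are purely hypothetical rather than drawn from the testimony:

```python
# Hypothetical recruiting channels: name -> (annual cost, qualified candidates yielded)
channels = {
    "campus visits": (40_000, 100),
    "job fairs": (25_000, 20),
    "professional conferences": (15_000, 30),
}

# Rank channels by cost per qualified candidate, cheapest first,
# to guide reallocating funds toward the most effective channels.
ranked = sorted(channels.items(), key=lambda kv: kv[1][0] / kv[1][1])
for name, (cost, candidates) in ranked:
    print(f"{name}: ${cost / candidates:,.0f} per qualified candidate")
```

A fuller analysis would also weight candidate quality, as the testimony suggests, but the ranking logic would be the same.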
An official with the National Institute of Standards and Technology (NIST) said that when NIST hosted recruitment or other programs, it made use of relationships the agency had with colleges, universities, and other groups to inform students about internship or employment opportunities. One group that helped to arrange such recruitment efforts was the National Organization of Black Chemists and Black Chemical Engineers. The NIST official said that NIST had been active in the professional organization’s leadership for years and that NIST employees had served on its executive board. Another NIST official said that the professional organization had helped with NIST’s efforts to recruit summer interns. The Federal Aviation Administration (FAA) developed internship opportunities designed to recruit a diverse group of future candidates for the agency. Its Minority-Serving Institutions Internship Program was designed to provide professional knowledge and experience at FAA or firms in the private sector for minority students and students with disabilities who are enrolled in a college or university, major in relevant fields and related disciplines, and have a minimum of a 3.0 grade point average. Students in the internship program could earn academic credit for their participation during the fall or spring semesters or over the summer. Additionally, the appropriate use of human capital flexibilities is crucial to making further improvements in agencies’ efforts to recruit, hire, and manage their workforces. Federal agencies often have varied statutory authorities related to workforce management. These authorities provide agencies with flexibility in helping them manage their human capital strategically to achieve results. 
In previous reports and testimonies, we have emphasized that in addressing their human capital challenges, federal agencies should first identify and use the flexibilities already available under existing laws and regulations and then seek additional flexibilities only when necessary and based on sound business cases. Our work has found that the insufficient and ineffective use of these existing flexibilities can significantly hinder the ability of federal agencies to recruit, hire, retain, and manage their human capital. The ineffective use of available hiring flexibilities represents a lost opportunity for agencies to effectively manage human capital. In 2002, Congress provided agencies with two new hiring flexibilities. One of these hiring flexibilities, known as category rating, permits an agency to select best-qualified job candidates for a position rather than being limited to the three top-ranked job candidates. The other hiring flexibility, often referred to as direct hire, allows an agency to appoint people to positions without adherence to certain competitive examination requirements when there is a severe shortage of qualified candidates or a critical hiring need. However, we have found that agencies were making limited use of these available flexibilities. Agency officials from across the federal government had often cited both of these hiring flexibilities as tools needed to improve the federal hiring process. Agencies need to reexamine the flexibilities provided to them under current authorities and identify those that could be used more extensively or more effectively to meet their workforce needs. Our prior work has identified several human capital flexibilities that agency officials and union representatives frequently cited as most effective for managing their workforces.
These flexibilities encompass broad areas of personnel-related actions that could be especially beneficial for agencies’ recruiting and hiring efforts. They include monetary recruitment and retention incentives; special hiring authorities, such as student employment programs; and work-life programs, such as alternative work schedules, child care assistance, and transit subsidies. As part of its key leadership role, OPM has taken significant steps in fostering and guiding improvements in recruiting and hiring in the executive branch. Still, OPM must continue to assist—and as appropriate, require—the building of the infrastructures within agencies needed to successfully implement and sustain human capital reforms to strengthen recruitment and hiring. OPM can do this in part by encouraging continuous improvement and providing appropriate assistance to support agencies’ recruitment and hiring efforts. Innovative and best practices of model agencies need to be made available to other agencies in order to facilitate the transformation of agency hiring practices from compliance based to agency mission based. OPM, working with and through the Chief Human Capital Officers Council, has made progress in compiling information on effective and innovative practices and distributing this information to help agencies in determining when, where, and how the various flexibilities are being used and should be used. OPM must continue to work to ensure that agencies take action on this information. Moreover, in leading governmentwide human capital reform, OPM has faced challenges in its internal capacity to assist and guide agencies’ readiness to implement change. In October 2007, we issued a report on the extent to which OPM has (1) addressed key internal human capital management issues identified through employee survey responses and (2) put in place strategies to ensure that it has the mission-critical talent it needs to meet current and future strategic goals.
We found that OPM has taken positive actions to address specific concerns raised by its employees and managers in the employee surveys. We also found that OPM has strategies in place, such as workforce and succession management plans, that are aligned with selected leading practices relevant to the agency’s capacity to fulfill its strategic goals. However, OPM lacks a well-documented agencywide evaluation process of some of its workforce planning efforts. In a relatively short time, there will also be a presidential transition, and well-documented processes can help to ensure a seamless transition that builds on the current momentum. Equally important is OPM’s leadership in federal workforce diversity and oversight of merit system principles. In our review of how OPM and the Equal Employment Opportunity Commission (EEOC) carry out their mutually shared responsibilities for helping to ensure a fair, inclusive, and nondiscriminatory federal workplace, we found limited coordination between the two agencies in policy and oversight matters. The lack of a strategic partnership between the two agencies and an insufficient understanding of their mutual roles, authority, and responsibilities can result in a lost opportunity to realize consistency, efficiency, and public value in federal equal employment opportunity and workplace diversity human capital management practices. We recommended that OPM and EEOC regularly coordinate in carrying out their responsibilities under the equal employment opportunity policy framework and seek opportunities for streamlining like reporting requirements. Both agencies acknowledged that their collaborative efforts could be strengthened but took exception to the recommendation to streamline requirements. We continue to believe in the value of more collaboration.
Finally, OPM and agency leaders need to be held accountable and should hold others accountable for the ongoing monitoring and refinement of human capital approaches to recruit and hire a capable and committed federal workforce. Leadership is critical for agencies to overcome their natural resistance to change, to marshal the resources needed in many cases to improve management, to build and maintain organizationwide commitment to improving their ways of doing business, and to create the conditions for effectively improving human capital approaches. Some agency officials have told us that OPM rules and regulations are rigid, yet agency officials are also often hesitant to implement new approaches without specific guidance. It will be important for agencies and OPM to define their appropriate roles and day-to-day working relationships as they collaborate on developing and implementing innovative and more effective recruitment efforts and hiring processes. In conclusion, OPM and agencies have made progress in addressing the impediments to effective recruitment and hiring since we first designated strategic human capital management as a high-risk area in 2001. Still, as I have discussed today, more can be done. Faced with a workforce that is becoming more retirement eligible and finding gaps in talent because of changes in the knowledge, skills, and competencies in occupations needed to meet their missions, agencies must strengthen their recruiting and hiring efforts. Moreover, human capital expertise within the agencies must be up to the challenge for this transformation to be successful and enduring. With an ongoing commitment to continuous improvement and strong leadership in Congress, OPM, and the agencies, the federal government can indeed be an employer of choice. Mr. Chairman and members of the subcommittee, this completes my prepared statement. I would be pleased to respond to any questions you may have at this time. 
For further information regarding this statement, please contact Robert N. Goldenkoff, Director, Strategic Issues, at (202) 512-6806 or [email protected]. Individuals making key contributions to this testimony include K. Scott Derrick, Assistant Director; Steven Berke; Janice Latimer; Sabrina Streagle; and Jessica Thomsen. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
To address the challenges that the nation faces, it will be important for federal agencies to change their cultures and create the institutional capacity to become high-performing organizations. This includes recruiting and retaining a federal workforce able to create, sustain, and thrive in organizations that are flatter, results-oriented, and externally focused. In 2001, GAO identified strategic human capital management as a governmentwide high-risk area because federal agencies lacked a strategic approach to human capital management that integrated human capital efforts with their missions and program goals. Although progress has been made since that time, strategic human capital management still remains a high-risk area. This testimony, based on a large body of completed work issued from January 2001 through April 2008, focuses on (1) challenges that federal agencies have faced in recruiting and hiring talented employees, (2) progress in addressing these challenges, and (3) additional actions that are needed to strengthen recruiting and hiring efforts. In its prior reports, GAO has made a range of recommendations to the Office of Personnel Management (OPM)--the government's personnel agency--and to agencies in such areas as hiring, workforce planning, and diversity management; a number of these recommendations have since been implemented. GAO is making no new recommendations at this time. Numerous studies over the years have identified a range of problems and challenges with recruitment and hiring in the federal government. Some of these problems and challenges include passive recruitment strategies, unclear job vacancy announcements, and manual processes that are time consuming and paperwork intensive. In recent years, Congress, OPM, and agencies have made important strides in improving federal recruitment and hiring. For example, Congress has provided agencies with hiring flexibilities that could help to streamline the hiring process. 
OPM has sponsored job fairs and developed automated tools. Individual agencies have developed targeted recruitment strategies to identify and help build a talented workforce. Building on the progress that has been made, additional efforts are needed in the following areas: (1) Human capital planning: federal agencies will have to bolster their efforts in strategic human capital planning to ensure that they are prepared to meet their current and emerging hiring needs. Agencies must determine the critical skills and competencies necessary to achieve programmatic goals and develop strategies that are tailored to address any identified gaps. (2) Diversity management: developing and maintaining workforces that reflect all segments of society and our nation's diversity is another significant aspect of agencies' recruitment challenges. Recruitment is a key first step toward establishing a diverse workforce. Agencies must consider active recruitment strategies, such as building formal relationships with targeted schools and colleges, and partnering with multicultural professional organizations. (3) Use of existing flexibilities: agencies need to reexamine the flexibilities provided to them under current authorities, including monetary recruitment and retention incentives, special hiring authorities, and work-life programs. Agencies can then identify those existing flexibilities that could be used more extensively or more effectively to meet their workforce needs. (4) OPM leadership: OPM has taken significant steps in fostering and guiding improvements in recruiting and hiring in the executive branch. For example, OPM, working with and through the Chief Human Capital Officers Council, has moved forward in compiling information on effective and innovative practices and sharing this information with agencies. Still, OPM must continue to work to ensure that agencies take action on this information. 
Also, OPM needs to make certain that it has the internal capacity to guide agencies' readiness to implement change and achieve desired outcomes. OPM and agencies should be held accountable for the ongoing monitoring and refinement of human capital approaches to recruit and hire a capable and committed federal workforce. With continued commitment and strong leadership, the federal government can indeed be an employer of choice.
The Defense Logistics Agency (DLA) is the primary manager of consumable supplies, including hardware items, used by the military services. Hardware items encompass a large part of DLA’s overall operations. As shown in table 1, DLA manages about 4 million items, of which 3.9 million, or 97 percent, are classified as hardware items. As of September 30, 1996, DLA’s hardware inventory, valued at $5.7 billion, accounted for 74 percent of DLA’s total consumable inventory. Traditionally, DLA buys hardware items in large quantities, stores them in distribution depots until they are requested by the services, and then ships them to the appropriate service facility. For example, the services operate over 20 repair depots where large amounts of these items are used for regularly scheduled maintenance of equipment and weapon systems. To store and distribute hardware items, DLA uses storage structures at 24 distribution depots, which are DOD facilities with several large warehouses, as well as 50 or more additional storage sites. In fiscal year 1996, DLA filled about 12 million requests for hardware items. DLA’s fiscal year 1996 material management costs for hardware items were reported at about $3.6 billion. Of that amount, about $2.6 billion was spent to purchase hardware items from commercial suppliers and $1 billion was spent to manage and distribute inventory. To recover its operating costs, DLA charges the military services the cost of the item plus a surcharge, which covers supply center and distribution expenses, inflation, and material-related expenses such as inventory losses. In fiscal year 1996, the surcharge averaged about 39 percent for hardware items. In contrast, DLA has lowered the surcharge for medical supplies from 21.7 percent to 7.9 percent using best management practices from the private sector. DOD continues to use outdated and inefficient business practices to manage its hardware inventory.
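The surcharge pricing described above is simple percentage arithmetic; a minimal sketch, using a hypothetical $100 item and the average fiscal year 1996 rates cited in the text, and assuming the surcharge is applied as a flat percentage of item cost:

```python
def billed_price(item_cost, surcharge_rate):
    """Amount DLA charges a military service: the item's cost plus a
    surcharge covering distribution and material-related expenses."""
    return item_cost * (1 + surcharge_rate)

# Hypothetical $100 item at the average rates from the text
hardware = billed_price(100.00, 0.39)   # 39 percent hardware surcharge
medical = billed_price(100.00, 0.079)   # 7.9 percent medical surcharge
print(f"hardware: ${hardware:.2f}, medical: ${medical:.2f}")
```

The gap between the two rates illustrates why the report points to the medical prime vendor program as evidence that best practices can lower the cost passed on to the services.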
For example, DOD buys inventory years in advance of when the items are actually used. Based on our analyses of DLA records, 62 percent of DLA’s hardware items did not have a demand from September 1995 to August 1996 (see fig. 1). We found an additional 21 percent of DLA’s hardware items had enough inventory on hand to last for more than 2 years based on demands during the same period. These items accounted for about $4.4 billion, or 77 percent, of DLA’s $5.7 billion hardware inventory. DLA inventory is stored at multiple locations nationwide to support all DOD customers. As of September 30, 1996, DLA reported it was storing $5.7 billion worth of hardware items in distribution depots and warehouses. Based on inventory levels and past demands for items, we estimate that this inventory could satisfy DOD’s requirements, on average, for the next 2 years. As shown in figure 2, a base-level logistics system can also hold millions of dollars of hardware inventory. When DLA-owned and service-owned inventories are combined, the total inventory levels could meet current DOD requirements, in some cases, for many years. Despite this large investment in inventory, DOD’s supply system frequently does not meet the needs of its customers. As of September 1996, DLA reported it had over 574,000 customer orders, valued at $843 million, that it could not fill because it did not have the right stock on hand. Customers had been waiting for parts for an average of over 3 months. Also, the base-level supply system frequently could not fill orders placed by mechanics and other customers. For example, according to Army records, the base warehouse at one Army depot did not fully meet customer orders 76 percent of the time during fiscal year 1996. At four other locations we examined, base-level systems did not meet customer needs between 30 and 72 percent of the time.
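The analysis above effectively buckets each item by its years of supply on hand relative to observed demand; a minimal sketch of that classification, using hypothetical item records rather than actual DLA data:

```python
def classify_item(on_hand, past_year_demand):
    """Bucket an inventory item by how long its on-hand stock would last
    at the demand rate observed over the past year."""
    if past_year_demand == 0:
        return "no demand"
    if on_hand / past_year_demand > 2:
        return "more than 2 years of supply"
    return "2 years of supply or less"

# Hypothetical items: (units on hand, units demanded over the past year)
for on_hand, demand in [(500, 0), (300, 100), (50, 60)]:
    print(classify_item(on_hand, demand))
```

In the report's figures, the first two buckets together account for 77 percent of the $5.7 billion hardware inventory.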
When hardware items are not immediately available to mechanics, the repair of weapon systems and their components is delayed, which increases repair times. For example, the Navy calculates that the lack of parts increases the repair time for aviation parts by as much as 74 percent. As of January 1997, the Navy reported it had stopped repairing over 12,000 aircraft components, valued at $516 million, because parts were not available to complete repairs. The partially repaired items were packaged and moved to a warehouse next to the repair facility. At the time of our review, these items had been in storage for an average of 230 days. Also, according to Air Force records, at one Air Force depot location, mechanics stopped repairs on 2,748 items, valued at $193 million, because necessary parts were not available. DOD recognizes that it cannot continue to use outdated and inefficient business practices and that, under the pressure of budgetary constraints, it must seek ways to make its logistics processes as efficient as possible. As a result, the Office of the Secretary of Defense has encouraged DLA and the military services to use alternatives to DOD’s traditional logistics systems, such as innovative logistics concepts used by commercial firms to improve operations. Some of the alternatives are new concepts that private sector companies have successfully used during the past decade to improve their management of consumable items. These items were targeted because they are generally standard items with a low unit cost, are commonly stocked by several suppliers, and are used in large quantities. In general, these concepts provide inventory users with a capability to order supplies as they are needed and then receive those items within hours after an order is placed.
Ordering supplies only as they are needed, combined with quick logistics response times, enables companies to reduce or eliminate the possibility of inventory spoilage or obsolescence and reduce overall supply system costs. In prior reports, we highlighted three concepts, or best practices, that reflect this new business philosophy in the management of consumable items (see table 2). Each of these practices has resulted in significant savings for the companies that used them and improved their inventory management systems. We recommended that DOD test these concepts and expand them, where feasible, to more defense facilities. Of the three concepts—prime vendor, supplier park, and integrated supplier—we believe the integrated supplier offers DOD the greatest opportunity for streamlining its logistics operations, reducing costs, and improving customer service. The companies that have adopted these best practices have significantly reduced their logistics costs. For example, as we reported in December 1991, Vanderbilt University Medical Center reduced inventory levels by $1.7 million (38 percent) through the use of a prime vendor program. In 1993, we reported that PPG Industries eliminated $4.5 million (80 percent) in maintenance and repair supplies and saved about $600,000 in annual operating costs by locating 10 suppliers’ activities at a supplier park about 600 yards from the PPG facility. In 1996, we found that a leading distributor of aircraft supplies reported its integrated supplier program reduced one customer’s inventory by $7.4 million (84 percent) while filling 98 percent of the customer’s orders within 24 hours. DOD has demonstrated that best practices can be applied to DOD operations. Starting in 1993, DOD successfully applied the prime vendor concept to its management of medical supplies. The prime vendor, which delivers items to DOD hospitals when ordered, has enabled DOD to reduce the need to store and distribute medical supplies.
As the prime vendor concept was established nationwide, inventory levels began to decline, and warehouses once filled with medical items were emptied. DOD’s prime vendor program for medical supplies, along with other inventory reduction efforts, has resulted in savings that we estimate exceed $700 million. To its credit, DLA has tried new inventory practices for managing hardware items. However, despite DOD’s success with its prime vendor program for medical supplies, its efforts for hardware items are limited in scope and represent only a small part of DLA’s logistics operations. To achieve the greater inventory reductions, infrastructure savings, and improved customer service that we have seen in the private sector, we believe DOD needs to expand its use of private sector inventory practices, such as prime vendors and integrated suppliers, and use the full range of services offered under these programs. Since 1992, one of DLA’s main initiatives has been a direct vendor delivery program. Under this program, DLA uses long-term contracts and electronic data systems to enable certain suppliers to deliver items directly to military installations instead of delivering the items to DLA storage sites. In fiscal year 1996, DLA reported that 17 percent of hardware sales were filled using the direct vendor delivery program. As shown in figure 3, this percentage has not varied much since 1992. While DLA’s use of direct vendor delivery has remained fairly stable since 1992, so have DLA’s hardware inventory levels (see fig. 4). While the direct delivery program eliminates the need to store and distribute inventory from DLA warehouses, lowering the cost to DOD customers, it has not provided a quick response to customer orders because the traditional DOD ordering process has not changed. With this program, requisitions are still sent from the services to DLA, where the orders are then relayed to a supplier.
Upon receipt of an order, the supplier ships the items to the appropriate military installation. According to DLA records, in 1996 it took an average of 54 days for customers to receive items ordered through the direct delivery program, more than twice the 25-day delivery average for items stocked in DLA warehouses. Both of these delivery times are significantly longer than the times prime vendors or integrated suppliers have achieved—within 24 hours of receiving an order (see fig. 5). In fiscal year 1997, DOD began using a prime vendor concept, called the Virtual Prime Vendor program, for hardware supplies on a limited basis. One of the two testing areas was supply support of depot repair operations. In February 1997, DOD began using a prime vendor program to support the C-130 propeller repair shop at the Warner-Robins Air Logistics Center (ALC). DLA established this program to determine the feasibility of using prime vendors for hardware items instead of the traditional military supply system and to improve service, reduce inventories, and lower costs. Because the program was only recently initiated, DOD had not yet evaluated the program’s results at the time of our review. By the second quarter of fiscal year 1998, the Air Force plans to expand the prime vendor program at Warner-Robins ALC and begin programs at two other Air Force repair depots. The Navy plans to test the concept at one depot location (see table 3). The Army has not yet developed a program to test the prime vendor concept at a repair depot or at any operating base repair activities. We estimate that DOD’s programs, when implemented, will apply to about 2 percent of DLA’s $3.1 billion annual sales of hardware items. Also in February 1997, DLA began using the prime vendor concept for facilities maintenance supplies such as plumbing, electrical, and lumber items.
Under this concept, a prime vendor serves a geographic region where all military facilities within the region can elect to order maintenance supplies from the vendor. As of July 1997, 9 of 73 military facilities in the first region had elected to use the prime vendor program. By June 1999, DLA plans to have a prime vendor under contract for 10 geographic regions, covering the United States and overseas locations. As of July 1997, facilities in only 4 of the 10 regions had committed to use the program. In June 1997, the Under Secretary of Defense (Comptroller)/Chief Financial Officer endorsed this concept and asked the Director of DLA, in conjunction with the military services, to develop a regional implementation plan for the DLA prime vendor program for facilities maintenance supplies. He asked that the plan identify the critical events and site designations for regional implementation within 12 months and provide for nationwide availability by the middle of fiscal year 1999. We believe this plan is critical to the program’s success because it demonstrates top-management support, and it will further encourage military units to use the prime vendor services once they are established. DOD’s prime vendor programs for hardware items, which are similar to the best practices we observed in the private sector, can be expanded to achieve greater savings while improving service. For example, neither DLA’s direct delivery nor prime vendor programs streamline the services’ base-level logistics systems to the extent we have seen in the private sector. DOD personnel still order, receive, store, and distribute material to the end users. If DOD transferred these functions to a prime vendor or to an integrated supplier, it could achieve substantial reductions in resource requirements and improve service to its customers. This action would also allow items to be bought when they are actually needed, thereby minimizing the potential for inventory obsolescence.
As figure 6 shows, the DLA wholesale system, and at least two of three primary storage points in the base-level supply system, could be bypassed by applying the integrated supplier concept because the integrated supplier would deliver inventory directly to maintenance shops or end-user locations. The integrated supplier could also monitor storage bins, order parts, and restock bins once parts are delivered. In the private sector, having the supplier deliver inventory directly to these locations improves the availability of inventory and actively involves the supplier as a “partner” in the customer’s operations. The supplier also becomes involved in testing parts for quality, monitoring part usage, and ordering supplies when needed. According to DOD officials, there are no major impediments to adopting best practices such as the prime vendor, supplier park, and integrated supplier concepts. However, DOD’s success in expanding these concepts to encompass a larger part of its operations will depend on its ability to address two key factors. Specifically, (1) DOD may need to prepare a cost comparison between the government and commercial providers in accordance with Office of Management and Budget (OMB) Circular A-76 and (2) military customers will have to overcome their reluctance to try new business practices. According to Air Force officials, a prime vendor program that would replace the base-level supply system and would involve more than 10 government personnel may not be contracted out without a cost comparison in accordance with OMB Circular A-76. According to the Air Force, the Warner-Robins ALC has about 219 government personnel involved in supply operations. Air Force officials stated that if these positions were eliminated through the prime vendor program, a cost comparison would first be required, which may take 2 years to complete. We agree that a cost comparison could be a significant issue in implementing these programs.
Our work has consistently shown, however, that this process is cost-effective because competition generates savings—usually through a reduction in personnel—whether the competition is won by the government or the private sector. Another factor is that military service customers have been reluctant to try the new business practices. DOD has traditionally relied on its own internal logistics system to support its logistics needs—a philosophy that private companies have moved away from to lower the cost of doing business, provide better service, and remain competitive. According to DLA, it has been a challenge to get the services to agree to use the prime vendor programs. For example, DLA has laid out an implementation schedule for its facilities maintenance prime vendor program, but, to date, the services have committed to use this program for less than 20 percent of the demands for these items. In another example, the Army has yet to establish a test program to determine the feasibility of using prime vendors or integrated suppliers at its repair facilities. Without the commitment of the services to these programs, DOD’s success in improving its operations will be limited. The “corporate culture” within DOD has traditionally been resistant to change. Organizations often find changes in operations threatening and are unwilling to change current behavior until proposed ideas have been proven. In June 1994, we convened a symposium on reengineering that brought together executives from five Fortune 500 companies that have been successful in reengineering activities. Panel members at the symposium expressed the view that committed and engaged top managers must support and lead reengineering efforts to ensure success because top management has the authority to encourage employees to accept reengineered roles.
Also, top management has the responsibility to set the corporate agenda and define the organization’s culture and the ability to remove barriers that block changes to the corporate mindset. There is high potential for DOD to greatly expand the use of the private sector best practices we have recommended to improve logistics operations and lower costs. DOD has adopted the prime vendor concept to improve the management of medical inventories, demonstrating that such private sector practices can be applied to DOD operations. However, DOD has adopted a prime vendor program for hardware items only in a limited way, and the other changes that have been introduced have not resulted in significant improvements. In addition, the services have been slow to adopt these initiatives into their operations. For example, the Army has yet to establish a plan to test the prime vendor concept at repair depots, and the Navy plans to begin testing this concept only in fiscal year 1998. To ensure the military services pursue best practices to the maximum extent possible, DOD’s top management needs to continue its commitment to changing its inventory management culture and further motivate the services to use these practices. To encourage DLA and the services to more aggressively apply best practices to their operations, we recommend that the Secretary of Defense:
Identify a “Champion of Change” within the Office of the Secretary of Defense that would be responsible for coordinating and overseeing improvement initiatives throughout DOD’s operations and ensuring the prime vendor and integrated supplier concepts (1) encompass a broader part of DOD’s operations, (2) fully use the services offered in the private sector, and (3) are used by all military services whenever it is cost-effective to do so.
Direct (1) the Secretary of the Army to identify at least one repair depot location that will join the other services in testing the prime vendor concept and (2) the secretaries of the military services to identify repair activities at operating bases as test sites.
Direct the Director of DLA and the secretaries of each military service to establish a test of the integrated supplier concept at one or more repair depots.
DLA and the military services should (1) establish aggressive milestones for testing and implementing the prime vendor and integrated supplier programs so as not to delay implementing such programs if the tests find them to be feasible and (2) develop the means to expeditiously measure the total costs and benefits under the prime vendor and integrated supplier programs to compare them to the total costs and benefits incurred under the traditional system.
In commenting on a draft of this report, DOD generally concurred with the findings and recommendations. DOD stated that the Office of the Deputy Under Secretary of Defense (Logistics) is responsible for coordinating and overseeing material management improvement initiatives throughout DOD and will be responsible for ensuring that private sector practices are used by the military services to the maximum extent possible where doing so meets readiness requirements and is cost-effective. According to DOD, it will direct the Army to identify a repair depot that will test the prime vendor concept. It will also direct DLA and the military services to identify one or more repair depots to test the integrated supplier concept. DOD also agreed to identify repair activities at operating bases that would test the prime vendor concept, and DOD expects to have test sites designated by June 30, 1998.
We plan to closely monitor DOD’s progress in establishing aggressive milestones for testing and implementing these concepts and in developing the means for measuring the total costs and benefits incurred from these tests. DOD’s comments are included in appendix I. We reviewed documents and interviewed officials on DOD’s logistics policies, practices, and efforts to improve its operations. We contacted officials at the Office of the Deputy Under Secretary of Defense for Logistics, Washington, D.C.; DLA Headquarters, Fort Belvoir, Virginia; Air Force Materiel Command, Wright-Patterson Air Force Base, Ohio; Naval Supply Systems Command, Mechanicsburg, Pennsylvania; Naval Air Systems Command, Arlington, Virginia; and the Army Industrial Operations Command, Rock Island, Illinois. We also discussed with these officials the potential applications of private sector logistics practices to DOD’s operations and any impediments to using these practices. To determine the nature and extent of DOD’s progress in adopting best practices, we visited the following organizations: Defense Supply Center Richmond, Richmond, Virginia; Defense Supply Center Columbus, Columbus, Ohio; Defense Industrial Supply Center, Philadelphia, Pennsylvania; and Warner-Robins ALC, Robins Air Force Base, Georgia. These locations are involved in initiatives that are intended to improve DOD’s logistics operations. At these locations, we discussed (1) inventory management practices that DOD is using for hardware items; (2) best practices, programs, and tests underway or planned to improve DOD operations; and (3) DOD officials’ positions on the use of best practices as alternatives to traditional DOD inventory practices. At Warner-Robins ALC, the pilot location for several of DOD’s initiatives, we discussed with supply and maintenance personnel the results of the initiatives and their impacts on supply operations.
Also during our review, we obtained and analyzed detailed information on inventory levels and usage, supply effectiveness and response times, operating costs, and other related logistics performance measures. Except where noted, our data reflected inventory valued by DOD using its standard inventory valuation method—inventory valued at latest acquisition costs and inventory classified as excess valued at salvage prices (3.2 percent of its latest acquisition costs). We did not test or otherwise validate DOD’s inventory data. To identify leading business practices, we used information from our series of 10 reports that have been issued since 1991. This information included the results of an extensive literature search of leading inventory management concepts and detailed examinations and discussions of logistics practices used by companies such as PPG Industries, Bethlehem Steel, British Airways, United Airlines, and Tri-Star Aerospace. We also participated in roundtables, symposiums, and conferences with recognized leaders in the logistics field to obtain information on how companies are applying integrated approaches to their logistics operations and establishing supplier partnerships to eliminate unnecessary functions and reduce costs. We did not independently verify the accuracy of logistics costs and performance measures provided by the private sector organizations. We conducted our review from January 1997 to October 1997 in accordance with generally accepted government auditing standards. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Defense, the Army, the Air Force, and the Navy; the Directors of DLA and OMB; and other interested parties. We will make copies available to others upon request. Please contact me on (202) 512-8412 if you or your staff have any questions concerning this report. The major contributors to this report are listed in appendix II. 
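The standard inventory valuation method noted above reduces to a simple calculation: serviceable items are valued at latest acquisition cost, while items classified as excess are valued at salvage prices, 3.2 percent of that cost. The sketch below illustrates the arithmetic; the record structure, quantities, and costs are hypothetical, not DOD figures.

```python
# Sketch of the standard inventory valuation method described above:
# items valued at latest acquisition cost, and items classified as
# excess valued at salvage price (3.2 percent of that cost).
# Record fields, quantities, and costs are hypothetical.

SALVAGE_RATE = 0.032  # excess items valued at 3.2% of latest acquisition cost

def value_inventory(records):
    """Return the total valuation of a list of inventory records."""
    total = 0.0
    for rec in records:
        unit_value = rec["latest_acquisition_cost"]
        if rec["excess"]:
            unit_value *= SALVAGE_RATE  # apply salvage pricing to excess stock
        total += unit_value * rec["quantity"]
    return total

records = [
    {"quantity": 100, "latest_acquisition_cost": 50.0, "excess": False},
    {"quantity": 1000, "latest_acquisition_cost": 50.0, "excess": True},
]

# 100 * $50 = $5,000 serviceable; 1,000 * ($50 * 0.032) = $1,600 excess.
print(round(value_inventory(records), 2))  # prints: 6600.0
```

Note how steeply the salvage rate discounts excess stock: the 1,000 excess units carry only about a quarter of the value of the 100 serviceable ones.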
The following is a GAO comment on the Department of Defense’s (DOD) letter dated December 8, 1997. 1. In commenting on a draft of this report, DOD stated that the reported value of hardware inventories includes inventory transferred from the military departments as part of DOD’s consumable item transfer program. According to DOD, when those transferred items are excluded, the Defense Logistics Agency’s (DLA) inventory of consumable items decreased 36 percent between fiscal years 1992 and 1996. We qualified our report to address DOD’s concerns. However, since these items are now a part of DLA’s total hardware inventories, we believe aggressive steps are needed to reduce such inventories, which are currently large enough to meet DOD’s requirements for the next 2 years. By expanding the use of best practices, DLA could further reduce its hardware inventories and lower its operating costs.
Charles I. (Bud) Patton, Jr.
Kenneth R. Knouse, Jr.
Inventory Management: Greater Use of Best Practices Could Reduce DOD’s Logistics Costs (GAO/T-NSIAD-97-214, July 24, 1997).
Inventory Management: The Army Could Reduce Logistics Costs for Aviation Parts by Adopting Best Practices (GAO/NSIAD-97-82, Apr. 15, 1997).
Defense Inventory Management: Problems, Progress, and Additional Actions Needed (GAO/T-NSIAD-97-109, Mar. 20, 1997).
Inventory Management: Adopting Best Practices Could Enhance Navy Efforts to Achieve Efficiencies and Savings (GAO/NSIAD-96-156, July 12, 1996).
Best Management Practices: Reengineering the Air Force’s Logistics System Can Yield Substantial Savings (GAO/NSIAD-96-5, Feb. 21, 1996).
Inventory Management: DOD Can Build on Progress in Using Best Practices to Achieve Substantial Savings (GAO/NSIAD-95-142, Aug. 4, 1995).
Commercial Practices: DOD Could Reduce Electronics Inventories by Using Private Sector Techniques (GAO/NSIAD-94-110, June 29, 1994).
Commercial Practices: Leading-Edge Practices Can Help DOD Better Manage Clothing and Textile Stocks (GAO/NSIAD-94-64, Apr. 13, 1994).
Commercial Practices: DOD Could Save Millions by Reducing Maintenance and Repair Inventories (GAO/NSIAD-93-155, June 7, 1993).
DOD Food Inventory: Using Private Sector Practices Can Reduce Costs and Eliminate Problems (GAO/NSIAD-93-110, June 4, 1993).
DOD Medical Inventory: Reductions Can Be Made Through the Use of Commercial Practices (GAO/NSIAD-92-58, Dec. 5, 1991).
Commercial Practices: Opportunities Exist to Reduce Aircraft Engine Support Costs (GAO/NSIAD-91-240, June 28, 1991).
Pursuant to a congressional request, GAO reviewed the Department of Defense's (DOD) progress in adopting inventory management practices for hardware items, focusing on: (1) DOD and private-sector practices for managing hardware items; (2) whether DOD has adopted best practices for these items; and (3) opportunities that DOD can take advantage of to improve its management of hardware items. GAO noted that: (1) while DOD has implemented some innovative management practices, more opportunities exist to better manage its reported $5.7-billion hardware inventory and achieve substantial savings; (2) DOD continues to manage its hardware inventory using outdated and inefficient business practices that create unnecessary inventory levels, provide poor customer service, generate excess and obsolete inventory, and cost approximately $1 billion per year to manage and distribute; (3) DOD buys hardware inventory years in advance of when the items are actually used; (4) for example, based on GAO's analysis of DOD records, 62 percent of DOD's hardware items did not have a demand from September 1995 to August 1996, and an additional 21 percent of the items had enough inventory to last for more than 2 years; (5) these items account for about $4.4 billion, or 77 percent, of DOD's $5.7-billion hardware inventory; (6) despite DOD's substantial investment in inventory, in many cases, hardware inventory is not available when needed by DOD customers; (7) when this happens, the repair of weapon systems and components is often delayed; (8) the Navy has estimated that the lack of parts increases the repair time for aviation parts by as much as 74 percent; (9) DOD's overall progress in adopting best management practices for hardware items has been limited; (10) in February 1997, DOD began testing, on a limited basis, the prime vendor concept for hardware items--one of the concepts GAO recommended; (11) these tests will potentially affect about 2 percent of DOD's $3.1-billion annual sales of 
these items; (12) these tests do not, however, fully optimize the services available in the private sector, such as ordering, storing, and distributing supplies to the customer; (13) the business practices GAO recommended in its past reports have, for the most part, been used in the private sector to provide customers with a capability to order supplies as they are needed; (14) ordering supplies as they are needed, combined with quick logistics response times, reduces overall supply system costs, eliminates large inventories, and enables companies to reduce or eliminate the possibility of ordering supplies that may not be needed or become obsolete; and (15) to achieve similar inventory reductions, infrastructure savings, and improved customer service, DOD could expand its prime vendor programs to include tasks such as ordering, storing, and distributing supplies to the customer, and fully use the services offered under these programs.
Within DHS, three components have responsibilities for conducting border and maritime R&D—S&T, the Coast Guard, and DNDO. S&T has five technical divisions responsible for managing the directorate’s R&D portfolio and coordinating with other DHS components to identify R&D priorities and needs. The Borders and Maritime Security Division (BMD) is responsible for most of S&T’s border and maritime related R&D, and its primary DHS customer is CBP. Most of S&T’s R&D portfolio consists of applied and developmental R&D, which is R&D that can be transitioned to use within 3 years, as opposed to longer-term basic research. In addition to conducting projects for its DHS customers, S&T conducts projects for other federal agencies and first responders. The S&T Office of University Programs also manages the DHS Centers of Excellence, which constitute a network of universities that conduct research for DHS component agencies on topics ranging from animal disease defense to catastrophic event preparedness. Of the nine funded centers, two are dedicated to border and maritime R&D—the National Center for Border Security and Immigration (NCBSI), led by the University of Arizona in Tucson and the University of Texas at El Paso, and the Center for Maritime, Island and Remote and Extreme Environment Security (MIREES), led by the University of Hawaii and Stevens Institute of Technology. Centers are typically funded through cooperative agreements for 5- to 6-year periods, with a review period each year. Ideas for projects to be undertaken by the centers are solicited at technical workshops with component-level subject matter experts, where the centers and DHS officials discuss technical or informational challenges. The Office of University Programs drafts these topics into research questions, which the Office publishes in a funding opportunity announcement. The Office of University Programs then examines the proposals it receives based on how the research could potentially further DHS’s mission.
Centers also hold annual meetings where officials are expected to brief DHS leadership on their work and their plans. Additionally, DHS stakeholders give presentations on their technology needs and challenges. The centers are then expected to incorporate these needs and challenges into their programs. The Coast Guard is a multimission, maritime military service within DHS. The Coast Guard’s R&D efforts are conducted and managed by its Research, Development, Test, & Evaluation (RDT&E) Program, which consists of the Office of RDT&E and the Research and Development Center. The center performs research, development, testing, and evaluation in support of all Coast Guard missions, as required. The majority of the Coast Guard’s R&D products are knowledge products, such as acquisition analysis studies or technical reports, as opposed to specific pieces of technology or prototypes. Its end users are typically other units within the Coast Guard, such as its Office of Boat Forces or Deployable Specialized Forces. DNDO also conducts R&D applicable to border and maritime security, as it relates to its mission of detecting the use of an unauthorized nuclear explosive device, fissile material, or radiological material in the United States. After its establishment in 2005, DNDO assumed responsibility from S&T for certain nuclear and radiological R&D activities, and its R&D efforts are primarily conducted and managed by its Transformational and Applied Research Directorate (TARD). DNDO’s R&D efforts result in technology prototypes, development of software, and computer modeling for the detection of radioactive and nuclear materials. These efforts are crosscutting, meaning they can be used in more than just a border and maritime environment. Each of DHS’s border and maritime R&D organizations uses a different process to determine which R&D projects to pursue.
S&T BMD reaches out to DHS-level officials as well as to operational-level end users, such as Border Patrol agents, to discuss needs, resources, and priorities and to determine which projects to initiate and which should be continued or discontinued. BMD officials said that depending upon whom they speak with—that is, headquarters-level officials versus field-level operators—they often receive different answers regarding needs and priorities. Further, it is the role of the S&T project manager to facilitate agreement and consensus among the different offices within the component. The Coast Guard seeks input from its internal offices and its long-term strategies to identify capability gaps or ideas for new technological solutions, which are then evaluated against available funding and other priorities to produce a prioritized ranking of projects that can typically be executed within 2 fiscal years. Some projects require more than 2 years. DNDO officials stated that their process for selecting and prioritizing projects is based on a review of capability gaps and government priorities in accordance with the Global Nuclear Detection Architecture as well as the Nuclear Defense R&D Road Map for fiscal years 2013 to 2017. DNDO officials also stated that they consider what technologies exist before considering advanced technology development and that their goal is to complete an R&D project with a proof-of-concept study or a prototype. The S&T Directorate, DNDO, and the Coast Guard are each appropriated funding for R&D. Table 1 provides DHS’s R&D budgets from fiscal years 2010 through 2013 for the various entities that conduct border and maritime R&D. A portion of each component’s budget is dedicated to border and maritime R&D. However, as we reported in September 2012, DHS did not know how much its components invest in R&D, making it difficult to oversee R&D efforts across the department.
For example, we reported that data DHS submitted to OMB showed that DHS’s R&D budget authority and outlays were underreported because DNDO did not properly report its R&D budget authority and outlays to OMB for fiscal years 2010 through 2013. Specifically, for fiscal years 2010 through 2013, DHS underreported its total R&D budget authority by at least $293 million and outlays for R&D by at least $282 million because DNDO did not accurately report the data. We also identified an additional $255 million in R&D obligations for fiscal year 2011 by other DHS components. Further, we found that DNDO did not report certain R&D budget data to OMB, and R&D budget accounts include a mix of R&D and non-R&D spending, further complicating DHS’s ability to identify its total investment in R&D. As of June 2013, DHS had 95 ongoing R&D projects related to border and maritime security. See table 2 below for the number and total anticipated cost of ongoing border and maritime R&D projects by performing office. It is important to note the total amount of resources and spending dedicated to R&D projects, and the final result and impact of these projects can vary dramatically based on the scope and purpose of the project. Some R&D projects aim to produce a specific prototype or piece of technology for an end user, such as sensors that CBP can use to better detect tunnels, nonlethal weapons Coast Guard can use to disable a boat’s outboard engines, or a range of contraband marker systems. Other projects may produce software to integrate information technology systems, such as software to integrate and display information gathered by multiple sensor systems. Finally, other projects may produce a report or knowledge product, which aims to inform an acquisition decision, such as providing the preliminary research required for developing a statement of work—a key step in the acquisition process. 
Individual R&D projects can range in cost from several thousand dollars to several million dollars per fiscal year. Between fiscal years 2010 and 2012, DHS border and maritime R&D agencies reported producing 97 deliverables at an estimated cost of about $177 million and 29 discontinued projects at a cost of about $48 million. An R&D deliverable can yield a variety of results. For example, a deliverable, such as a report or a prototype, can be provided to a customer and then transitioned into an acquisition program or further developed by that customer; delivered but not used for various reasons; or discontinued prior to delivery. Twenty-nine projects were discontinued prior to their delivery to a customer. Projects were discontinued for a variety of reasons, and it is important to note that the discontinuation of a project or deliverable did not necessarily mean that it was a failed R&D effort. In some cases, the R&D results demonstrated that there was no technologically feasible option to address a problem or that a certain type of technology would not provide the desired solution. For example, DHS Office of University Programs officials stated that they expect to routinely discontinue projects that are not demonstrably innovative, are not progressing, or have no identifiable end user, and to reallocate resources to new innovative projects or to projects with specified customer interest. Further, according to the officials, project discontinuation is a good outcome in many circumstances where research success cannot be foretold, and it is a necessary part of a portfolio-based research strategy. Office of University Programs R&D and discontinued research projects are discussed in more detail later in this report. Projects are also discontinued or merged into other projects because of a lack of available data, budget cuts, or DHS management determining that a project is no longer a priority to its potential customers.
For example, S&T BMD officials stated that insufficient funding resulted in the discontinuation of 2 projects, and that for 2 other projects, the customer's priorities shifted and the R&D was terminated. In addition, DNDO stated that it determined that some methods for detecting shielded nuclear material were feasible but too costly and that certain detection devices would be too large for practical use in the field, so the R&D was discontinued. Table 3 provides the costs of R&D projects with deliverables, including discontinued projects, for fiscal years 2010 through 2012. See appendix I for a list of all the R&D projects and their deliverables for fiscal years 2010 through 2012 by component or office and project type, along with their associated costs. DHS R&D deliverables were wide-ranging in their cost, scope, and scale. For example, agencies reported producing deliverables ranging from imaging and radar prototypes to container screening devices and written market analyses of commercially available technologies. These 97 deliverables fell into three general categories: (1) knowledge products or reports; (2) technology prototypes; and (3) software, as listed in table 4. Knowledge products or reports: Thirty-eight of the 97 deliverables (39 percent) resulted in knowledge products that contained analysis and comparison testing of technologies, summarized field testing of technologies, or provided reference materials for use by DHS components. For example, one of the DHS Centers of Excellence developed formulas and models to assist in randomizing Coast Guard patrol routes and connecting networks together to assist in the detection of small vessels. Additionally, the Coast Guard conducted a technology evaluation to help its acquisition office determine the best tactical radios for use by law enforcement boarding teams.
Technology prototypes: Twenty-eight of the 97 deliverables (29 percent) resulted in technology prototypes, such as the development of new sensors, imaging equipment, or devices for detecting nuclear material. For example, S&T BMD developed prototype radar and upgraded video systems for use by Border Patrol agents and a prototype scanner to screen interior areas of small aircraft without removing panels or the aircraft skin. See figure 1 for an example of a prototype product developed by BMD for CBP. Software: Thirty-one of the 97 deliverables (32 percent) resulted in the development of software, such as algorithms used in detection systems. For example, BMD developed software that enables intelligence personnel to quickly survey large areas of ocean and find vessels of interest. Additionally, DNDO developed software that extracts data from radiation portal monitors and uses the data to improve algorithms used in detecting radioactive material. S&T BMD, the Coast Guard, and DNDO's R&D customers had mixed views regarding the impact of the R&D products or deliverables they received. Of the 126 R&D deliverables or projects DHS completed or discontinued from fiscal years 2010 through 2012, we interviewed DHS-identified customers or other relevant officials for 33 (19 customers and 6 program managers for 20 S&T BMD deliverables; 8 Coast Guard customers or other relevant officials for 8 deliverables or completed projects; and 2 DNDO project managers for 5 projects). Given our scope, we discussed ongoing and completed projects managed by S&T's Office of University Programs with the Coast Guard and CBP, but did not systematically follow up with the recipients of each deliverable produced by the border and maritime related Centers of Excellence.
Of the 20 S&T BMD deliverables, customers of 7 stated that the deliverables met their office's needs, customers of 7 said they did not, customers of 4 did not know, and customers for 2 could not be identified, as detailed below in table 5. For example, customers within CBP's Office of Technology Innovation and Acquisition reported that BMD's analysis and test results on aircraft-based use of wide area surveillance technology helped CBP decide whether it should pursue acquiring such technology. Another customer (DHS personnel assigned to the Joint Interagency Task Force South, United States Southern Command) reported that software developed by BMD to enable analysts to quickly find and characterize small maritime vessels in an image showing large areas of ocean was highly valuable and met their office's needs. Conversely, customers of 7 of the 20 deliverables reported that the deliverable did not meet their office's needs. In these cases, the customers explained that budget changes, other ongoing testing efforts, or changes in mission priorities were the reasons the deliverables had not met their needs, and they pointed out that their relationship with S&T had been positive and highly collaborative. In other cases, customers noted that while the deliverable had not been used as intended, it informed their office's decision making and helped to rule out certain technologies as possibilities. In this regard, the customers felt the R&D was successful, despite the fact that the deliverable had not been or was not being used. Further, customers of 4 deliverables did not know or could not determine whether the deliverable met their office's needs, and customers for 2 deliverables could not be identified by S&T, CBP, or the Coast Guard.
BMD officials described, for example, why some of these older projects did not have identifiable customers, as well as the actions BMD had taken to help ensure that new projects have clear, committed customers. Under S&T's former process for initiating projects—which was carried out under S&T's former Under Secretary and dissolved by its current Under Secretary—BMD officials said that the potential existed to engage in R&D without a clear commitment from a customer. In February 2012, S&T issued a new project management guide that requires project managers to specify the customer by office and name, and to describe customer support for the project, including how the customer has demonstrated commitment for and support of the project. BMD officials said they believed this new process would prevent future R&D funding from going toward projects without a clear customer. The Coast Guard reported producing 23 deliverables from fiscal year 2010 through fiscal year 2012, and we met with officials involved with 8 of those projects, as listed in table 6. For 4 of the 8 deliverables, Coast Guard officials reported that the deliverables had met internal Coast Guard needs. For example, one customer reported using a Coast Guard report on secure tactical radio communications systems to jump-start market research and to help develop a statement of work for the acquisition documents for the new radios. The customer said that the Coast Guard report did a good job of defining requirements and summarizing the needs of the operational end users—in this case, Coast Guard boarding teams. Ultimately, the customer said, 762 radios were acquired, and end users reported the radios were a vast improvement over what they had possessed in the past. For 3 of the 8 deliverables, the impact was unknown because the research was ongoing. Finally, for 1 of the 8 deliverables, the customer was unknown or could not be determined.
For example, for the Low-Cost Swimmer Detection System, DHS S&T was identified as the customer, but an S&T official we spoke with said that S&T was the project manager and the Coast Guard was actually the customer. Ultimately, the project did not continue because of changes in Coast Guard budget priorities. Many of the 15 Coast Guard deliverables we did not discuss with customers were different in nature from the deliverables discussed with S&T's customers, as were their customers: in some cases the customer was the Coast Guard's own R&D Center, with the work supporting the maintenance of the center's capabilities for conducting technological and analytical support. For instance, while many of S&T's deliverables were prototypes or demonstrations for customers outside of S&T, the Coast Guard's deliverables were used within the Coast Guard and included such things as the independent validation and verification of the Coast Guard's maritime security risk analysis model, analysis support, and support for the Deepwater Horizon spill response. DNDO reported producing 42 deliverables from fiscal years 2010 through 2012—encompassing 6 discontinued projects and 36 projects that either transitioned to the next phase of DNDO R&D or were completed and ended. We met with officials involved with 5 of those projects, as listed in table 7. According to DNDO Transformational and Applied Research Directorate (TARD) officials, they consider a project completed when it results in either a prototype or a knowledge product for integration into an acquisition program. Specifically, 17 of the 36 projects were part of another ongoing DNDO project or a Small Business Innovation Research project, and 19 of the 36 were commercialized or concluded. DNDO R&D is different from the R&D of S&T or the Coast Guard for many reasons.
For one, a DNDO project may start at a very low technology readiness level, in other words at the basic research level, but may end up being merged into other similar efforts in order to achieve a higher project goal. In these cases, the R&D customers are DNDO project managers, rather than an external DHS customer, such as CBP. We discussed 5 DNDO R&D deliverables at various R&D phases with DNDO officials—4 of which were deliverables from ongoing or completed projects and 1 of which was a discontinued project, as shown below in table 7. Given the different nature of DNDO’s R&D efforts, we discussed the outcomes of DNDO’s completed deliverables with their project managers and senior DNDO officials. DNDO TARD officials stated that their primary customers are themselves and DNDO’s Acquisitions Directorate. We also met with officials from the DNDO directorates responsible for taking early- stage R&D work and moving it toward later-stage development and acquisitions. These officials said that the early stage R&D at DNDO feeds into the prioritized ranking of gaps in the global nuclear detection architecture, as well as into the analysis-of-alternatives phase of DNDO’s solutions development process. Two of the 5 projects we discussed had moved from early-stage R&D into other projects further along in DNDO’s project management process. Two of the 5 projects were completed, with 1 project providing information that informed DNDO decision-making processes, and the other project resulting in a commercialized product. Last, with regard to the 1 discontinued project, DNDO officials said there were many lessons learned, but that the particular project’s technology was determined to be too expensive to continue pursuing. Both the Coast Guard and DNDO reported having processes in place for gathering the views of customers regarding the results of R&D deliverables. 
For example, the Coast Guard RDT&E Program has a process in place for surveying its customers following the completion of a project and reported using this information for future R&D planning. The Coast Guard's survey instrument seeks feedback on customer satisfaction, timeliness, utility, and communications, among other things. The feedback step is part of the Coast Guard's Continuous Project Process. We reviewed 5 completed surveys from Coast Guard customers; the feedback included specific suggestions for improvements in the R&D process and positive comments regarding meeting customer needs and communication. DNDO officials identified several ways in which they seek feedback from customers on the usefulness of their deliverables. For instance, in its solutions development process guide, DNDO directs project managers to engage in initial, small-quantity production of a system so that the customer can thoroughly test the system and gain a reasonable degree of confidence that the system actually performs to the agreed-upon requirements before contracts for mass production are signed. For example, during the development of the Multimodal Automated Resolution, Location, and Identification of Nuclear Material project, DNDO managers reported gaining feedback from CBP officials through their participation in the R&D, since CBP will be the eventual end user of the technology. DNDO also details in its solutions development process guide how it works with customers to test fielded technology solutions, including documenting lessons learned and obtaining feedback as part of its R&D continuous development process. DNDO's internal R&D customers (other directorates) stated that they provided feedback on DNDO's R&D efforts through other mechanisms, such as letters prioritizing technology needs and gaps.
Coast Guard and DNDO officials also stated that it is not difficult to obtain feedback from their R&D customers, since their customers are generally within their own organizations. Though S&T Borders and Maritime Security project managers seek feedback during project execution, BMD does not gather and evaluate feedback from its customers to determine the impact of its completed R&D efforts and deliverables, making it difficult to determine if the R&D is meeting customer needs. Further, in some cases, the customer of S&T's R&D was not clear. For example, on BMD's Wide Area Motion Imagery project, BMD officials said that CBP was the customer of this deliverable, but CBP officials we spoke with did not know who was using the results of the R&D. However, on a project level, BMD officials stated that their office prepared reports related to this project and was told that the reports were helpful in CBP's broader consideration of options for new airborne sensor systems. In another S&T project, a Coast Guard customer identified by BMD was involved in testing the technology (the Tethered Aerostat Radio Processor) for BMD, but was not involved in the request for the R&D or in a position to determine the extent to which the project met the Coast Guard's needs. Similarly, a CBP customer identified by BMD was aware of two R&D deliverables that BMD said were transitioned to his office, but the official was unable to provide additional information on the projects' impact. As we mentioned above, S&T recently made policy changes that require project managers to specify a project's customer by office and name and to describe customer support for the project at the project's outset. This change should help S&T seek feedback from its customers upon completion of a project. For five projects, S&T BMD project managers and customers we met with could not provide definitive information on whether the deliverables had achieved their intended goals.
For example, S&T and CBP officials agreed that R&D efforts on the Aviation Scanner project produced a prototype scanner to screen interior areas of small aircraft without removing panels or the aircraft skin; however, the impact on CBP's mission needs or its future acquisitions is unknown pending future demonstration and testing in 2013. The National Academy of Sciences has stated that evaluating the relevance and impact of R&D is a key stage of the R&D process and that measuring the impact of R&D activities requires looking to the end users and stakeholders for an evaluation of the impact of a research program, such as through polling or systematic outreach. According to S&T BMD officials, since they deal with multiple DHS components and are not within the same agencies as their customers, it is sometimes difficult to identify the customer of the R&D and to determine its impact. S&T officials also stated that in S&T's 2012 update to its project management guide, S&T included in its project closeout process a step to collect feedback from all relevant customers and a template for collecting this feedback. However, the S&T officials stated that this has not yet been carried out and that much work remains to be done to ensure this outreach occurs and feedback is collected. BMD officials agreed that a more rigorous feedback process would provide S&T leadership a better understanding of how S&T is serving its customers and the public. While S&T has developed a process and template to collect feedback at the end of each project and incorporated them into its project management plan, it does not plan to survey customers each time it provides a deliverable. As previously noted, S&T projects are often conducted over several years before they are concluded or, in some cases, merged into other projects. These projects also often produce multiple deliverables for a customer that meet a specific operational need.
For example, the Ground Based Technologies project began in fiscal year 2006 and is slated to continue through fiscal year 2018. During this period, S&T has provided multiple R&D deliverables to CBP—including test results comparing different ground-based radar systems, as previously mentioned. The National Academy of Sciences has stated that feedback from both R&D failures and successes may be communicated to stakeholders and used to modify future investments. Moreover, S&T has not established time frames and milestones for when it will begin collecting and evaluating feedback on these projects, nor has it stated if and when it plans to begin gathering feedback on deliverables and incorporating it into its broader processes for setting R&D priorities and portfolios. According to A Guide to the Project Management Body of Knowledge, which provides standards for project managers, specific goals and objectives should be conceptualized, defined, and documented in the planning process, along with the appropriate steps, time frames, and milestones needed to achieve those results. Establishing time frames and milestones for collecting and evaluating feedback from its customers could help S&T better determine the usefulness and impact of its R&D projects and deliverables and make better-informed decisions regarding future work. S&T's BMD, the Coast Guard, and DNDO reported taking a range of actions to coordinate with one another and their customers to ensure that R&D is addressing high-priority needs. Officials from BMD identified several ways in which the division coordinates R&D activities with its customers, which are primarily offices within CBP.
Agency details and Integrated Product Teams: BMD officials reported having a person detailed to CBP's Office of Technology Innovation and Acquisition and identified its integrated product teams, such as its cross-border tunnel threat team, and jointly funded projects as ways in which the division works to ensure its R&D efforts are coordinated with CBP. Joint strategies: To improve coordination with its customers, in 2012, S&T began developing joint R&D strategic plans with various CBP offices that are designed to help ensure projects are addressing the highest-priority needs. S&T officials developed a draft strategy with the Office of Border Patrol in June 2013, plan to develop strategies with other CBP offices throughout the rest of 2013, and plan to develop a strategy with the Coast Guard in early 2014. BMD officials said that CBP's Office of Technology Innovation and Acquisition—which aims to ensure CBP's technology efforts are integrated across CBP and assists in managing new technology acquisitions—will be a signatory participant on all of the strategies. BMD also plans to develop broader component-level plans in the future. Presently, BMD uses letters of intent and technology transition agreements to coordinate with customers on a project-by-project level. These agreements identify objectives specific to an individual project. Officials from the Coast Guard's R&D Center identified several ways in which it coordinates with its customers (which are typically other offices within the Coast Guard). Annual project cycle: Officials identified the Coast Guard's annual project cycle as one of its primary mechanisms for coordinating with its internal customers to ensure that R&D efforts are addressing the most pressing operational needs. The annual project cycle involves the selection and ranking of its R&D portfolio extending out 2 fiscal years.
To develop this portfolio, the Coast Guard has both an annual idea submission process and annual workshops, where officials develop the R&D Center project portfolio. Ideas for new projects can come from any rank or office, and R&D Center officials said that they will consider all the ideas they receive. Technology transition agreements: The Coast Guard RDT&E Program also uses internal technology transition agreements to ensure that internal Coast Guard customers and stakeholders are prepared to move forward with an R&D prototype product when it is delivered. These technology transition agreements are non-binding agreements, internal to the Coast Guard, for ensuring the coordination and transition of R&D products. Technology summits: In an effort to enhance awareness of R&D efforts across DHS, in February 2012, the Coast Guard hosted a joint science and technology summit at which Coast Guard, BMD, and S&T Office of University Programs officials gave overview briefings of their respective work. The summit discussion included such topics as the value of routine meetings between the Coast Guard's R&D program and various S&T divisions, the successes that S&T has had working with CBP, and the potential for the Coast Guard to identify and replicate successful ways to increase its interactions with S&T. DNDO also has several mechanisms for coordinating its R&D efforts that vary depending upon the maturity of the technology. For technologies that are close to deployable, DNDO's Architecture and Plans Directorate, which does not conduct R&D, engages directly with CBP and other potential customers to identify what technology enhancements the components need and then formulates plans and recommendations for solutions.
For example, DNDO officials testified in 2010 that DNDO had outfitted the Coast Guard with over 5,000 personal radiation detectors, and officials from the Architecture and Plans Directorate reported working closely with the Coast Guard to add a radiological nuclear module to the Coast Guard’s terrorism risk model. DNDO’s Transformational and Applied Research Directorate, which does conduct R&D, works with less mature technologies and therefore does not always interact directly with the operational components. While BMD, the Coast Guard, and DNDO were each taking actions to coordinate with their R&D customers, work remains to be done at the departmental level to ensure border and maritime R&D efforts are mutually reinforcing and are being directed toward the highest priority needs. We recently highlighted coordination of R&D efforts as a challenge for DHS and we made recommendations for improving coordination. We reported that S&T, which is statutorily required to coordinate R&D efforts across the department, has taken some steps to coordinate R&D efforts across DHS, but that the department’s R&D efforts are fragmented and overlapping, which increases the risk of unnecessary duplication. We recommended in September 2012 that DHS develop a description of the department’s processes and roles and responsibilities for overseeing and coordinating R&D investments and efforts. As of June 2013, DHS had not made a decision about how it specifically planned to address these recommendations. BMD officials reported in June 2013 that the directorate’s efforts would likely begin at the project and office levels (specifically through the office level strategic plans S&T’s Homeland Security Advanced Research Project Agency is developing with CBP and the Coast Guard), and from there, move to a departmental level. 
Until DHS makes a broader determination about what policies and procedures will govern the roles and responsibilities for coordinating R&D, the effectiveness of the various coordination approaches will remain unclear. DHS S&T Office of University Programs officials discussed the variety of ways in which centers and DHS components collaborate and share information, and officials from 4 of the 5 centers we met with were generally satisfied with the level of communication and collaboration between their centers and DHS. Projects at the centers have 5-year work plans that go through a midpoint review process with component-agency input, during which the projects can be reevaluated and modified. To solicit ideas for new projects, the Office of University Programs holds technical workshops with component-level subject matter experts. The component subject matter experts discuss a technical or informational challenge they have, and as a group, the workshop participants identify the key research questions that need to be addressed imminently. The Office of University Programs then drafts these issues into more formalized research questions and puts them out to the universities and centers. The office then examines the proposals it receives based on how the research can further DHS's missions. Office of University Programs officials stated that the office's process for soliciting research topics and evaluating proposals works well and keeps the centers flexible. Center officials also reported collaborating on a variety of projects with non-DHS customers, such as DOD (specifically the Army Research Laboratory and the Air Force Office of Scientific Research), NOAA, the National Institutes of Health, the National Science Foundation, and the Department of Justice. However, officials from DHS's primary land border security Center of Excellence reported challenges with respect to a lack of clarity regarding protocols for access to DHS information when conducting R&D.
Specifically, officials from this center reported that they had been regularly unable to obtain data from CBP to complete research the center was conducting on CBP's behalf, which resulted in delays and terminated R&D projects. These officials reported that of the center's 4 discontinued projects and 9 completed projects, 4 experienced delays, incomplete data, or incorrect data. Office of University Programs staff stated that misunderstandings surrounding procedures for nongovernmental personnel's access to data can be a challenge, and officials said in June 2013 that they had not fully developed all of the procedures regarding sharing government data. However, the officials said that under the terms of the cooperative agreements between the Office of University Programs and the centers, CBP has no obligation to provide government-generated data of any kind. The officials said that this is because universities generally operate under the principle of publishing at will, which could limit the ability of DHS to restrict the publication of potentially sensitive information. The Office of University Programs provides several avenues for DHS components to be involved with the centers and potential projects, including writing funding opportunity announcements for centers, selecting projects to fund, reviewing and negotiating work plans, and recommending corrections. But given the challenges raised by the primary border security center, the Office of University Programs could help ensure that the approximately $3 million to $4 million a year dedicated to each university center is used more effectively by more carefully considering data needs, potential access issues, and potential data limitations with its federal partners before approving projects. We have previously stated that identifying data sources and collection procedures is one of the five key steps to an effective evaluation design.
Further, we have stated that in selecting a product's design, agencies should determine a design's limitations resulting from the information required or the scope and methodology—such as questionable data quality or reliability, inability to access certain types of data, or security classifications or confidentiality restrictions—and address how these limitations will affect the product. Given the challenges raised by officials from both universities leading the R&D for land border security, a more rigorous review of potential data-related challenges and limitations at the start of a project could help R&D customers (such as CBP) identify data requirements and potential limitations up front so that money is not allocated to projects that potentially cannot be completed. DHS Office of University Programs officials agreed that making sure their clients take additional steps to identify data requirements up front could help address these challenges. DHS's R&D agencies reported regularly coordinating with the Department of Defense (DOD) and the Department of Energy (DOE) in the development of new border and maritime security technologies at both an individual project level and a departmental level. For example, officials from S&T, the Coast Guard, and DNDO each reported having productive relationships with several DOD offices—including the Office of Naval Research; the Defense Threat Reduction Agency; the Army Night Vision Center; and the Army Research, Development, and Engineering Command. Specifically, DNDO officials said DNDO has an interagency agreement with DOD to develop long-range nuclear detection devices. DNDO also has a memorandum of understanding with DOD's Defense Threat Reduction Agency, DOE's National Nuclear Security Administration, and the Office of the Director of National Intelligence to coordinate national nuclear detection R&D programs.
Further, Coast Guard officials reported partnering with the Navy to develop unmanned aerial surveillance technologies capable of launching from the deck of a cutter and to develop systems to disable a boat's outboard engines, as shown in figure 2. Additionally, the Coast Guard, DNDO, and S&T have used the facilities at several of DOE's national laboratories, including the Pacific Northwest National Laboratory, the Savannah River National Laboratory, and the Lawrence Livermore National Laboratory. Within S&T, the Office of National Laboratories is responsible for helping facilitate cooperative agreements between the national laboratories and DHS components and is to review all statements of work issued from DHS to the national laboratories. However, we reported in September 2012 that 11 DHS components had reimbursed the national laboratories for R&D between fiscal years 2010 and 2013, but that the Office of National Laboratories could not provide us with any information on those activities and told us it did not track them. Instead, the Office of National Laboratories told us it used other means to monitor DHS work at the laboratories, such as maintaining relationships with components and S&T, reviewing task orders sent to the laboratories from DHS, visiting laboratories, and relying on laboratories' self-reporting of their work. Though CBP does not report conducting R&D, offices within CBP also reported coordinating directly with DOD offices—such as the Joint Non-Lethal Weapons Program; the Army Research, Development, and Engineering Command; and the Army's Acquisition, Technology, and Logistics Office—in the development and testing of particular technologies. For example, CBP's Office of Air and Marine reported visiting the Navy's laboratory facilities to learn more about the Navy's efforts to develop a device capable of disabling a boat's outboard engine through a directed energy source.
The Navy was conducting this research in collaboration with the Coast Guard, and officials from the Office of Air and Marine asked to be kept apprised of the project’s progress and to be allowed to participate in any testing and demonstrations of prototypes. DHS officials also reported participating with DOD on a variety of working groups, including the Air Domain Working Group and the Executive Aviation Commonality Working Group. These collaborative relationships have had benefits for DOD as well. For instance, officials from the Office of Air and Marine reported working with the Navy when it was developing new ocean surveying software for tracking vessel movements. The Navy needed a specific number of testing hours on an air platform, as well as sensor operators familiar with sea search radar systems, before the software could move ahead in its development. The Office of Air and Marine and the Navy agreed, via a memorandum of agreement, to adapt the Navy’s software to three different types of Office of Air and Marine aircraft. The Navy funded the nonrecurring engineering, and the Office of Air and Marine paid to install and integrate the software on its aircraft. Additionally, officials from CBP reported working with DOD to conduct joint testing on a vehicle immobilization device. The officials said that DOD had the funding to do the testing but needed vehicles, which CBP had. CBP conducted the field testing of the devices and shared the data with DOD. In a 2011 testimony, a senior DOD official said that DOD and DHS were cosponsoring a tunnel detection capability demonstration and that technologies resulting from these efforts were expected to be fielded domestically and abroad. Despite successes in the development of new security technologies, DHS and DOD have had fewer successes in the repurposing of already-existing DOD technologies for border and maritime security.
CBP has an agreement with S&T to work with DOD on repurposing its technologies, such as laser scopes, radios, and surveillance equipment, and DHS and DOD have tested some of these technologies in south Texas but have identified several challenges with using the equipment. First, the laser scopes used by DOD do not meet the eye safety requirements in place for federal law enforcement officers operating within U.S. borders. Second, radios that are developed by DOD for use in foreign countries do not always meet Federal Communications Commission requirements for use in the United States and cannot be easily reprogrammed. Third, use of certain surveillance aircraft used by DOD overseas is restricted in U.S. airspace, and such aircraft therefore cannot be used by CBP. Officials from CBP’s Office of Field Operations said that they were offered equipment from DOD but were unable to acquire it because it did not meet specific CBP security requirements and was not compatible with CBP’s existing operations and maintenance contracts. Specifically, Office of Field Operations officials said their office cannot service DOD’s equipment with its existing vendor service contracts, and that operations and maintenance is a major factor in whether CBP can add something new to its fleet. For instance, officials from the Office of Field Operations said their office was offered DOD small X-ray vans. The office sent officials to inspect the vans, and they concluded it would have required a substantial investment on CBP’s part to get the vans into working condition and to upgrade them so that they could be integrated into CBP’s existing fleet. DHS and DOD officials indicated that they would continue to work closely together to evaluate opportunities for integrating DOD equipment into DHS’s homeland security efforts. S&T, the Coast Guard, and DNDO coordinate with the private sector on a project level, as well as through conferences, industry days, and workshops.
S&T refers to its external coordination efforts as technology foraging, and it identified several ways in which it coordinates its border and maritime R&D efforts with the private sector. The six BMD program managers we spoke with—who were responsible for managing projects resulting in 18 R&D deliverables and 3 of BMD’s current portfolios—said that at the start of every new project, they canvass industry experts to gather information on current state-of-the-art technologies, to gain expertise, and to identify where DHS’s R&D efforts would be most beneficial. For example, as part of its project to research technologies to detect cross-border underground tunnels, BMD reached out to officials in the mining and oil industries to discuss their respective areas of expertise in using underground sensor systems. From this, BMD determined that the currently available technologies were not ideal for the type of detection capabilities DHS needed and used this information in developing a more effective sensor system for DHS. The Coast Guard also identified several avenues through which it coordinates with the private sector. For instance, the Coast Guard’s RDT&E Program uses broad agency announcements to survey industry to learn what technologies are already available. The Coast Guard also uses cooperative research and development agreements under the Technology Transfer Act to partner with industry on R&D projects and in July 2013 reported having six such agreements underway. Further, since 2000, the Coast Guard RDT&E Program has participated in the Coast Guard Innovation Exposition, which has assisted in informing Coast Guard decision makers about research that is completed, underway, or planned. The Innovation Exposition was designed to provide a forum to exchange ideas and collaborate within the Coast Guard and with its government, industry, and academic partners.
Because of budget constraints, the Coast Guard suspended the exposition beyond 2011 and has stated that it is reevaluating the goals and outcomes of the expositions in terms of their costs and benefits. DNDO reported engaging in similar forms of private industry outreach. For instance, DNDO hosts an annual industry day that is attended by officials from industry, academia, the national laboratories, and others. As part of the industry day, DNDO reported collaborating with the private sector to discuss ways to enhance existing radiation detection devices and develop new technologies that will meet the needs of federal, state, and local law enforcement officials through programs such as the Commercial First initiative and the Graduated Radiation and Nuclear Detector Evaluation and Reporting program. DNDO officials reported signing an industry engagement policy in April 2013, which states how DNDO plans to meet directly with vendors to learn about new technologies. The policy provides guidance and a standard operating procedure for DNDO employees on holding structured meetings with vendors and exchanging information when researching commercially available technologies. It also identifies fundamental procurement principles and provides guidance for meeting with vendors and industry representatives. Beyond its project-level outreach, DHS S&T officials identified what they refer to as technology foraging as the directorate’s approach for identifying and adapting technologies already available in the private sector. In a draft June 2013 document, S&T stated that technology foraging is a formal and structured method for identifying technologies and research and that project managers should use this knowledge to, in part, conduct more informed project planning. To help integrate technology foraging into its regular project management, S&T established a Technology Foraging Office within its Research and Development Partnerships Group.
S&T officials reported that staff from this office is available to provide technology foraging services to S&T project managers upon request. For example, in July 2013, S&T reported leveraging a coastal-weather radar system at the National Oceanic and Atmospheric Administration (NOAA) to supplement software S&T developed that lets the Coast Guard sweep a bay with radar as a way to track small anonymous boats or other vessels that transport drugs or other illegal contraband. Further, S&T invested $11 million in a private, not-for-profit strategic investment firm designed to coordinate advances in commercial technologies with the needs of the U.S. intelligence and security communities. S&T officials reported that this investment was designed to support technology foraging efforts. Securing the nation’s land borders and waterways is a complicated undertaking requiring many elements, including effective and coordinated R&D programs. Border and maritime R&D efforts at DHS in recent years have resulted in dozens of deliverables to various DHS components. But these products have had varying levels of impact on DHS’s ability to acquire new security technologies or to advance its homeland security missions. Establishing timeframes and milestones for collecting and evaluating feedback from its customers on the usefulness and impact of the R&D projects and deliverables they receive could help S&T ensure that the technologies being developed and delivered to the Coast Guard, Customs and Border Protection, U.S. Immigration and Customs Enforcement, and other DHS components are meeting customer needs and achieving their intended goals. Further, DHS has made progress leveraging the expertise and resources of academia through its Centers of Excellence, but has faced cancelled and delayed projects in some areas because of a lack of data from DHS. 
By ensuring that potential challenges and limitations with regard to data quality, accessibility, and availability are reviewed and understood prior to approving projects, DHS can help ensure that it focuses its resources on those projects that are better positioned for success. To help ensure that DHS effectively manages and coordinates its border and maritime R&D efforts, we recommend that the Secretary of Homeland Security instruct the Under Secretary for Science and Technology to (1) establish timeframes and milestones for collecting and evaluating feedback from its customers to determine the usefulness and impact of its R&D projects and deliverables, and use this feedback to make better-informed decisions regarding future work, and (2) ensure that design limitations with regard to data reliability, accessibility, and availability are reviewed and understood before approving Center of Excellence R&D projects. We provided a draft of this report to DHS for its review and comment. DHS provided written comments, which are reproduced in full in appendix II, and concurred with our recommendations. DHS also described actions it plans to take to address the recommendations. Specifically, according to DHS, all DHS S&T project plans will be modified to require formalized feedback from its R&D customers at key project milestones, such as testing or transition of a deliverable. Further, to improve its consideration of potential data needs, access issues, and data limitations, DHS S&T plans to develop standard guidelines and protocols for all centers of excellence, which would describe, for example, how data sets must be modified to enable use in open-source, unrestricted research formats. DHS S&T also plans to encourage centers to voluntarily convene workshops to engage both researchers and DHS components to better understand constraints and to develop protocols for acquiring and using government data. DHS plans to complete these efforts by September 30, 2014.
Such actions should address the overall intent of our recommendations. DHS also provided written technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Homeland Security, appropriate congressional committees, and other interested parties. This report is also available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9627 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.

Appendix I: Department of Homeland Security (DHS) Border and Maritime List of Completed and Discontinued Research and Development (R&D) Project Deliverables for Fiscal Years 2010 through 2012

Project description:
- Study of available interventions (methods and tools) for use by rescuers or good Samaritans in maritime mass rescue incidents.
- The Coast Guard seeks a more economical and efficient solution for active aids to navigation than the current radar beacons.
- The command center watchstander is burdened with increasing quantities of maritime domain sensors and situational information.
- The federal on-scene commander requires investigation of various concepts/technologies to support spill response efforts with respect to the Deepwater Horizon oil spill.
- The Coast Guard seeks to improve understanding of how technologies worked in Deepwater Horizon and what opportunities for improvement should be pursued.
- The Coast Guard, DHS, and the Department of Defense (DOD) seek to improve the capability to non-lethally stop a noncompliant large vessel.
- The Coast Guard seeks to evaluate the function of first responders in pre-selected new WMD personal protective equipment.
- The Coast Guard law enforcement and intelligence communities lack single-point-of-entry access to federal databases to support search and enroll during at-sea biometrics.
- Fast-paced proof of concept requires operational support to ensure the proper operation of prototype systems, adequate training, train-the-trainer, and collection of operational metrics.
- Coast Guard boarding teams seek to improve reliability of voice and data communications between boarded vessels and support commands.
- Coast Guard tactical teams lack a diversionary device with the effects of conventional flashbangs without incurring secondary threats from excessive smoke or possible fire.
- The Coast Guard seeks to improve current capabilities to detect contraband in hidden compartments.
- The Coast Guard seeks low-cost underwater threat detection systems for protection of critical infrastructure.
- The information on current barrier technology countermeasures is not sufficient for decision making on their value in protecting high-value maritime assets from waterborne threats.
- Coast Guard mission analysis reports guide future acquisition and mission development needs in the Arctic and Antarctic high-latitude regions.
- No current summary of information is available to support decision makers with regard to the requirements, options, and cost of sustainment of the Coast Guard Polar icebreaking capability.
- The Coast Guard seeks to improve the ultra-high frequency radio communications capability introduced by the Rescue 21 system, affecting intra-Coast Guard communications as well as Coast Guard communications with other government agencies and port partners.
- The Coast Guard seeks to improve the ability to track/identify its own assets and people automatically during various mission conditions.
- The Coast Guard seeks to develop fiscally constrained fleet mixes to support leadership decision making.
- The Coast Guard lacks sensor performance data needed as input to simulation models used to support acquisitions and future force mix decisions.
- The Coast Guard seeks to improve its understanding of how a land-based unmanned aerial system can be operated to support its missions.
- The Coast Guard needs improved capability to effectively collect and correlate data and/or reduce data corruption from multiple inputs.
- Independent assessment, field test, and report.

The Coast Guard provided the product type. The Coast Guard’s Research Development Test and Evaluation Program portfolio is funded with a Research Development Test and Evaluation, Acquisitions, Construction & Improvement, or Operating Expenses appropriation. DHS S&T was the project sponsor and the Coast Guard R&D Center was the project executor.

- Prototype of a remotely operated mechanical vehicle stopping device.
- Asset tracking capability that employs existing P25 radio infrastructure.
- SIMON is an integration platform for data-producing systems, and an integration process for streamlining the modernization and modularization of new and existing systems.
- A real-time sensor integration and distribution system. SMS is used to incorporate local and regional sensor data to help enhance the track picture at command centers.
- Prototype riverine airboat system that provides ballistic protection to operators.
- Use participating maritime vessels as “additional eyes” to help detect and track other boats by exfiltrating a vessel’s on-board radar data and relaying received automatic identification system messages via satellite link to a ground node.
- Provides communications of an unauthorized container door opening or tampering detection, along with tracking information, to a central data collection system.
- Provides warning of unauthorized container door opening or tampering.
- Provide CBP’s Office of Technology Innovation and Acquisition (OTIA) with performance data and suitability information for commercial off-the-shelf (COTS) radars that could be used on towers or Mobile Surveillance Systems for border surveillance. To be utilized for requirements development and procurement specifications.
- Report delivered to CBP and the Coast Guard on test and evaluation of the system in the Tucson Sector, to assist future acquisition assessment.
- Report delivered to CBP and the Coast Guard on test and evaluation of the system in the Tucson Sector, to assist future acquisition assessment.
- Report delivered to CBP and the Coast Guard on test and evaluation of the system in the Tucson Sector, to assist future acquisition assessment.
- Prototype system to examine the interior areas of light aircraft without removing panels or aircraft skin/covers.
- Software application that enables operators to quickly survey large areas of ocean and find vessels of interest.
- Track low observable targets.
- Technology to “sniff” maritime containers for materials of interest. The test bed is a spin-off being used by the community for realistic testing of new cargo security technology.
- Defined business process and user interface requirements for a future operational requirements document.
- Provides for the detection, tracking, and reporting on vessel activity in the Mona Passage (strait between Puerto Rico and Hispaniola).
- Retrofit kit to enhance radar, imager, and graphical user interface/mapping for CBP’s current MSS units.
- Research into the use of a biological control agent to eliminate non-native weeds that create a security hazard along the southwest border. Included quarantine, rearing, and field deployment studies. The U.S. Department of Agriculture is funding scale-up and deployment.
- Development of a portal-less monitor, the Roadside Tracker, integrating both video and gamma ray imaging technology.
- Combine gamma and neutron measurements with non-radiation and contextual information to vastly improve threat/non-threat discrimination and radiation alarm resolution through the use of training tables and machine learning.
- Develop and implement several key physics and algorithm enhancements in areas where the code was lacking, improvements in evaluated data, and benchmark measurements for the MCNP/MCNPX Monte Carlo codes for active and passive detection systems.
- Development of an object-oriented and vertically-integrated radiation transport simulation and analysis system that provides an end-to-end environment for simulating gamma-ray background, nuisance sources, and targets of interest.
- Development of a muon radiography proof-of-concept prototype that utilizes a gas electron multiplier detector, promising shorter interrogation time and better spatial resolution compared with more traditional passive portal systems that detect dense objects such as special nuclear material.
- Development of a stacked array of cesium iodide panel detectors for significant improvement in detection capability for both contraband and special nuclear material at the required scan throughput.
- Production of receiver operating characteristic curves and analytical determination of the amount of, and the reasons for, improvements of decisions based on fused data rather than simple Boolean combinations of decisions.
- Demonstrate the ability to develop and deploy new detector concepts with fully integrated signal and information analysis to attain breakthrough improvements in the nation’s ability to detect domestic nuclear threats through the smuggled HEU interdiction through enhanced analysis and detection framework, which comprises four research teams (areas): detectors, systems analysis, radiation transport and inversion, and social science and policy.
- Development of interdiction models, solution algorithms, insights, and specific recommendations for optimally prioritizing sites for locating radiation detectors to thwart an intelligent and informed smuggler of nuclear material over a global transportation network, and to develop tools for rapid computation of physics-based detection probabilities that parametrically address a range of detectors, types of special nuclear material, and local conditions that in concert determine detection probabilities.
- Explore how a systems approach can be used to design and analyze systems for detecting nuclear material at our nation’s ports. The team developed a risk-based framework for screening cargo containers for nuclear material and developed an expert judgment tool to rank nuclear threat risk levels for incoming vessels.
- Mobile passive detection systems with a goal of detection at a distance, 1 mC at 100 meters. The unit has a backplane made of cesium iodide logs and a passive mask/antimask on either side, enabling dual-sided coded aperture imaging.
- Mobile passive detection systems with a goal of detection at a distance, 1 mC at 100 m. The unit uses an array of high purity germanium (HPGe) detectors for high-resolution spectral triggering of the sodium iodide based coded aperture imager.
- Mobile passive detection systems with a goal of detection at a distance, 1 mC at 100 m. The imager has a 2-dimensional active mask and backplane constructed from sodium iodide to provide coded aperture and Compton imaging capability.
- Development of a proof-of-concept system that integrated both gamma detection and muon tomography for the detection of radiological/nuclear material.
- Demonstrate the benefit of fusing target tracking data directly into radiation imaging algorithms by building a target-linked radiation imaging system that integrates a state-of-the-art video tracking system with advanced cadmium zinc telluride detector technology.
- Development of an active interrogation system for scanning cargo containers with a 9 MeV CW bremsstrahlung photon source capable of detecting high-Z material, utilizing prompt neutrons from photofission to detect the presence of fissionable material and nuclear resonance fluorescence to identify isotopic content.
- Development of an image processing algorithm to determine the most likely locations of contraband in radiographic images taken by nonintrusive inspection systems of cargo containers.
- Development of a pulsed neutron generator using the deuterium-deuterium reaction with a high average yield and pulse lengths varying from 100 microseconds to 2 microseconds with a fall time of less than 1 microsecond.
- Demonstrate the advanced technologies required to improve the ability to detect, localize, and identify radiological sources by integrating data from multiple portable radiation detectors. In the system, small networked detectors transmit radiation data and their location to a base station, which fuses data from all detectors and determines if detection has occurred. If a source has been detected, the system then uses the available information to locate and identify the source. This information is transmitted back to the user(s) in the array. The system takes into account directional detectors.
- Demonstrate the advanced technologies required to improve the ability to detect, localize, and identify radiological sources by integrating data from multiple portable radiation detectors. In this system, small networked detectors are coupled with a smartphone processor system. The smartphone is used to gather data from nearby neighbors and perform a local particle filter calculation. These results are passed to the user and the neighboring detectors. If a base station is employed, the results are sent to the base station.
- Demonstrate the advanced technologies required to improve the ability to detect, localize, and identify radiological sources by integrating data from multiple portable radiation detectors. In this system, small networked detectors transmit radiation data and their location to a base station, which uses a number of algorithms to determine if detection has occurred. The primary algorithm is a particle filter. If detection has occurred, the system attempts to locate and identify the source using a series of algorithms. This system employs an ultra-wideband networking method that allows 3-dimensional positioning in a Global Positioning System-denied environment.
- Container Security Initiative will take an existing simulation development tool, assess the feasibility of creating two virtual working detectors, create a new web-based interface where the detectors can be virtually controlled, and establish the web-delivery and administration software specifications for this new approach to training on these instruments.
- Propose an innovative approach that brings all of these elements together to develop simulation software that provides physically realistic and effective preventative radiation/nuclear detection training to first responders, by first adding radiation transport algorithms to an existing video game engine that will be used to generate training scenarios based on real locations, and then testing the accuracy of those simulations by comparing the virtual environment with data collected from real-world measurements.
- Development and characterization of portal-sized neutron detector modules consisting of layers of 6LiF/ZnS scintillating materials and wavelength-shifting fibers.
- Development of a next-generation X-ray generator that can deliver photon energies of 3, 6, and 9 MeV in discrete and interleaved modes at a repetition rate of 800-1000 Hertz in the X-band microwave frequency.
- Development of portal monitor replacements using boron-coated straw proportional counters embedded in a moderator.
- Develop and demonstrate novel radiological background characterization approaches that will improve the detection capability of both fixed, passive ASP systems and handheld and mobile isotope identifiers. Capability will be directly demonstrated by integrating these algorithms with RadSeeker detector technology. These algorithms will result in the ability to perform clutter suppression, yielding de-noised gamma-ray signatures of the significantly higher accuracy needed for detection and identification of low-activity threats.
- Development of large-diameter boron-coated straws embedded in a moderator as a cost-effective replacement for helium-3 in radiation portal monitors.
- Development of a smartphone application that DNDO calls RadMATE. The vision for RadMATE is to be available to operators in the field primarily to simplify and expedite Reachback, but additionally to provide needed information (e.g., tables to classify sources as innocent or of major concern) and operating procedures when a radiological source is encountered in the field. This will considerably minimize the burden on operators and eliminate the need for a specialized laptop computer with custom Reachback software. It will also ensure that Reachback communications are consistent, complete, and more accurate.
- Development of a smartphone application for radiological threat adjudication to support law enforcement and first responder adjudication of anomalous gamma ray spectra collected on handheld or personal radioisotope identification and spectroscopic personal radiation detector devices.
- Design, build, test, and evaluate an engineering prototype neutron generator for Am-Be replacement that is to scale and function to simulate ruggedness and suitability for borehole applications.
- Research and development to design, build, and demonstrate use of a portable, cost-efficient electron accelerator for non-intrusive inspection and verification applications.
- The idea is to apply large-scale optimization algorithms (branch and bound) to mixture analysis, use a Bayesian network to incorporate expert knowledge into optimization algorithms, and demonstrate measurable performance gains in RN ID for various detector materials.
- Developing a set of algorithms to locate high-Z material in radiography images.
- Study investigating the benefits of fast vs. thermal neutron signatures for detection of SNM in passive applications; will explore different types of detectors (high-pressure He-4, single crystal organic scintillators, and plastics), sources (Cf-252, gamma), backgrounds, and cargos (represented by moderators).
- The impact on the U.S. economy of changes in wait times at ports of entry.
- Analytical method to identify the number of containers to inspect at U.S. ports to deter terrorist attacks.
- Deterring the smuggling of nuclear weapons in container freight through detection and retaliation.
- This rapid response project adapted the center’s storm surge models to forecast oil landfall during the Deepwater Horizon oil spill in the Gulf Coast. The work was coordinated with the Federal Emergency Management Agency, the National Oceanic and Atmospheric Administration, and the U.S. Army Corps of Engineers.
- Coast Guard Search and Rescue Visual Analytics: interactive resource allocation tool for search and rescue missions and stations.
- Boat Allocation Module: operations research resource allocation capability for Coast Guard boat stations.
- Risk analysis for maritime traffic in the Delaware River.
- Report: targeted risk-based decision support tool for Coast Guard operations.
- This study has sought to develop new statistics that will significantly improve understanding of both the level of legal and illegal immigration through the United States and the specific demographic attributes of these individuals, and thus enable better-informed decision making regarding public services usage, criminal activity, and immigrant assimilation.
- Provides Office of Border Patrol recommendations for improved data collection, performance metrics, community impact assessment, and resource allocation tools. The project’s goal is to recommend a “standard” for border enforcement effectiveness based on a review of existing research, interviews with Border Patrol and other DHS personnel, and an assessment of the available data.
- The two primary objectives of this study include: (1) further understanding the organizational structure and sophistication of transnational criminal gangs and their capacity to facilitate mobility and migration through Mexico into the United States; and (2) further understanding the dynamic social networks of transnational criminal gangs and their capacity to facilitate mobility and migration through Mexico into the United States.
- The methods and tools offer approaches to allocating border security assets based on criteria of risk. In this project, we asked: What risk-based resource allocation approaches are most effective, and how does effectiveness depend on the operational environment?
- We have produced near-real-time sea ice imagery and sea ice concentration products. These products are critical aids to operating ships safely in regions where sea ice (and potentially marine mammals) represent hazards.
- Analysis of imagery and development of software for sensor information and anomaly detections.
- Software for constellation optimization was developed and evaluated.
- Randomized inspection scheduling tool for Coast Guard surveillance operations. This project was co-funded between OUP grant monies and RDC RDT&E funding. RDC is the USCG Project Manager and obtained support from OUP COEs using Basic Ordering Agreement task orders to develop and test the software prototype.
OUP contribution to software development is complete. However, USCG PROTECT software development and testing is ongoing.
Data were not provided by DHS at the sector/headquarters level; the Principal Investigator (PI) did not make progress in finding alternate testing data.
Discontinued by the Center for Island, Maritime, and Extreme Environment Security (CIMES) Director when antennas seemed unlikely candidates for transition to stakeholders.
Reason for discontinuation not provided.
Due to the nature of R&D and grant funding, individual project costs were not readily available.
In addition to the contact named above, Chris Currie (Assistant Director), Aditi Archer, Charlotte Gamble, and Gary Malavenda made key contributions to this report, and Michele Fejfar, Robert Fletcher, and Richard Hung assisted with design and methodology. Frances Cook provided legal support. Jessica Orr and Eric Hauswirth provided assistance in report preparation.
Conducting border and maritime R&D to develop technologies for detecting, preventing, and mitigating terrorist threats is vital to enhancing the security of the nation. S&T, the Coast Guard, and DNDO conduct these R&D activities, and S&T has responsibility for coordinating and integrating R&D activities across DHS. The Centers of Excellence are a network of university R&D centers that provide DHS with tools, expertise, and access to research facilities and laboratories, among other things. GAO was asked to review DHS's border and maritime R&D efforts. This report addresses (1) the results of DHS border and maritime security R&D efforts and the extent to which DHS has obtained and evaluated feedback on these efforts, and (2) the extent to which DHS coordinates its border and maritime R&D efforts internally and externally with other federal agencies and the private sector. GAO reviewed completed and ongoing R&D project information and documentation from fiscal years 2010 through 2013 and interviewed DHS component officials, among other actions. Between fiscal years 2010 and 2012, the Department of Homeland Security's (DHS) border and maritime research and development (R&D) components reported producing 97 R&D deliverables at an estimated cost of $177 million. The type of border and maritime R&D deliverables produced by DHS's Science and Technology (S&T) Directorate, the Coast Guard, and the Domestic Nuclear Detection Office (DNDO) varied, and R&D customers we met with reported mixed views on the impact of the R&D deliverables they received. These deliverables were wide-ranging in their cost and scale, and included knowledge products and reports, technology prototypes, and software. The Coast Guard and DNDO reported having processes in place to collect and evaluate feedback from their customers regarding the results of R&D deliverables.
However, S&T has not established timeframes and milestones for collecting and evaluating feedback from its customers on the extent to which the deliverables it provides to DHS components--such as U.S. Customs and Border Protection (CBP)--are meeting its customers' needs. Doing so could help S&T better determine the usefulness and impact of its R&D projects and deliverables and make better-informed decisions regarding future work. DHS has taken actions and is working to develop departmental policies to better define and coordinate R&D, but additional actions could strengthen internal and external coordination of border and maritime R&D. S&T's Borders and Maritime Security Division, the Coast Guard, and DNDO reported taking a range of actions to coordinate with their internal DHS customers to ensure, among other things, that R&D is addressing high-priority needs. However, work remains to be done at the agency level to ensure border and maritime R&D efforts are mutually reinforcing and are being directed towards the highest-priority needs. For example, officials from university centers of excellence reported difficulties in determining DHS headquarters contacts, and officials from the primary land-border security R&D center reported delayed and cancelled projects due to the inability to obtain data. DHS could help ensure that the approximately $3 million to $4 million a year dedicated to the university Centers of Excellence is used more effectively by more carefully considering potential challenges with regard to data needs, access issues, and data limitations before approving projects. GAO recommends that DHS S&T establish timeframes and milestones for collecting and evaluating feedback from its customers to determine the usefulness and impact of its R&D efforts, and ensure that potential challenges with regard to data reliability, accessibility, and availability are reviewed and understood before approving Centers of Excellence R&D projects.
DHS concurred with GAO's recommendations.
The National Park Service Organic Act of 1916 established the Park Service within the Department of the Interior to promote and regulate the use of the National Park System with the purpose of conserving the scenery, natural and historic objects, and wildlife therein and to leave them unimpaired for the enjoyment of future generations. Yellowstone National Park in Wyoming was the first national park, established in 1872, and the most recent as of this report—Katahdin Woods and Waters National Monument in Maine—was established August 24, 2016. The Park Service manages its responsibilities through its headquarters office located in Washington, D.C., seven regional offices, and 413 individual park units that are part of the system. Figure 1 shows the geographic areas that make up the Park Service’s seven regions. Park unit types include national scenic parks, such as Yellowstone and Great Smoky Mountains; national historical parks, such as Valley Forge, in Pennsylvania, and Lewis and Clark, in Oregon; national battlefields, such as Wilson’s Creek, in Missouri, and Fort Donelson, in Tennessee; national historic sites, such as Fort Bowie, in Arizona, and Theodore Roosevelt’s birthplace, in New York; national monuments, such as Muir Woods, in California, and Tule Springs Fossil Beds, in Nevada; national preserves, such as the Yukon-Charley Rivers, in Alaska, and Big Cypress, in Florida; national recreation areas, such as Lake Meredith, in Texas, and Whiskeytown, in California; and national lakeshores, such as Sleeping Bear Dunes, in Michigan, and the Apostle Islands, in Wisconsin. Visitation levels reached an all-time high in 2015, when more than 307 million people visited park units. This is an increase of more than 14 million visitors from 2014. Park Service officials said that they expect visitation to rise again in 2016, with the celebration of the Park Service centennial. The Park Service generally receives funding through annual appropriations acts. 
These appropriated funds include base funding for the operation of park units and for Park Service-wide programs, such as funding for visitor services, park protection, and maintenance projects. They also include funding for technical and financial assistance programs that support resource preservation and recreation outside of the national park system. The Park Service also collects and uses funds from fees, donations, and other funding sources. Total funding for the Park Service increased by about $0.6 billion, or 22 percent, from $2.65 billion in fiscal year 2006 to nearly $3.25 billion in fiscal year 2015. However, when adjusted for inflation, total funding for the Park Service increased by only $160 million in fiscal year 2015 dollars, or 5 percent, during this 10-year period. During this time, the number of park units in the system grew from 390 in 2006 to 413 as of October 2016. Some Park Service officials said that this increase in park units meant that the agency's appropriations had to be divided among an increasing number of units. The Park Service defines deferred maintenance as maintenance that was not performed when it should have been or was scheduled to be and is delayed for a future period. Deferred maintenance includes maintenance within national park units as well as maintenance related to other properties under Park Service jurisdiction, such as Park Service regional offices. According to the Park Service, maintenance funding has not kept pace with agency needs for several years. In general, maintenance needs are almost double the annual funding, which leads to an annual increase in deferred maintenance. As maintenance work is identified and is not completed because of limited resources, deferred maintenance increases. The Park Service defines an asset as real property that the agency tracks and manages as a distinct identifiable entity.
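The nominal and inflation-adjusted growth figures above can be cross-checked with a short calculation. In this sketch, the cumulative price deflator is backed out from the figures reported in the text rather than taken from an official price index, so it is an implied value, not an official one.

```python
# Cross-check of reported Park Service funding growth, FY2006-FY2015.
# Dollar figures (billions) are from the text; the deflator is implied
# by those figures, not drawn from an official price index.
nominal_2006 = 2.65   # total funding, FY2006
nominal_2015 = 3.25   # total funding, FY2015
real_increase = 0.16  # reported increase measured in FY2015 dollars

# Nominal growth: about 22 percent.
nominal_growth_pct = (nominal_2015 - nominal_2006) / nominal_2006 * 100

# FY2006 funding restated in FY2015 dollars, as implied by the report.
real_2006 = nominal_2015 - real_increase           # about 3.09 billion
implied_deflator = real_2006 / nominal_2006        # about 1.17 cumulative
real_growth_pct = real_increase / real_2006 * 100  # about 5 percent

print(f"nominal growth: {nominal_growth_pct:.1f}%")
print(f"real growth: {real_growth_pct:.1f}%")
```

The roughly 17 percent gap between the two growth rates is the cumulative inflation implied over the 10-year period.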
These entities may be physical structures or groupings of structures, landscapes, or other tangible properties that have a specific service or function, such as cemeteries, campgrounds, marinas, or sewage treatment plants. Maintenance can range from work needed for visible assets, such as buildings, roads, and trails, to less visible needs, such as water and sewage systems. Many of these assets were constructed decades or hundreds of years ago. For example, the walls lining Skyline Drive in Shenandoah National Park, in Virginia, and some of the park's buildings were constructed by the Civilian Conservation Corps, a program created in 1933 by President Franklin Roosevelt to help generate jobs and improve the condition of the country's natural resources. A number of Park Service facilities date back half a century to the Mission 66 program. From 1956 through 1966, Congress appropriated more than $1 billion for Mission 66 improvements, which included updated facilities for hundreds of visitor centers and employee residences, as well as employee training centers at Harpers Ferry, West Virginia, and the Grand Canyon, in Arizona. Many of the structures built through these programs as well as other efforts are coming to the end of their anticipated life spans and are in need of rehabilitation, repair, replacement, or disposal, according to various documents we reviewed. In 1998, we examined the Park Service's deferred maintenance and found, among other things, that the agency did not have an accurate estimate of its total deferred maintenance and a means for tracking progress so that it could determine the extent to which its needs are being met. We also found that the Park Service was beginning a number of initiatives to better manage its maintenance and construction program, including developing a plan to prioritize projects.
Specifically, in 1998, the Park Service began designing a new asset management process that among other things was to provide the agency with a systematic method for documenting deferred maintenance needs and tracking progress in reducing the amount of deferred maintenance. In 2003, we testified that the Park Service had made progress in developing its new asset management process. In February 2004, Executive Order 13327 recognized the need to promote the efficient and economical use of federal real property assets. The order directed federal agencies to develop and implement asset management planning processes and develop asset management plans. The order also created the Federal Real Property Council and directed the council to develop guidance for each agency’s asset management plan, among other things. In developing asset management plans, the agencies were directed to take several actions, including identifying and categorizing all of their assets and prioritizing actions to improve the operational and financial management of these assets. In fiscal years 2006 through 2015, the Park Service allocated $1.16 billion on average to operate and maintain the agency’s assets. Most recently, in fiscal year 2015, the Park Service allocated $1.08 billion to maintenance, which was about one-third of the total funding the agency received that year. As shown in figure 2, the Park Service’s annual maintenance allocations varied little during this period except in fiscal year 2009, when the agency also used American Recovery and Reinvestment Act funds to carry out maintenance work. The Park Service allocates funds for maintenance in four broad budget categories—operations, construction, recreation fees, and transportation—according to what the agency refers to as fund sources, which generally describe the type of maintenance work being done with those funds. 
Figure 3 shows that maintenance allocations to operations, recreation fees, and transportation have remained fairly stable in fiscal years 2006 through 2015, with the exception of 2009, when the agency also used American Recovery and Reinvestment Act funds to carry out maintenance work. As shown in table 1, the Park Service uses eight fund sources within these broad budget categories to track allocations for different types of maintenance. The types of projects eligible for different fund sources vary. For example, cyclic maintenance projects are preventive in nature, that is, projects intended to prevent growth of deferred maintenance. Officials at Manassas National Battlefield Park said that they received cyclic maintenance funds to do maintenance work on the roofs of several historic buildings, which helps prevent further damage to the interior walls, ceilings, and other parts of the buildings. Line item construction projects are generally larger in scope and expense. For example, an official at Yellowstone National Park said that in January 2016 park staff had proposed relocating a dormitory near the Old Faithful geyser to a safer location away from the release of harmful gases, at an estimated cost of $9.9 million. As shown in figure 4, the Park Service allocated the largest amounts of funds to facility operations ($328.9 million, or 30.3 percent) and transportation ($240 million, or 22.1 percent) in fiscal year 2015, and less than $100 million to the other types of maintenance work described by the remaining six fund sources. Table 2 shows that allocations have generally increased from fiscal year 2006 through 2015 for such fund sources as facility operations and recreation fees for routine maintenance. In contrast, allocations have generally decreased for other fund sources, such as line item construction, for the same time period. The Park Service’s deferred maintenance averaged about $11.3 billion from fiscal year 2009 through fiscal year 2015. 
In each of those years, deferred maintenance for paved roads made up the largest share of the agency's deferred maintenance. The sum of deferred maintenance for assets in the other categories used by the Park Service generally declined from fiscal year 2009 through fiscal year 2015. Also, in fiscal year 2015, deferred maintenance varied broadly among other characteristics, such as asset priority, category of asset, when the park unit was established, and region. The Park Service's deferred maintenance averaged about $11.3 billion in nominal dollars from fiscal year 2009 through fiscal year 2015. During that time, deferred maintenance in nominal dollars generally increased from about $10.2 billion in fiscal year 2009 to about $11.9 billion in fiscal year 2015, as shown in figure 5. Overall, the Park Service's deferred maintenance in nominal dollars grew, on average, about 3 percent per year from fiscal year 2009 through fiscal year 2015. The Park Service reported that it had a portfolio of 75,526 assets at the end of fiscal year 2015, which the agency has organized into the following nine categories:
Buildings, which includes structures such as visitor centers, offices, and comfort stations.
Campgrounds.
Housing, which includes Park Service and Department of the Interior employee housing and associated buildings, such as detached garages, shower and laundry facilities, and storage.
Paved roads, which includes bridges, tunnels, paved parking areas, and paved roadways.
Trails, which includes hiking trails.
Unpaved roads, which includes unpaved parking areas and unpaved roadways.
Water systems, which includes potable and nonpotable water systems.
Waste water systems, which includes structures such as sanitary sewers and stormwater systems.
All others, which includes other utility systems, dams, constructed waterways, marinas, aviation systems, railroads, ships, monuments, fortifications, towers, interpretive media, amphitheaters, and other structures that did not fall into the other eight asset categories. Deferred maintenance for paved roads was consistently the largest category of the Park Service's deferred maintenance from fiscal year 2009 through fiscal year 2015. On average, deferred maintenance for paved roads made up about 44 percent of the Park Service's total deferred maintenance from fiscal year 2009 to fiscal year 2015 in both nominal and inflation-adjusted dollars, and it generally grew—from about $3.4 billion in fiscal year 2009 to about $6.0 billion in fiscal year 2015 (or, from $3.8 billion to $6.0 billion in fiscal year 2015 dollars). Overall, the sum of deferred maintenance for assets in the other eight categories generally declined—from about $6.8 billion to about $6.0 billion from fiscal year 2009 through fiscal year 2015 (or, from about $7.4 billion to about $6.0 billion in fiscal year 2015 dollars). However, within this group, deferred maintenance for some asset categories increased over the period. For example, deferred maintenance for water systems generally increased—from about $330 million in fiscal year 2009 to about $422 million in fiscal year 2015 (or, from about $361 million to $422 million in fiscal year 2015 dollars). Figure 6 shows the amount of deferred maintenance for each asset category over this period. The Park Service's $11.9 billion in deferred maintenance in fiscal year 2015 varied by priority, asset category, park age, and region. About 20 percent ($2.4 billion) of the agency's deferred maintenance in fiscal year 2015 was for what the Park Service identified as its highest priority, non-transportation assets.
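The growth rates cited above can be reproduced with a compound-annual-growth calculation over the six year-to-year intervals between fiscal years 2009 and 2015. This is a sketch; the dollar figures are the rounded nominal amounts from the text.

```python
# Compound annual growth rate (CAGR) between two values n years apart.
def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

# FY2009 -> FY2015 spans six annual intervals. Amounts in $ billions.
total_growth = cagr(10.2, 11.9, 6)   # total deferred maintenance
paved_growth = cagr(3.4, 6.0, 6)     # paved-roads deferred maintenance

print(f"total: {total_growth:.1%} per year")        # about 3 percent
print(f"paved roads: {paved_growth:.1%} per year")
```

The calculation shows why paved roads drove the overall trend: their deferred maintenance grew nearly 10 percent per year even as the total grew only about 3 percent.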
According to Park Service documents, the agency’s highest-priority assets are those that are critical to the operations and missions of their respective park units or have high visitor use. For example, the Park Service has identified a potable water distribution system at Grand Canyon National Park and a seawall at West Potomac Park located in the National Mall in Washington, D.C., as among the agency’s highest priority, non-transportation assets with some of the largest amounts of deferred maintenance—both with more than $50 million for fiscal year 2015. Nearly $6 billion (about 50 percent) of the Park Service’s deferred maintenance in fiscal year 2015 was associated with paved roads. As shown in figure 7, the all others category was the next largest in terms of the dollar amount of deferred maintenance, at about $2.4 billion (about 20 percent). In terms of the number of assets, buildings was the largest category in fiscal year 2015, accounting for about 25,000 of the Park Service’s more than 75,000 assets (about 33 percent), followed by all others with about 18,000 assets (about 24 percent) and paved roads with about 12,000 assets (about 16 percent), as shown in figure 8. The majority of the Park Service’s deferred maintenance in fiscal year 2015 was for assets in park units that were established more than 40 years ago. Specifically, about $10.5 billion in deferred maintenance was for park units established more than 40 years ago. Of these, park units established more than 100 years ago had the greatest amount of fiscal year 2015 deferred maintenance—more than $3.8 billion—as shown in figure 9. This includes parks such as the National Mall in Washington, D.C., with about $840 million; Yellowstone National Park, in Idaho, Montana, and Wyoming, with about $632 million; and Yosemite National Park, in California, with about $555 million. For assets in parks established in the last 40 years, deferred maintenance in fiscal year 2015 was about $1.0 billion. 
See appendix II for a listing of the top 100 park units in terms of deferred maintenance amounts. About $2.7 billion of the agency’s deferred maintenance is associated with parks located in the Pacific West Region. As shown in figure 10, four Park Service regions each had more than $1.8 billion in deferred maintenance, while the Midwest Region ($434 million) and the Alaska Region ($115 million) each had deferred maintenance well below $1 billion. The Park Service uses several tools to rate an asset’s importance and condition and assign maintenance priority to its assets. Park unit staff update asset condition information through periodic assessments and use that information to create work orders to address identified deficiencies, but they face some challenges in completing these tasks. Park unit staff combine these work orders each year to generate projects to address the deficiencies and identify fund sources for the various maintenance projects. Once projects are identified, park unit staff use the Park Service’s Capital Investment Strategy to rank maintenance projects for funding decisions. However, the Park Service has not evaluated its process for making asset maintenance decisions to determine if it is achieving intended outcomes. To assign maintenance priority to an asset, the Park Service uses two tools to rate an asset’s importance and condition—the asset priority index (API) and facility condition index (FCI)—both of which are consistent with asset management guidance from the Office of Management and Budget and the National Academies Federal Facilities Council. Park staff use the ratio of API to FCI to assign assets to a level of maintenance priority, called an optimizer band, and document how these calculations were made in Park Asset Management Plans. API identifies the relative importance of the various assets at a park. 
To do this, park unit staff use four weighted criteria to determine an asset's API value on a scale from 1 to 100, in which assets scoring 100 are most important. The criteria follow:
Resource preservation. This criterion identifies whether the asset directly contributes to a park's ability to preserve natural resource processes, is a cultural asset, or enhances a park's ability to preserve and protect its cultural resources. This criterion determines 35 percent of an asset's API.
Visitor use. This criterion identifies the extent to which the asset contributes to visitor accessibility, understanding, and enjoyment. Assets are rated as high, medium, low, or none, and the score contributes to 25 percent of an asset's API.
Park support. This criterion considers the extent to which an asset directly supports day-to-day operations of a park unit or employees' ability to perform park operations. Assets are also rated as high, medium, low, or none under this criterion, and it contributes to 20 percent of an asset's API.
Asset substitutability. This criterion refers to the degree to which a comparable substitute asset exists to fulfill the functional requirements or purpose of that asset. To rate this criterion, park unit staff consider the question "if this asset is lost, what would be the impact," and answer high impact, low or no impact, or that there is no substitute for the asset. This criterion determines 20 percent of an asset's API.
Park unit staff establish values for these criteria by answering a series of questions about each asset that are included in guidance provided by the Park Service. For example, one question used to establish an asset's visitor use value is whether the asset provides access to, houses, or delivers visitor understanding through education. The total API for each asset is recorded in the agency's FMSS.
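The weighted-criteria calculation above can be sketched as a simple weighted sum. The criterion weights are those given in the report; the numeric point values assigned to the high/medium/low/none ratings are illustrative assumptions, since the report does not publish the underlying scale.

```python
# Sketch of an API-style weighted score on a 0-100 scale.
# Weights come from the report; RATING_POINTS values are assumed for
# illustration and are not the Park Service's actual point scale.
API_WEIGHTS = {
    "resource_preservation": 0.35,
    "visitor_use": 0.25,
    "park_support": 0.20,
    "asset_substitutability": 0.20,
}

RATING_POINTS = {"high": 100, "medium": 67, "low": 33, "none": 0}  # assumed

def api_score(ratings):
    """Weighted sum of criterion ratings, yielding a 0-100 score."""
    return sum(API_WEIGHTS[c] * RATING_POINTS[r] for c, r in ratings.items())

# Hypothetical visitor center: important for preservation and visitors,
# moderately important for park support, with no substitute available.
example = {
    "resource_preservation": "high",
    "visitor_use": "high",
    "park_support": "medium",
    "asset_substitutability": "high",
}
print(api_score(example))
```

Because the four weights sum to 1, an asset rated "high" on every criterion reaches the maximum score of 100.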
Park Service officials said that an asset’s API should not change unless something substantial occurs, such as an asset being destroyed in a weather event or being taken out of service because it is no longer needed. If park unit staff determine that an asset’s API value should be changed, regional approval is needed to make the change. The Park Service’s use of API as part of its process for making asset maintenance decisions is consistent with the Office of Management and Budget’s Capital Programming Guide, which states that the use of tools such as API helps managers identify the most important assets and provides logical guidance for directing limited funding. In addition, the guide notes that API is important for planning for recurring maintenance and preventive maintenance. FCI is a method of measuring the current condition of an asset to assess how much work, if any, is recommended to maintain or change its condition to acceptable levels to support organizational missions. The Park Service uses FCI to rate the condition of an asset on a scale from 0 to 1. It is calculated by dividing the deferred maintenance associated with an asset by its current replacement value, and the lower the asset FCI value, the better the condition of the asset. For example, a new asset would likely have little or no deferred maintenance associated with it and therefore have a low FCI. Park unit staff record the projected cost of repairs and current replacement value for each asset in FMSS and update those values when appropriate. To calculate the projected cost of an asset’s repair, park unit staff use the Park Service’s cost estimating software system. The system bases calculations on industry standard tools, materials, and methods according to data from a North American supplier of construction cost information. 
In addition, the Park Service instructs staff to consider adjustments as needed, such as when requirements for historically accurate materials or construction methods are required. For example, according to a maintenance official at Independence National Historic Park in Philadelphia, repairs to a deteriorated rain gutter on Independence Hall were more complex and expensive than similar work on modern or nonhistoric buildings. A failed section of the gutter’s downspout had to be replaced with historically accurate materials in order to meet cultural resource preservation standards. To avoid damaging historic building fabric nearby, the new section of pipe was soldered into place with unique equipment that did not use a flame. Park unit staff calculate the current replacement value of an asset using the Park Service’s current replacement value calculator. According to Department of the Interior policy, current replacement value is defined as the standard industry cost and engineering estimate of materials, supplies, and labor required to replace an asset at its existing size and functional capability, and to meet applicable regulatory codes. The Park Service’s use of FCI as part of its process for making asset maintenance decisions is consistent with the Federal Real Property Council’s Guidance for Real Property Inventory Reporting, which identified this type of condition index as a performance measure. In addition, in 2005, the National Academies Federal Facilities Council found that many agencies use FCI to measure the current condition of assets to assess how much work, if any, is recommended to maintain or change their condition to acceptable levels to support organizational missions. The Park Service uses the ratio of an asset’s API and FCI to assign the asset to an optimizer band, which is used to determine the priority level for maintenance. 
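The FCI calculation described above, and the API-to-FCI ratio used for band assignment, reduce to simple arithmetic. In this sketch the dollar amounts and the API value are hypothetical, and no band thresholds are assumed because the report does not publish the ratio cutoffs that map to specific bands.

```python
# Facility condition index (FCI): deferred maintenance divided by
# current replacement value. Lower FCI means better condition; a new
# asset with no deferred maintenance has an FCI near 0.
def fci(deferred_maintenance, current_replacement_value):
    return deferred_maintenance / current_replacement_value

# Hypothetical asset: $50,000 of deferred repairs on a structure that
# would cost $1,000,000 to replace.
condition = fci(50_000, 1_000_000)  # 0.05

# Per the report, staff use the ratio of API to FCI when assigning an
# asset to an optimizer band; the API value of 85 here is hypothetical.
priority_ratio = 85 / condition

print(condition, priority_ratio)
```
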
The agency began using optimizer bands in 2012 to help determine which projects would obtain project funds and ensure that limited funds are allocated to the most important assets. According to Park Service guidance, optimizer bands act as a triage framework for allocating limited funds. For example, optimizer band 1 assets are the highest-priority assets. The agency defines them as critical to the operations and mission of a park unit or as having high visitor use. They are to be considered first for funding to keep them in good condition. In contrast, optimizer band 5 assets are the lowest-priority assets. The agency does not need them for the operations and mission of the park, and many of these assets may be candidates for disposal. To ensure that the highest-priority assets are maintained to the greatest extent possible, the Park Service established minimum levels of funding park units are to allocate for preventive maintenance. Specifically, park units are to use funds from the operations budget category to address a minimum of 55 percent of the preventive maintenance work needed to maintain optimizer band 1 assets in good condition. The minimum levels of funding for optimizer bands 2 and 3 are 50 and 25 percent, respectively, and there are no minimum levels of funding for bands 4 and 5. Park unit staff have some flexibility in assigning assets to optimizer bands. Park Service officials said that park units may reassign an asset to a different optimizer band, but that these changes are to be approved by regional officials. Officials we interviewed at some of the park units said that they had changed optimizer bands for some assets. For example, one park unit changed the optimizer band of the building where park unit staff work from optimizer band 3 to optimizer band 2. 
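The band-based triage above implies a simple funding floor per band. The mapping below restates the minimum preventive-maintenance funding levels given in the text; the helper function is a hypothetical illustration of applying them.

```python
# Minimum share of needed preventive maintenance that park units are to
# fund from the operations budget category, by optimizer band (from the
# report; bands 4 and 5 have no minimum).
MIN_FUNDING_SHARE = {1: 0.55, 2: 0.50, 3: 0.25, 4: 0.0, 5: 0.0}

def min_preventive_funding(band, preventive_need_dollars):
    """Dollar floor for preventive maintenance on an asset in a band."""
    return MIN_FUNDING_SHARE[band] * preventive_need_dollars

# A band-1 asset needing $100,000 of preventive maintenance must receive
# at least $55,000 from the operations budget category.
print(min_preventive_funding(1, 100_000))
```

The declining floors (55, 50, 25, 0, 0 percent) encode the triage logic: when funds run short, the lowest bands absorb the shortfall first.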
Officials at this park unit said that the building is vital to the park because it is the only building space in or near the park that staff can use to perform the administrative duties required for managing a park unit, including maintaining FMSS. However, the building had been assigned to optimizer band 3 because it has no visitor use, which meant that the asset was not a priority for maintenance funding and therefore difficult to keep in acceptable condition. Officials at other park units identified reasons to change the optimizer band levels. For example, officials at two park units noted that the quality of housing can be poor because of low maintenance priority, which can affect both the ability of staff to do their jobs well and the visitor experience. According to Park Service officials, housing can also deteriorate past the point of acceptable living conditions, at which point park units would no longer be able to use those assets to house employees. Park unit staff report asset optimizer bands, as well as API and FCI, in their Park Asset Management Plans. Many of the park unit officials we interviewed said that they had most recently established these values for their assets within the last 7 years, often as part of updating the park unit’s Park Asset Management Plan. According to Park Service asset management guidance, a Park Asset Management Plan is a strategic and operational plan that park units are to develop to articulate how the park unit intends to manage its asset portfolio over a 10-year period based on the analysis of asset data. Park Service officials said that park units use them to assess all of their assets and determine the amount of funds needed to maintain assets in good condition. 
The Office of Management and Budget’s 2015 Capital Planning Guide does not use the term optimizer band but notes that graphical representations of a distribution of assets graphed by their importance to mission and their condition can be a useful tool in segmenting and presenting asset portfolios. Specifically, by plotting an asset according to API and FCI, an agency can determine when an asset no longer supports the mission of the site or bureau or is a candidate for disposal because it has a low API and high FCI. The Park Service’s asset management plan instructs park units to determine the condition of park assets through annual condition assessments—high-level inspections that identify obvious and apparent deficiencies—and comprehensive condition assessments—more detailed assessments of assets performed every 5 years. The plan also instructs park unit staff to record condition assessment information in FMSS and update the projected cost of repairs of the asset. Officials we interviewed at several of the park units said that for annual assessments staff regularly visually inspect assets during the normal course of business, such as opening buildings for seasonal use or performing maintenance on nearby assets. Some officials also said that comprehensive assessments are either performed by park unit staff with expertise or contractors. For example, a regional official said that park units in the region hired contractors to inspect sewer lines as part of a comprehensive assessment, since they do not have in-house expertise to do so. The Park Service provides guidance to park unit staff on how to perform comprehensive assessments for each of the asset categories. Officials we interviewed at several park units said that they organize comprehensive condition assessments to account for 20 percent of park assets annually, so that they can complete a comprehensive condition assessment for all park assets within a 5-year period. 
Officials we interviewed at more than half of the park units said that they were unable to complete annual or comprehensive assessments of all assets on schedule because of other duties or scheduling challenges. Specifically, staff who are to conduct the assessments perform other duties, such as overseeing asset maintenance and entering and maintaining asset data in FMSS. Park unit officials also said that Park Service headquarters makes frequent data requests—for example, for electric utility metering data, or for verification of square footage values in FMSS—and that these data requests can interfere with park unit staff’s ability to complete tasks, including condition assessments, on time. In addition, some park units are located remotely or in challenging climates, making it difficult to inspect all assets on the recommended schedule. For example, officials we interviewed at three park units said that they had to hike or fly to certain assets because they are located remotely or the assets are inaccessible in the winter because of snow; however, winter might be the only time the park unit had staff available to conduct assessments. The Department of Transportation conducts condition assessments of the Park Service's paved roads and bridges. According to Department of Transportation officials, the department typically conducts condition assessments of major paved roads—thoroughfares in large parks with more than 10 miles of roadway—within each park unit once every 5 years and of secondary paved roads once every 10 years. In addition, the department conducts condition assessments of all bridges within each park unit once every 2 years in accordance with National Bridge Inspection Standards. Specifically, six to eight Department of Transportation staff, working in teams of two, drive vehicles with special equipment that can assess the condition of the pavement along park roads. 
Department of Transportation officials estimate that these teams have assessed about 5,900 road miles in the last 4 years; by comparison, the Park Service has 5,500 miles of paved roads. Once a road is assessed, the Department of Transportation provides the condition data to relevant park unit staff, who enter the data into FMSS and determine if a work order is needed. Based on the condition assessments, park unit staff create one or more work orders in FMSS that document an asset’s deficiencies. They, in turn, combine work orders to generate projects to conduct the maintenance work needed to address identified deficiencies. Specifically, according to Park Service officials, staff bundle a series of work orders to address multiple deficiencies—such as replacing a door, painting a wall, or fixing a roof—in one building as part of the same project. Work orders generally contain a basic description of the work needed and an estimate of the material and labor costs, among other things. Park units submit these maintenance projects annually to the regional offices as part of an agency-wide call for projects, which marks the beginning of the Park Service’s budget formulation cycle. Park Service headquarters officials provide guidance to help park units identify which fund sources can be used for a project, among other things. The staff may also choose to address a deficiency directly, using the park unit’s facility operations funds rather than applying for project funds. Some park unit officials we interviewed said that they typically do this for maintenance work that is routine in nature, such as groundskeeping. The Park Service’s most recent agency-wide call for projects was for projects to be funded in fiscal year 2019, and it directed park units and regional offices to have projects ready for review by headquarters by April 3, 2017. 
As part of submitting these projects, park unit staff identify which of the Park Service’s fund sources would be appropriate to use for the various maintenance projects. To identify the appropriate fund source, park unit staff use information about the nature of the maintenance work needed as identified in the project’s work orders, as well as the annual fund source guidance that the Park Service provides. The guidance for each fund source includes questions about the projects to help determine if the project is eligible for a particular fund source. For example, to obtain cyclic funds for a project, fund source guidance has directed park unit staff to explain how the project supports or extends the life cycle of the asset or how funding the project will positively affect visitor health and safety, among other things. Park unit staff also enter projects into the agency’s Project Management Information System. All projects entered into this system then compete for funds in the region, or nationwide, depending on the type of project and fund source. Since 2012, the Park Service has used its Capital Investment Strategy to evaluate and rank maintenance projects for funding. Agency officials stated that the Capital Investment Strategy was created to help ensure that park units do not allow assets to fall into a severe state of disrepair before repairing them. According to the Park Service’s Capital Investment Strategy Guidebook, the strategy is designed to promote several of the agency’s mission goals, including the repair and improvement of assets that parks commit to maintain in good condition and the disposition of nonessential facilities to reduce deferred maintenance. In addition, one of the strategy’s objectives is to enable the Park Service to demonstrate to Congress and others that the agency optimizes taxpayer dollars to preserve high-priority assets. 
To meet this objective, the Capital Investment Strategy uses a formula based on asset information to score projects and gives preference to projects that address assets in optimizer bands 1 or 2. As part of the agency-wide call for projects, park unit staff use the Capital Investment Strategy to score projects that will be funded by the cyclic maintenance, repair and rehabilitation, line item construction, Federal Lands Transportation Program, and recreation fees fund sources in the Project Management Information System. The formula used in the Capital Investment Strategy scores projects from 1 to 1,000 by, in part, individually evaluating each work order in a project according to FMSS data in four elements: financial sustainability, resource protection, visitor use, and health and safety. For example, the visitor use element considers investment in assets that directly enable outdoor recreation as well as interpretive media. The most points within this element are awarded to those projects that improve and sustain the experience of the greatest number of visitors. Projects are scored higher if they target optimizer band 1 or 2 assets for deferred maintenance reduction and optimizer band 5 assets for disposition. Park Service officials, either at the regional office or headquarters, review and approve projects for funding based on the fund source guidance provided as part of the agency-wide call for projects. For all fund sources except for line item construction, Park Service regional officials determine which maintenance projects are to receive funding by convening expert panels, which review the scored projects provided by the park units in their regions. For maintenance projects associated with the repair and rehabilitation fund source that are estimated to cost less than $1 million, the Park Service convenes a nationwide panel of experts to determine which will be funded. 
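As a rough illustration of the scoring mechanics described above, the following sketch rates a project on the four elements and applies a preference for optimizer band 1 and 2 assets (and band 5 disposals). The equal element weights, the 0-to-1 ratings, and the 1.25 preference multiplier are hypothetical stand-ins; the actual formula evaluates each work order in a project individually against FMSS data:

```python
# Hypothetical sketch of a Capital Investment Strategy-style score.
# Weights, ratings, and the band preference multiplier are
# illustrative assumptions, not the agency's actual formula.

ELEMENT_WEIGHTS = {
    "financial_sustainability": 250,
    "resource_protection": 250,
    "visitor_use": 250,
    "health_and_safety": 250,
}

def score_project(ratings: dict, optimizer_band: int,
                  is_disposal: bool = False) -> float:
    """Score a project on a 0-1,000 scale from 0.0-1.0 ratings on
    the four elements, favoring deferred-maintenance work on band
    1-2 assets and the disposition of band 5 assets."""
    base = sum(ELEMENT_WEIGHTS[e] * ratings[e] for e in ELEMENT_WEIGHTS)
    if optimizer_band in (1, 2) or (optimizer_band == 5 and is_disposal):
        base *= 1.25  # assumed preference multiplier
    return min(base, 1000.0)

ratings = {"financial_sustainability": 0.8, "resource_protection": 0.6,
           "visitor_use": 0.9, "health_and_safety": 0.7}
print(score_project(ratings, optimizer_band=1))  # 937.5
print(score_project(ratings, optimizer_band=3))  # 750.0
```

The same ratings thus produce a higher score when the project targets a band 1 or 2 asset, which is the preference the strategy is designed to express.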
For line item construction projects estimated at more than $1 million, Park Service headquarters staff review and select which projects will receive funding. The Park Service identifies the line item construction projects the agency wants to fund by name, description, estimated cost, and project score in its annual budget justification submission to Congress. Because fiscal year 2015 was the first budget year in which projects ranked using the strategy were funded, some regional and park unit officials said that it is too soon to determine if the Capital Investment Strategy is meeting its objectives, such as maintaining the condition of the agency's high-priority assets. Officials we interviewed at more than half of the park units said that the Capital Investment Strategy so far has helped them identify their park units’ most important maintenance needs. However, several regional and park unit officials said that the Capital Investment Strategy’s focus on optimizer band 1 and 2 assets could result in continued deterioration of assets in other optimizer bands, leading to increased deferred maintenance. The Park Service does not have a plan or time frame for evaluating whether the strategy has been successful. A senior official said that the agency had not determined what is needed to begin such an evaluation and that it would be beneficial to verify that the Capital Investment Strategy is achieving intended outcomes and whether changes need to be made. According to the National Academies Federal Facilities Council, the outcomes of investments made in assets are often not immediately visible or measurable but become manifest over a period of years, and it is important that agencies track the outcomes of those investments to improve decision making and asset management. 
Moreover, according to the council, to understand the outcomes of facilities investments, federal agencies need to establish facilities asset management performance goals that have a time frame for attainment, among other things. By evaluating the Capital Investment Strategy and its results after it has been in place for a few years, the Park Service may be able to determine if the strategy is achieving its intended outcomes or if changes need to be made. For example, the agency could consider evaluating the improvement or deterioration in the overall condition of assets in each optimizer band to determine whether the agency should continue to prioritize allocations to maintenance on optimizer band 1 and 2 assets. The Park Service is taking a variety of actions to help address asset maintenance needs and potentially reduce deferred maintenance. These actions include the following: Using philanthropic donations. The Park Service receives donations from several philanthropic sources to enhance park assets and, in some cases, address maintenance needs. For example, the National Park Foundation intends to raise up to $350 million to support the Park Service as part of its Centennial Campaign for America’s National Parks. The foundation reported in February 2016 that it had raised about $200 million toward this goal. Some of these funds are to be used to address asset maintenance, such as repairing trails at Jenny Lake in Grand Teton National Park in Wyoming and rehabilitating Constitution Gardens, part of the National Mall, in Washington, D.C. Stemming from the National Park Foundation’s efforts, in July of 2014 the Park Service announced a donation of about $12 million to restore and improve access to Arlington House, the Robert E. Lee Memorial, which is located in Arlington National Cemetery. In addition, philanthropic funds are available through the Centennial Challenge program. From fiscal years 2015 through 2016, Congress appropriated $25 million for this program. 
For projects funded by this program, at least 50 percent of the costs must come from nonfederal donations. According to Park Service documents, the agency has selected more than 150 projects to be funded by the program, which as of October 20, 2016, had received more than $45 million in matching funds from philanthropic donors. Some of the projects are to directly address deferred maintenance, such as a project at the Chesapeake and Ohio Canal National Historical Park that will rehabilitate the Conococheague Aqueduct in Maryland. Working with volunteers. Volunteer groups are providing assistance to several parks to help address asset maintenance needs. For example, at the Great Smoky Mountains National Park, in North Carolina and Tennessee, a local volunteer group maintains several of the park’s trails. Park unit officials said this arrangement has reduced the deferred maintenance for the park’s trails by about $1 million since 2009. Officials at some of the park units we interviewed said volunteer groups perform a variety of maintenance duties that help address deferred maintenance, including grounds and facilities cleanup, clearing roadways of vegetation, and campground maintenance. Leasing properties. Several park units are leasing assets to other parties in exchange for the lessee rehabilitating or maintaining the assets. According to Park Service documents, all net income from such leases is to be reinvested to fund historic preservation, capital improvements of historic properties, park infrastructure, or any deferred maintenance needs. For example, the Park Service leases several buildings at the Golden Gate National Recreation Area in San Francisco. The Park Service also leases several historic buildings at Hot Springs National Park in Arkansas. Park unit officials said that the buildings at Hot Springs were in serious disrepair prior to being leased. 
The officials said that the lessees are to repair and rehabilitate the structures in lieu of rent for the first several years of their lease terms, thereby reducing the park’s deferred maintenance. Officials at some of the park units we interviewed said that they were considering implementing leasing programs, in part, to help reduce their deferred maintenance. To help facilitate leasing, the Park Service hired a national leasing manager in 2015 to formalize its leasing program, and some park units and regions have developed active leasing programs. Engaging partners. The Park Service is engaged in partnerships in which outside organizations assume some asset maintenance responsibilities. For example, the Park Service entered into a partnership with a nonprofit organization to operate and maintain the visitor center associated with Independence National Historical Park in Philadelphia. In this case, the Park Service owns the visitor center and contributes funds—about $800,000 annually—to cover some of its basic operating costs, but the nonprofit organization covers the majority of the facility’s operating and maintenance costs. Park unit officials said that the 30-year agreement with the nonprofit organization provides the park with a modern visitor center that maintenance staff do not have to physically maintain, and provides a location where rangers can be stationed to answer questions about the park. Entering into other arrangements. The Park Service is taking steps to reduce, or in some cases eliminate, the need to allocate maintenance funds to some park assets by entering into arrangements with other entities to manage those assets. For example, officials at two of the park units we interviewed said that they had turned over some of their campgrounds to concessioners to operate and maintain. Partnering with states for transportation grants. 
The Park Service has worked with states to submit joint applications for a variety of Department of Transportation grants. For example, the Park Service, jointly with the District of Columbia’s Department of Transportation, received a $90 million grant from the Department of Transportation’s Fostering Advancements in Shipping and Transportation for the Long-Term Achievement of National Efficiencies (or FASTLANE) grant program to rehabilitate the Arlington Memorial Bridge, which links the District of Columbia to Arlington National Cemetery in Virginia. In addition, the Tennessee Department of Transportation received a $10 million Department of Transportation grant—a Transportation Investment Generating Economic Recovery (or TIGER) grant—to complete a section of the Foothills Parkway that runs through Great Smoky Mountains National Park, in North Carolina and Tennessee. The Park Service contributed an additional $10 million and the state contributed an additional $15 million in funds toward the $35 million project. Park Service officials said that these actions have helped address deferred maintenance at some park units, but that not all of these activities are well-suited to all park units or all maintenance needs. For example, they said that not all park units have assets that would be desirable for leasing. In addition, officials at several park units we interviewed said that philanthropic donors generally prefer to donate funds to projects that enhance parks or add new features, as opposed to addressing existing maintenance needs. The Park Service allocated an average of $1.16 billion annually to maintain its assets in fiscal years 2006 through 2015, but its deferred maintenance has continued to increase. To address its maintenance needs, the agency uses tools that are consistent with asset management guidance from the Office of Management and Budget and the National Academies Federal Facilities Council. 
In addition, the Park Service has determined that its highest-priority assets should be considered first for funding to keep them in good condition, and park unit staff use the agency’s Capital Investment Strategy to rank and prioritize projects for funding. However, several of the regional and park unit officials we interviewed said that the focus on high-priority assets may result in continued deterioration of less-critical assets, thereby increasing deferred maintenance. The Park Service does not have a plan or time frame for evaluating whether the Capital Investment Strategy has been successful. We recognize that it may be too soon to determine if the strategy is meeting its objectives given that fiscal year 2015 was the first budget year in which projects ranked using the strategy were funded. However, evaluating the Capital Investment Strategy and its results in a few years may allow the Park Service to determine if the strategy is achieving its intended outcomes or if changes need to be made. To ensure that the elements of the agency’s process for making asset maintenance decisions are achieving desired outcomes, we recommend that the Secretary of the Interior direct the Director of the Park Service to evaluate the Capital Investment Strategy and its results to determine if it is achieving its intended outcomes or if changes need to be made. We provided a draft of this report to the Departments of the Interior and Transportation for review and comment. The GAO Audit Liaison from the Department of the Interior responded via e-mail on December 5, 2016, that the department agreed with our recommendation and also provided technical comments, which we incorporated as appropriate. The Department of Transportation had no comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies to the appropriate congressional committees, the Secretaries of the Interior and Transportation, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made major contributions to this report are listed in appendix III. Our objectives were to examine (1) how much the National Park Service (Park Service) has allocated to maintain assets in fiscal years 2006 through 2015, (2) the amount and composition of the Park Service’s deferred maintenance in fiscal years 2009 through 2015, (3) how the Park Service makes asset maintenance decisions, and (4) the actions the Park Service is taking to help address its maintenance needs. To examine how much the Park Service has allocated to maintain assets in fiscal years 2006 through 2015, we obtained and analyzed maintenance allocation data from the Park Service for that period. According to the Park Service, deferred maintenance data from the end of fiscal year 2015 were the most current data available when we began this review. We analyzed the data to determine the amount of funds the Park Service had allocated in each year to the eight fund sources the agency uses for maintenance work: cyclic maintenance, repair and rehabilitation, facility operations, line item construction, recreation fees for routine maintenance, recreation fees for capital improvements, recreation fees for deferred maintenance, and Federal Lands Transportation Program. We assessed the reliability of these data through interviews with Park Service officials who were familiar with these data and reviews of relevant documentation. 
We found these data to be sufficiently reliable for the purposes of our reporting objectives. We also examined Park Service budget documents, including several agency budget justifications, and interviewed relevant Park Service officials at headquarters, regional offices, and park units to better understand the fund sources used for maintenance work. In addition, we obtained and analyzed Park Service funding data for fiscal years 2006 through 2015 from the Office of Management and Budget MAX Information System to compare agency funding to maintenance allocations. To examine the amount and composition of the Park Service’s deferred maintenance in fiscal years 2009 through 2015, we obtained and analyzed Park Service data on deferred maintenance for the agency’s assets. We obtained these data from the Facility Management Software System (FMSS), an agency-wide database that Department of the Interior agencies, including the Park Service, use to collect, track, and analyze asset management data. We also interviewed Park Service staff at the headquarters, regional, and park unit levels to better understand deferred maintenance. We began our data analysis with fiscal year 2009 because it is the first year the Park Service reported deferred maintenance for all of the assets under its management. From fiscal years 2006 through 2008, the Park Service reported deferred maintenance for eight major asset categories as agreed with the Office of Management and Budget—paved roads, buildings, campgrounds, housing, trails, unpaved roads, water systems, and waste water systems. In fiscal year 2009, the Park Service began reporting deferred maintenance for an additional category called all others to convey the deferred maintenance for assets that did not, by definition, fall into one of the other eight categories. The all others category used by the Park Service includes assets such as marinas, railroads, and interpretive media. 
We determined how the amount and composition of deferred maintenance had changed from fiscal year 2009 to fiscal year 2015 by obtaining Park Service reports generated from FMSS on the amount of deferred maintenance for each fiscal year by major asset category. According to the Park Service, deferred maintenance data from the end of fiscal year 2015 are the most current data available and include 409 park units as well as other properties under Park Service jurisdiction, such as regional offices. Park Service officials said that the data provided for each fiscal year were a snapshot in time that reflected the asset categorization methods in place at the time each report was generated. For example, officials said that some assets categorized as one type may have been treated as another type in a subsequent year. We analyzed these data over this period in both nominal and inflation-adjusted terms. We also obtained detailed data for fiscal year 2015 from FMSS for each of the Park Service’s more than 75,000 assets. We analyzed these data to identify how deferred maintenance varied according to certain key characteristics, such as asset priority, asset category, park unit age, and region. We assessed the reliability of the data by interviewing Park Service officials familiar with these data, observing those officials use FMSS, and reviewing relevant documentation. We found these data to be sufficiently reliable for the purposes of our reporting objectives. To determine how the Park Service makes asset maintenance decisions and to identify actions the Park Service is taking to help address maintenance needs, we interviewed relevant officials at the headquarters level. We also interviewed Department of Transportation officials who described the process the department uses to assess the condition of Park Service roads and bridges; park unit staff use these assessments to make maintenance decisions about those assets. 
In addition, we analyzed relevant documents, such as the Park Service’s Asset Management Plan, asset maintenance guidance documents, the Capital Investment Strategy Guidebook, and fact sheets, to obtain additional information about the process and tools. We compared information we learned about the Park Service’s process for making asset management decisions to the Office of Management and Budget’s Capital Programming Guide, the Federal Real Property Council’s Guidance for Real Property Inventory Reporting, and the National Academies Federal Facilities Council’s Key Performance Indicators of Federal Facilities Portfolios. We supplemented our analysis with information obtained from our prior reports. In addition, we interviewed the chief facility management official at each of the Park Service’s seven regions to understand the role these officials play in overseeing and managing maintenance needs at the park units within their regions. We also asked them to identify park units within their regions that had taken notable actions to help address deferred maintenance. We conducted semistructured interviews at 21 park units to learn about the process each follows to make asset maintenance decisions as well as actions the park units were taking to help address deferred maintenance. From the 409 park units in existence when we began this review, we selected 21 parks based on several criteria. Specifically, for each of the seven regions, we selected three parks: (1) the park with the greatest amount of deferred maintenance as of the end of fiscal year 2015, (2) a park that had a deferred maintenance amount in the lower half of all park units in that region, and (3) a park unit recommended by regional officials as taking additional or notable actions to help address deferred maintenance. We also ensured that these parks represented different park unit types, such as scenic, historical, military, recreational, and seashore. This sample is not generalizable to all park units. 
We visited 3 of these park units in person—(1) Independence National Historical Park, in Philadelphia; (2) George Washington Memorial Parkway, in Maryland, Virginia, and Washington, D.C.; and (3) Manassas National Battlefield Park, in Virginia—and interviewed officials with the remaining 18 park units by telephone or in person. See table 3 for a list of the parks we selected, along with selection criteria. We conducted this performance audit from July 2015 to December 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 4 shows the top 100 National Park Service park units ranked by the amount of deferred maintenance as of the end of fiscal year 2015. In addition to the contact named above, Elizabeth Erdmann (Assistant Director), Ying Long, Mick Ray, Anne Rhodes-Klein, and Michelle K. Treistman made key contributions to this report. Additional contributions were made by John Bauckman, Anna Brunner, Greg Campbell, Antoinette Capaccio, Scott Heacock, Carol Henn, Kim McGatlin, John Mingus, and Carmen Yeung.
The Park Service manages more than 75,000 assets, including buildings, roads, and water systems, at 413 park units across all 50 states. In 2015, the agency estimated that its deferred maintenance on these assets was $11.9 billion. GAO was asked to review how the Park Service manages its maintenance needs. This report examines, among other things, (1) agency allocations to maintain assets in fiscal years 2006 through 2015, (2) the amount and composition of the agency's deferred maintenance in fiscal years 2009 through 2015, and (3) how the agency makes maintenance decisions. To conduct this work, GAO analyzed Park Service allocation data for fiscal years 2006 through 2015 and deferred maintenance data in fiscal years 2009 (the first year data for all assets were available) through 2015 (most current data available); reviewed planning and guidance documents and compared the process for making asset management decisions to guidance developed by the National Academies, among others; and interviewed Park Service officials at headquarters, all seven regions, and 21 park units selected to include those with large and small amounts of deferred maintenance, among other things. This sample is not generalizable to all park units. In fiscal years 2006 through 2015, the Department of the Interior's National Park Service (Park Service) allocated, on average, $1.16 billion annually to maintain assets. In fiscal year 2015, allocations to maintenance accounted for about one-third (or $1.08 billion) of the agency's total funding of $3.3 billion. The largest portion of maintenance funds in fiscal year 2015 was allocated to facility operations, which includes maintenance that is routine in nature, such as maintenance of trails. The Park Service's deferred maintenance—maintenance of its assets that was not performed when it should have been and is delayed for a future period—averaged $11.3 billion from fiscal years 2009 through 2015. 
Bridges, tunnels, and paved roadways consistently made up the largest share of the agency's deferred maintenance, accounting for half of all deferred maintenance in fiscal year 2015. Older park units have the most deferred maintenance, with $10.5 billion in fiscal year 2015 in park units established more than 40 years ago. The Park Service uses several tools to determine an asset's importance and condition and assign maintenance priority. Park unit staff assess the condition of the asset and identify maintenance projects. Once identified, park unit staff use the agency's Capital Investment Strategy to evaluate and rank projects. Projects score higher if they target critical assets with deferred maintenance. Fiscal year 2015 was the first budget year in which projects ranked using the strategy were funded, and regional and park unit officials said that it is too soon to determine if the strategy is meeting its objectives, such as maintaining the condition of its most important assets. However, the Park Service does not have a plan or time frame for evaluating whether the strategy has been successful. A senior official said that the agency has not determined what is needed to begin such an evaluation and that it would be beneficial to verify that the Capital Investment Strategy is achieving intended outcomes. According to the National Academies Federal Facilities Council, it is important that agencies track the outcomes of investments to improve decision making and asset management. Evaluating the strategy may help the Park Service determine if the strategy is achieving intended outcomes or if changes need to be made. GAO recommends that the Park Service evaluate the Capital Investment Strategy and results to assess whether it has achieved its intended outcomes. The Department of the Interior agreed with GAO's recommendation.
The Community Development Block Grant (CDBG) program, created in 1974 and administered by the Department of Housing and Urban Development (HUD), is the most widely available source of federal funding assistance to state and local governments for neighborhood revitalization, housing rehabilitation activities, and economic development. Eligible activities include housing assistance, historic preservation, real property acquisitions, mitigation, demolition, and economic development. Because of the funding mechanism that the CDBG program already has in place to provide federal funds to states and localities, the program is widely viewed as a convenient, ready-made solution for disbursing large amounts of federal funds to address emergency situations. Eligible activities that grantees have undertaken with CDBG disaster recovery funds include public services, relocation payments to displaced residents, acquisition of damaged properties, rehabilitation of damaged homes, and rehabilitation of public facilities such as neighborhood centers and roads. Over the past two decades, CDBG has repeatedly been adapted as a vehicle to respond to federal disasters such as floods, hurricanes, and terrorist attacks. For example, Congress provided CDBG disaster relief funds to aid long-term recovery efforts after the Midwest floods in 1993 and for economic revitalization after the 1995 bombing of the Murrah Federal Building in Oklahoma City. CDBG funds were also provided to New York City in the aftermath of the September 11, 2001, terrorist attacks to aid in the redevelopment of Lower Manhattan. When the CDBG program is used to provide disaster relief funds, many of the statutory and regulatory provisions governing the use of CDBG funds are waived or modified, thereby providing states with even greater flexibility and discretion. 
Following the 2005 Gulf Coast hurricanes, $19.7 billion in disaster CDBG funds were provided to the most affected states—Louisiana, Mississippi, Alabama, Florida, and Texas—through three supplemental appropriations enacted between December 2005 and November 2007. The first CDBG supplemental appropriation, passed on December 30, 2005, provided $11.5 billion in CDBG funding and included, among others, a provision that prohibited any state from receiving more than 54 percent of that total appropriation. The second supplemental appropriation, passed on June 15, 2006, provided an additional $5.2 billion in CDBG funds and required that no state receive more than $4.2 billion of that appropriation. HUD was responsible for allocating the funds from these two supplemental appropriations among the five states in accordance with these and other statutory requirements. A third supplemental appropriation, passed on November 13, 2007, provided an additional $3 billion exclusively for Louisiana. According to HUD’s estimates, a total of 305,109 housing units suffered major or severe damage or were completely destroyed along the coasts of Louisiana, Mississippi, Alabama, Florida, and Texas. For perspective, figure 1 shows the states’ total damages in terms of housing units. Louisiana suffered the greatest devastation of any Gulf Coast state, with an estimated 204,737 damaged housing units—equal to 67 percent of the total estimated damage in the Gulf Coast region. Mississippi had the second highest degree of destruction, with an estimated 61,386 damaged housing units—20 percent of the total estimated damage in the region. The remaining damage—an estimated 38,986 housing units, or approximately 13 percent of the total—was spread across Alabama, Florida, and Texas.
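The damage shares cited above follow directly from HUD’s unit counts. As a quick arithmetic check using only the figures stated in this report:

```python
# HUD damage estimates cited above: housing units with major or severe
# damage, or completely destroyed, by state grouping.
damaged_units = {
    "Louisiana": 204_737,
    "Mississippi": 61_386,
    "Alabama, Florida, and Texas (combined)": 38_986,
}

total = sum(damaged_units.values())  # 305,109 units, matching HUD's total
shares = {state: round(100 * units / total)
          for state, units in damaged_units.items()}
# Louisiana ~67 percent, Mississippi ~20 percent, the remainder ~13 percent
```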
Based on HUD’s analysis of housing damage estimates for each of the five Gulf Coast states and in accordance with specific congressional provisions, the department distributed the $19.7 billion in CDBG funds that were appropriated for recovery efforts in the region. Louisiana received the greatest amount—68 percent of the total CDBG funding, or $13.4 billion. Mississippi received 28 percent of the total funding—$5.5 billion. The remaining 4 percent—almost $800 million—was allocated across Alabama, Florida, and Texas. Figure 2 shows a breakdown of the total amount of CDBG disaster recovery funds that HUD allocated to each state. Traditionally, grantees are afforded broad discretion as they decide how to allocate CDBG funds to specific projects and programs. In the aftermath of the 2005 Gulf Coast hurricanes, and in an action similar to past disaster recovery situations, Congress provided additional flexibility in the states’ use of CDBG funds. For example, lawmakers permitted HUD to waive certain regulations and statutes that would otherwise be applicable, including income targeting provisions and public service expenditure caps. Specifically, HUD was allowed to waive the statutory threshold requiring that 70 percent of total funds be allocated to activities that primarily benefit low- and moderate-income persons. Instead, only 50 percent of the total funds had to be targeted on this basis unless the Secretary found a compelling need to waive the targeting provision altogether. In addition, HUD suspended statutory requirements that limit the amount of CDBG money that can be used to provide public services to the affected communities. In conjunction with these increased flexibilities, Congress prohibited HUD from waiving four specific program requirements—nondiscrimination, environmental review, labor standards, and fair housing.
Once HUD allocated CDBG funds to the affected states, the state-level development agencies were responsible for the administration and management of these funds. In Louisiana and Mississippi, the two states that incurred the most damage, the authorities in charge of disaster recovery efforts were the Office of Community Development (OCD) and the Mississippi Development Authority (MDA), respectively. In Louisiana, OCD has managed the state CDBG program over the past two decades. After the 2005 Gulf Coast hurricanes, the Louisiana Commissioner of Administration created the Disaster Recovery Unit within OCD to administer the state’s share of CDBG disaster relief funds. Similarly, MDA’s Disaster Recovery Division was responsible for managing Mississippi’s share of CDBG disaster relief funds. Federal funding sources other than CDBG were available to states after the 2005 Gulf Coast hurricanes, but the CDBG supplemental appropriations provided the largest amount of money. Other sources included the Public Assistance (PA) grant program and the Hazard Mitigation Grant Program (HMGP), both administered by the Federal Emergency Management Agency (FEMA). In contrast to the PA grant program—which provides funds to support infrastructure recovery such as rebuilding schools, roads, and utilities—HMGP provides funds to states and local governments to implement long-term, cost-effective hazard mitigation measures after a major disaster. For example, HMGP funds may be used for projects such as flood-proofing properties, acquiring the property of willing sellers in hazard-prone areas and transforming it into open space, and retrofitting structures against earthquakes or hurricane-force winds. After the Midwest floods in 1993, CDBG and HMGP funds from HUD and FEMA, respectively, were used to acquire privately held property within floodplain areas in the affected states and convert the land to public uses, such as recreation or green space.
HUD allocations of CDBG disaster funds to the Gulf Coast states were designated for necessary expenses related to disaster relief, long-term recovery, and restoration of infrastructure in the most affected and distressed areas. States had great flexibility in choosing the types of recovery activities to initiate with their CDBG funds. Specific language in the supplemental appropriations acts required states to develop and submit action plans to HUD detailing the proposed use of all funds. Upon submission, HUD reviewed the action plans for acceptance. These action plans served as state proposals for how states would use their share of CDBG disaster funds and included descriptions of eligibility criteria and how the funds would be used to address both urgent needs and long-term recovery and infrastructure restoration. Any substantial program changes—presented as amendments to a state’s action plan for its use of CDBG disaster recovery funds—had to be submitted to HUD for review and acceptance. Recovery activities in Louisiana and Mississippi fell into four main categories: housing, infrastructure, economic development, and other projects. Both Louisiana and Mississippi devoted most of their allocations toward housing assistance, with a majority directed toward homeowners. For example, in November 2006, Louisiana allocated nearly 77 percent of its CDBG funds for housing assistance. Of that amount, approximately 80 percent was directed toward homeowners. In December 2006, Mississippi allocated 63 percent of its CDBG funds for housing assistance, of which approximately 98 percent was directed toward homeowners. Between 2006 and 2008 Louisiana and Mississippi modified the level of funding allocated for different recovery projects as shown in figures 3 and 4 below. 
For example, as permitted by CDBG guidelines, Louisiana increased the percentage of its total CDBG allocation for housing, while Mississippi reallocated a percentage of its housing funds for economic development needs. Louisiana’s increased focus on housing largely resulted from the additional supplemental appropriations that Congress provided amid concerns that the state’s housing recovery program needed additional funds. As of November 2006, Louisiana had received $10.4 billion in CDBG disaster relief funds. After Congress granted an additional $3 billion in CDBG funds exclusively for Louisiana in November 2007, the state’s total increased to $13.4 billion. Mississippi’s reprogramming of funds was largely attributed to the state’s decision to repair one of its storm-damaged ports. The different approaches that Louisiana and Mississippi took in designing and developing their own homeowner assistance programs led to substantially different experiences. The specific goals and details of the two states’ initial designs differed significantly. Louisiana started with a program design that included incentives to promote home rebuilding and ensure retention of the state population. The primary concern for Louisiana state officials was to bring residents back to their communities to begin the process of rebuilding. Mississippi, on the other hand, adopted a much simpler design, which awarded one-time, lump-sum payments to homeowners to compensate them for their losses, independent of their choice to rebuild. This helped Mississippi avoid many of the challenges and delays that Louisiana would experience, as discussed in the next section of this report.
Louisiana initially adopted a plan that tied federal funds to home reconstruction and controlled the flow of funds to homeowners while Mississippi paid homeowners for their losses regardless of their intentions to rebuild. Specifically, Louisiana initially created a program that incorporated certain elements from two different housing recovery program models: compensation and rehabilitation. Although there is no written guidance distinguishing between the two models, HUD officials explained to us what the major differences are between the two programs. Generally, in a rehabilitation program, funds are used explicitly for repairs or reconstruction projects. In contrast, a compensation program disburses grant payments directly to homeowners for the damages they suffered regardless of whether they intend to repair or rebuild. Furthermore, rehabilitation and compensation programs are subject to different legal and financial requirements in terms of HUD’s oversight responsibilities. HUD officials explained that under a rehabilitation model, binding federal funds to reconstruction triggers several federal requirements, including site-specific environmental reviews of each property. Federal and state officials said these environmental reviews can be costly and time consuming, taking perhaps several months to years to complete. Officials in both states said this was a key factor considered when deciding whether or not to adopt the rehabilitation model. Under a compensation model, environmental reviews are not required because recipients are not required to spend their grant proceeds on home repair and reconstruction. Senior HUD officials said that historically, CDBG funds have not been used for compensation programs, but rather rehabilitation, reconstruction, and rebuilding programs. 
According to senior HUD officials involved with administering CDBG disaster assistance, CDBG is not often used for compensation programs because it is difficult to know what recipients will do with the money. Louisiana’s other solution was to try to use multiple federal funding streams for its housing recovery program. Specifically, the state planned to finance the purchase of properties with CDBG funds and essentially pay itself back with FEMA funds. Mississippi, on the other hand, chose not to combine federal funds together in this way. Existing guidance was not sufficient to address Louisiana’s approach, and failures in communication hindered full understanding of the problems with the state’s particular program and funding designs. As a result, Louisiana encountered many challenges to implementing its recovery efforts that Mississippi did not. In Louisiana, there were two major problems stemming from its particular program and funding designs. The first was a misunderstanding between Louisiana officials and HUD staff as to whether the design for their housing recovery program could be considered a compensation program, as opposed to a rehabilitation program. The two types of programs have different regulatory requirements as noted above. In particular, rehabilitation programs require costly and time-consuming site-by-site environmental reviews, whereas compensation programs do not. Louisiana’s program design was labeled as compensation, even though it contained elements of a rehabilitation program. This led to more misunderstandings with HUD and delays in program implementation. The second problem came up when, on the advice of the Federal Coordinator for Gulf Coast Rebuilding (Federal Coordinator) according to state officials, Louisiana tried to use multiple federal funding sources for its housing recovery program. Louisiana state officials planned to use FEMA and HUD funds together for purposes allowable under the requirements for each funding source. 
However, the manner in which the state planned to use the funds to finance the state’s purchase of residential properties led the state to run afoul of certain programmatic and legal requirements governing the FEMA funds. All of these problems led to more delays in funding and program implementation. Louisiana state officials developed and started to implement the Road Home program, which evolved over the course of approximately 12 months between May 2006 and May 2007. Throughout, HUD staff provided technical assistance to state officials and conducted scheduled monitoring visits for oversight purposes. To explain the evolution of the Road Home program, we identified three key phases and their associated milestones: the original design (approved May 2006), the revision (accepted July 2006), and HUD’s cease and desist order (issued March 2007). Table 1 below highlights the time line of events surrounding the evolution of the Road Home program. This first phase began after HUD announced Louisiana’s first CDBG disaster funding allocation in February 2006. In the following months, Louisiana state officials established the goals of the Road Home program, which were aimed at encouraging residents to return to their neighborhoods and rebuild their storm-damaged homes. To meet the program goals, Louisiana state authorities developed the main housing assistance components of the program and offered homeowners three specific options through the program. Homeowners could: (1) rebuild homes on their own properties, (2) sell their properties and relocate within the state, or (3) sell their homes and relocate outside of the state. Those homeowners who chose to stay in Louisiana were eligible to receive a larger grant award than those who chose to leave the state. As noted above, in contrast to Louisiana’s home recovery plan, Mississippi chose early on to adopt a homeowner assistance program that was clearly within the terms of a compensation model. 
Once Louisiana homeowners applied for Road Home assistance, they had to select their preferred benefit option. Among the three choices available to homeowners, rebuilding was the most popular selection. Of the 143,580 homeowners who returned their benefit preference to state officials, almost 88 percent chose to stay and rebuild their storm-damaged homes. By April 2006, Louisiana officials completed the state’s action plan. The action plan outlined the different types and amounts of assistance available to homeowners, eligibility criteria, formulas to calculate recipient grant awards, and a general description of the disbursement process to transfer funds to eligible recipients. Under the initial design of the Road Home program, homeowners who chose to rebuild were required to meet code and zoning requirements and comply with the latest available FEMA guidance for base flood elevations. These homeowners were required to use the home as their primary residence for at least 3 years upon completion of repairs. Louisiana submitted the plan to HUD on May 12, 2006—3 months after HUD announced each state’s allocation of CDBG funds. The HUD Secretary approved Louisiana’s action plan for the Road Home program approximately 2 weeks later, on May 30, 2006. The second phase began in the months immediately following the HUD Secretary’s approval of the Road Home action plan. Key HUD staff and state officials had different interpretations and expectations of exactly how the Road Home program would operate. Because HUD lacked written program guidance, there were no concrete federal definitions that HUD or the state could refer to. According to HUD officials who were involved in reviewing Louisiana’s action plan, their understanding was that Road Home would operate as a rehabilitation program—thereby requiring site-specific environmental reviews.
In contrast, Louisiana state officials thought CDBG program rules provided sufficient flexibility so that the program they proposed qualified as a compensation model and would not require the environmental reviews. As a result, Louisiana officials revised the action plan in an attempt to resolve the conflicting interpretations and re-submitted the plan for HUD review. HUD officials said that they worked with state officials to revise the language in the action plan to reflect more of a compensation-type program so as to not trigger the site-specific environmental requirements. On July 12, 2006, HUD officials accepted the revised plan for Road Home as a compensation program. Upon HUD’s acceptance of the action plan, the state continued to develop the operational and payment structures to implement Road Home and initiated a pilot of the program. On August 11, 2006, the state provided HUD with additional Road Home program clarifications that further explained the formulas used to calculate grant award payments to homeowners and the covenants that would be placed upon the homes of those who chose to stay in their homes and rebuild. The covenants required that Louisiana homeowners rebuild and elevate their homes in accordance with applicable codes and local ordinances and that the home be owner-occupied for at least 3 years after receiving compensation and be covered by the appropriate insurance. The purpose of these covenants was to ensure that homeowners returned to their neighborhoods and helped to rebuild the community. The state also clarified that homeowners who did not fulfill the terms of their covenants may not receive benefits or may have to repay all or some of the compensation they received. In addition, homeowners would receive grant proceeds incrementally to ensure covenant compliance, but were not required to use the money for repairs and rebuilding costs. 
To meet the requirements for a compensation program, the action plan amendment explicitly stated that homeowners had complete discretion as to the use of the compensation they received. HUD approved the clarifications in the amendment on August 22, 2006. Another step forward for the program involved state officials establishing agreements with local financial institutions and defining their roles and responsibilities in the disbursement process for transferring funds from the state to eligible recipients. Agreements were also established between individual homeowners, the state, and the financial institutions and included specific provisions and requirements that outlined the terms of grant award disbursement. During this second phase, the state worked out some of the details of its original design. For example, homeowners who chose to rebuild their storm-damaged homes would receive their CDBG grant awards in incremental payments as they provided evidence to the state that rebuilding efforts were under way. Under the Road Home program, a homeowner would receive an initial portion of his or her grant proceeds equal to either $7,500 or 10 percent of his or her total grant proceeds—whichever was less—upon execution of a contract for repairs. Subsequently, the homeowner would receive a second portion of the grant proceeds equal to no more than one-third of the total funds necessary to rebuild his or her home upon completion of a commensurate amount of repairs. Similarly, a third payment would be disbursed upon completion of two-thirds of the necessary repairs, followed by a final disbursement of the remaining grant proceeds. Final payment of the grant proceeds would be withheld until the state received verification that the work was actually complete, thereby protecting the homeowner from potential fraudulent contractor activity.
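The incremental schedule described above can be sketched as follows. This is an illustrative simplification, not the state’s actual payment logic: it assumes the second and third payments each equal one-third of the total grant and that the final payment is whatever remains after verification of completed work.

```python
def road_home_payments(total_grant):
    """Illustrative model of the original Road Home disbursement schedule.

    Assumption (not from official program rules): the second and third
    payments each equal one-third of the total grant, and the final
    payment is the remainder, withheld until the state verifies that
    the work is complete.
    """
    # Initial payment upon execution of a repair contract:
    # $7,500 or 10 percent of the total grant, whichever is less.
    initial = min(7_500.0, 0.10 * total_grant)
    # Second payment after roughly one-third of repairs are complete.
    second = total_grant / 3
    # Third payment after roughly two-thirds of repairs are complete.
    third = total_grant / 3
    # Final payment withheld until the state verifies the work is done.
    final = total_grant - (initial + second + third)
    return [initial, second, third, final]
```

For a hypothetical $90,000 grant, this yields payments of $7,500, $30,000, $30,000, and $22,500, illustrating how the bulk of funds was held back until rebuilding was demonstrably under way.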
Louisiana state officials chose this type of incremental payment system to provide assurance that federal funds were being spent as intended, and to provide information that could be used to measure the progress of homeowners’ rebuilding. Tying CDBG funds to home repairs in this manner is consistent with a rehabilitation program; however, Road Home had been labeled and approved as a compensation program. One year after Hurricane Katrina made landfall, the Road Home program was officially launched statewide in late August 2006, when the state began accepting applications from homeowners and continued forward with plans to disburse initial payments to those homeowners who participated in the pilot. Throughout the subsequent months, state authorities mailed out award letters to individual homeowners explaining the three options available to them and giving them deadlines for their participation. In addition, housing assistance centers, where eligible homeowners could speak with trained housing advisors to help guide them through the process and make informed decisions about their options, opened statewide. Housing advisors also collected critical information from homeowners regarding ownership, insurance, and mortgage balances—information that was required to process individual applications and inform benefit calculation. In November 2006, Louisiana submitted another action plan amendment that, among other changes, removed any penalty for elderly homeowners who chose to relocate outside of Louisiana. HUD did not consider the contents of the November 2006 amendment to be substantial enough to require formal review, allowing state officials to continue forward with the Road Home program. By March 15, 2007, Louisiana had received more than 116,000 applications for Road Home assistance and disbursed approximately $214.4 million to nearly 3,000 homeowners.
The third phase began on March 16, 2007, when HUD ordered Louisiana officials to “cease and desist” Road Home, approximately 7 months after the program was fully operational. According to HUD officials, they found that the program was operating more like a rehabilitation program—meaning that CDBG funds were paying for home repairs and reconstruction exclusively—and therefore, participating homes were subject to site-by-site environmental reviews. While there is no written documentation explaining HUD’s decision reversal nor any written federal guidance outlining the specific terms of a compensation program, key HUD officials provided us with a verbal explanation. Specifically, they said that HUD staff conducted a scheduled monitoring visit to Louisiana and reviewed the operating documents for the Road Home program, including the grant disbursement agreements. According to an e-mail from the HUD Assistant Secretary to key HUD staff involved in Gulf Coast recovery, there was an “apparent inconsistency” between Road Home program operations and the approved action plan. HUD would not allow the state to move forward with the program until adjustments were made to the disbursement process or until the state conducted the required site-by-site environmental reviews. According to Louisiana state officials, they were very surprised and frustrated by HUD’s decision because of the department’s previous acceptance of the Road Home program design. One top Louisiana official testified on May 8, 2008, before the House Financial Services Subcommittee on Housing and Community Opportunity that the environmental assessments were the single biggest reason for the state opting to implement a compensation model over a rehabilitation model. During the early development of Road Home, Louisiana conducted a study to estimate the costs, staff requirements, and time needed for full site-by-site property reviews.
The state concluded that there was no practical way to cost-effectively perform the environmental assessments on well over 100,000 homes and expect to get rebuilding money “on the street” in a timely manner. State officials met with key HUD staff in Washington, D.C., to discuss HUD’s concerns and reconcile conflicting federal and state interpretations of what qualifies as a compensation program versus a rehabilitation program. However, as noted earlier, Louisiana officials believed that CDBG regulations provided sufficient flexibility for its approach to operate a compensation program. After these discussions, Louisiana state officials decided to abandon their plans to disburse grant proceeds incrementally and chose to provide funds to homeowners in lump-sum payments. State officials submitted an action plan amendment to HUD in early May 2007 to document this policy change. HUD approved the revised action plan shortly thereafter and said in its approval letter to state officials that the changes addressed the Department’s concerns that Road Home “did not comply with the requirements of a true compensation program.” Louisiana state authorities announced the policy change on April 7, 2007, explaining that homeowners would receive a full payment of their Road Home grant award in lump sum. They also recommended that homeowners consult with financial advisors, lenders, and housing counselors before beginning home repairs and encouraged homeowners to establish voluntary disbursement accounts with lenders to help guard against fraudulent contractor activity. While the state continued to accept homeowner applications, individual covenants had to be revised; homeowner grant awards had to be recalculated; and scheduled house closings were postponed. 
Many federal and state officials said that the original design of the Road Home program, with its incremental payment process, was better aligned with the state’s original priorities of ensuring long-term rebuilding and incorporating front-end assurances that federal funds were spent as intended than was the lump-sum compensation program that was ultimately implemented. Senior HUD officials told us that a compensation model could disburse funds to homeowners incrementally rather than in one-time, lump-sum payments, but the officials were unable to provide any examples where such an approach had been taken when disbursing CDBG disaster recovery funds. While the CDBG program provided much of the federal assistance in the aftermath of the 2005 Gulf Coast hurricanes, several other federal programs provided assistance to Louisiana to support the state’s comprehensive long-term recovery efforts, including, among others, FEMA’s Hazard Mitigation Grant Program (HMGP). Louisiana was eligible to receive almost $1.5 billion from FEMA’s HMGP, which is part of a broad framework of FEMA initiatives authorized by the Robert T. Stafford Disaster Relief and Emergency Assistance Act. Similar to the evolution of Louisiana’s Road Home program design discussed above, the state’s proposal to use HMGP funds for homeowner assistance also evolved for more than a year. Soon after the first supplemental appropriation was enacted in late December 2005, federal and state officials entered into negotiations for a second appropriation of CDBG funding in light of concerns that the Road Home program needed additional funds. State officials reported that as part of these negotiations, the Office of the Federal Coordinator advised them on how to incorporate HMGP funding into the Road Home program. Doing so would have reduced the amount of additional CDBG funds the state would request from Congress.
As part of their proposed funding design for the Road Home program, Louisiana state officials planned to use approximately $1.1 billion in HMGP funds in coordination with CDBG funds to purchase over 12,000 properties from homeowners who chose to relocate. The properties would have initially been purchased with CDBG funds. At some point, but not necessarily before their purchase, state officials and other stakeholders would determine exactly which properties would be converted to open space and which ones would be redeveloped. The cost of the properties converted to open space would then be reimbursed with HMGP funds, as converting properties to open space is an acceptable mitigation activity under HMGP rules and regulations. In traditional HMGP projects, funding is typically passed through the state government to a local sub-grant applicant to coordinate with property owners. In this case, the state itself would have been coordinating directly with property owners. According to FEMA officials, they first found out that the state expected to use HMGP funds toward Road Home in July 2006, after the state had already published its action plans, which stated this intention. FEMA also claimed that the agency was not included in the initial negotiations between the state and the Federal Coordinator’s office. However, Louisiana state officials reported that FEMA verbally committed to allowing the state to use HMGP funds for acquisitions. Throughout the second half of 2006 and much of 2007, state officials met regularly with FEMA as well as HUD—who did not oppose the initial plan—to work out an agreement for the use of HMGP funds. However, it still took over one year for FEMA and the state to come to an agreement over how HMGP funding could be used. In late September 2006, Louisiana submitted its application to FEMA for HMGP funds in accordance with its planned property acquisition project. 
In a December 13, 2006, letter to the Louisiana Governor’s Office for Homeland Security and Emergency Preparedness (GOHSEP)—the entity responsible for administering and managing the state’s HMGP funds—FEMA expressed its concerns with Louisiana’s application. FEMA denied Louisiana’s application to use HMGP toward the Road Home program in February 2007 because the agency asserted that the state’s intended plan to use HMGP for property acquisition did not meet statutory, regulatory, or programmatic requirements. Specifically, FEMA cited three main aspects of the proposal that formed the basis for its rejection: the plan exempted senior citizens from a specific financial penalty under the Road Home program, which violated FEMA’s statutory requirement of non-discrimination based on age; the plan did not adequately involve local jurisdictions; and the application itself was too general and did not contain project-level data and specific budget information. FEMA officials also indicated that the proposed project was inconsistent with the overall purpose of HMGP; namely, they perceived the Road Home proposal as more focused on redevelopment than long-term hazard mitigation. As noted in FEMA’s letter to Louisiana, the state did not have a plan to coordinate with local officials to identify which properties would become part of the HMGP program before their acquisition. However, state officials could not identify the total number of properties that would be converted to open space or redeveloped until homeowners indicated to the state their choice to either keep or sell their storm-damaged property in accordance with the benefit options available under the Road Home program. Consequently, FEMA maintained that this arrangement would not provide enough detail for the agency to determine project eligibility for specific properties.
It appeared that the Road Home program lacked sufficient budgetary resources, and the state’s priority was to “backfill” the CDBG account with proceeds from HMGP, according to one FEMA official. Louisiana state authorities submitted an appeal to FEMA on April 4, 2007. In that letter, GOHSEP urged FEMA to reverse its decision and allow the Road Home program to use HMGP funds to acquire property and transform it into open space. GOHSEP argued that the combination of the CDBG-funded Road Home program and the HMGP acquisition project had greater potential to reduce future damages than any other hazard mitigation project ever funded. Table 2 below shows the time line of events surrounding the state’s attempt to leverage HMGP funding. FEMA denied the state’s appeal on July 16, 2007. According to FEMA regulations, HMGP funds cannot be used as a substitute for, or replacement of, funding for projects or programs that are available under other federal authorities, except under limited circumstances in which there are extraordinary threats to lives, public health or safety, or improved property. HMGP funds may, however, be packaged or used in combination with other federal, state, local, or private funding sources when appropriate to develop a comprehensive mitigation solution. One FEMA official testified that HMGP is not designed to compensate individuals for disaster losses. Rather, HMGP provides communities with resources to implement long-term solutions that reduce the risk to citizens and public facilities from hazards. The Stafford Act permits HMGP funds to be used in connection with flooding for property acquisition as long as the property’s use is compatible with an open space, recreational, or wetlands management practice, among other requirements. In Louisiana’s case, the state planned to use HMGP funds to transform flood-prone properties into open space and relocate homeowners out of harm’s way. 
In a May 24, 2007, hearing before the Senate Homeland Security and Governmental Affairs Subcommittee on Disaster Recovery, FEMA’s Assistant Administrator for Mitigation testified that the agency “agreed in concept to this approach and began developing the legal and programmatic framework to make it work.” However, FEMA later determined that the state’s implementation approach did not allow for compliance with HMGP requirements. Although HUD and FEMA are bound by similar nondiscrimination statutes and regulations to determine project eligibility for their respective disaster recovery programs, the agencies reached different conclusions about their ability to fund the Road Home program. FEMA’s determination that it could not fund the program because of HMGP’s nondiscrimination requirements prevented HMGP funding from being used for property acquisitions as initially intended by the state. Under the Road Home program, homeowners who chose to sell their properties to the state and relocate outside Louisiana or those who sold homes and remained in the state without purchasing new properties incurred a financial penalty. Specifically, homeowner grants were reduced by 40 percent. Elderly homeowners (65 or older as of December 31, 2005), however, were exempt from this penalty. HUD did not find that this measure violated the nondiscrimination requirements applicable to CDBG funds, which were used to fund the Road Home program. However, citing the Stafford Act, FEMA determined that such an exemption was discriminatory toward homeowners under the specified age limit. From the state’s perspective, it was not necessarily clear how two separate nondiscrimination provisions would be applied to the Road Home program. In effect, different federal determinations frustrated the state’s efforts to design and implement this piece of its housing recovery program. 
Louisiana successfully redesigned the program to use HMGP funding primarily for elevation grants, but the vast majority of homeowners have yet to receive funds. After FEMA denied the state’s appeal to use HMGP funds for property acquisitions, state officials redesigned their request and submitted a new application to FEMA to use HMGP funding for homeowner elevation grants and reconstruction projects. FEMA approved the plan, and in October 2007 the agency approved a small batch of test properties as eligible to receive HMGP funds. Although the state administered the new elevation/reconstruction project through the Road Home program, HMGP funds were not integrated with CDBG funds as they would have been under Louisiana’s original acquisition proposal. Both federal and state officials characterized the HMGP elevation project as running on a separate but parallel track to the CDBG-funded homeowner assistance program. Homeowner demand for the projects has been less than expected, in part because of the length of time it has taken to develop and implement the program. Consequently, the state has reallocated funds from the state-run HMGP elevation/reconstruction program to community-led traditional HMGP projects. Our past work found that the application process for HMGP funds can be complex and time- and resource-intensive. Long delays can occur in receiving funds, which can lead to additional obstacles for local communities. For example, delays in receiving grant funds can prevent a city from being more cost-effective in terms of mitigation. These types of HMGP delays were only exacerbated in Louisiana when the state attempted to use HMGP funds alongside CDBG funds. While FEMA has taken several steps to streamline its review processes, every property must still meet FEMA eligibility requirements, including environmental and historical preservation requirements. 
In our prior work, one local mitigation official said that it would be most effective to conduct mitigation activities immediately after a storm event, when damages are being repaired, rather than waiting for HMGP funds to become available. According to FEMA, while states normally have up to one year from the date of a disaster declaration to apply for HMGP funds, the approval process can begin much earlier following a disaster if state and local officials have previously identified viable mitigation projects that are consistent with state and local mitigation plans. However, without effective communication between federal agencies, states are challenged to coordinate the multiple streams of federal funding typically needed to address recovery from catastrophic disasters. In the immediate aftermath of such a catastrophic disaster, Louisiana and Mississippi state development agencies lacked sufficient capacity to effectively manage billions of dollars in federal assistance. This is most evident in the human capital challenges state agencies faced as they designed and developed state CDBG programs of unprecedented size. We found that Louisiana and Mississippi employed similar approaches to build organizational capacity and address human capital needs, including the creation of new state entities, hiring private contractors, and hiring additional state agency staff. Prior to the 2005 Gulf Coast hurricanes, state community development agencies in both states had experience managing CDBG program budgets of similar size to one another. For example, between fiscal years 2002 and 2005, the average annual budget for the CDBG program administered at the state level was approximately $34.6 million in Louisiana and $35.6 million in Mississippi. Funding allocations for state CDBG programs have been generally declining in recent years. 
Specifically, Louisiana’s state CDBG program budget decreased to $27.6 million in 2008 while Mississippi’s budget decreased to $29.8 million. The amount of CDBG disaster recovery funds Congress provided after the Gulf Coast hurricanes translated into enormous increases in both states’ CDBG budgets. Table 3 shows the annual state CDBG budgets for Louisiana and Mississippi compared to the amount of CDBG disaster funds provided to each state for Gulf Coast hurricanes recovery and rebuilding efforts. Both states were suddenly responsible for managing and administering multibillion-dollar programs that were substantially larger than their more typical multimillion-dollar programs. Both states created new entities to coordinate and oversee rebuilding efforts and to serve as policymaking bodies responsible for planning and coordinating efforts throughout the state. In Louisiana, the governor created the Louisiana Recovery Authority (LRA) within the state’s executive branch in October 2005. As part of its responsibilities, LRA was charged with establishing spending priorities and plans for the state’s share of CDBG funds, subject to the approval of Louisiana’s state legislature. LRA’s primary goals included securing funding for recovery and rebuilding, identifying and addressing critical short-term recovery issues, and providing oversight and accountability. While LRA was responsible for developing and issuing policies on the state’s recovery, the Office of Community Development (OCD) was responsible for administering the Road Home program and managing the day-to-day implementation of LRA’s policies. In 2008, under the leadership of a newly elected governor, OCD merged with LRA, creating a more centralized structure for authority and oversight of the state’s recovery activities. Moreover, the executive director of the recently combined LRA and OCD now serves as the governor’s authorized representative to the President for disaster recovery in Louisiana. 
According to one state official we spoke with, this consolidated leadership structure has improved the operation of the Road Home program. Similarly, in Mississippi, the governor created the Governor’s Office of Recovery and Renewal in January 2006, which served as a policy-oriented body and had the primary responsibility for designing the state’s various recovery programs and shaping the state’s overall approach to rebuilding. Among its responsibilities, the office coordinated relief efforts among federal and state agencies and other public and private entities. Its primary objectives included obtaining the maximum amount of federal funds and maximizing the use of credit in lieu of cash, providing policy advice and formulation to the governor and state agencies, providing technical assistance and outreach to local governments, and facilitating the implementation of recommendations made by the Governor’s Commission. While the Governor’s Office of Recovery and Renewal is responsible for setting policies for long-term recovery plans, the Mississippi Development Authority (MDA) is responsible for implementing these policies and administering the state’s CDBG disaster recovery programs. Officials from the state development agencies—OCD and MDA—recognized the need to build the states’ organizational capacities to address the enormous task of developing and managing massive housing recovery programs. In response, both Louisiana and Mississippi hired additional state agency staff; however, Mississippi lagged behind Louisiana in this effort. Specifically, Louisiana OCD made use of state civil service provisions that allowed the agency to recruit higher-salaried, term-appointment managers who were well-qualified disaster recovery experts. According to the former Executive Director of OCD, despite most of the key staff in the agency having more than 20 years of experience working with the CDBG program, the agency needed additional help. 
OCD hired staff from other states who had experience implementing federal housing programs and working with CDBG disaster program funding. These individuals came from various states including Kentucky, New York, North Dakota, and Pennsylvania and were placed in top management positions within the agency. In Mississippi, top agency officials also acknowledged that the state and local governments were overwhelmed by the scale of destruction left by the 2005 Gulf Coast hurricanes. One top MDA official said the agency did not have a sufficient number of staff in place—particularly staff with expertise in CDBG-funded disaster recovery programs—to administer a wide range of new programs. Top MDA officials said that they had approximately 20 people assigned to disaster recovery positions in the aftermath of the 2005 Gulf Coast hurricanes. Eventually, MDA increased its disaster recovery staffing level in 2008 after receiving an evaluation and recommendations from HUD. For example, an audit conducted by the HUD Inspector General found that MDA did not have adequate staff to monitor implementation of the state’s Homeowner Assistance Program and had not established the required monitoring processes. The supplemental appropriations act that provided CDBG funds for recovery from the 2005 Gulf Coast hurricanes, coupled with the state’s HUD-approved action plan, required that MDA establish and implement monitoring processes to ensure that program requirements were met and to provide continuous quality assurance. In addition, HUD’s Office of Community Planning and Development, which was the office responsible for administering and overseeing CDBG disaster recovery funds, found that MDA had insufficient separation of duties, thereby negatively affecting the state’s fiscal controls and accounting procedures for billions of federal CDBG dollars. 
Specifically, HUD found that one individual had multiple roles of authority as a program and financial manager and monitor as well as an invoice approval and reconciliation manager. In response to HUD’s findings, MDA restructured the agency and created a separate bureau to handle all reporting and monitoring responsibilities. The agency also hired 30 people, for a total of approximately 50 employees assigned to administering and managing the state’s disaster recovery work. In addition to hiring additional agency staff, both states contracted with private firms to help state development agencies implement and manage their housing assistance programs. Both states hired contractors to set up customer service centers, process applications, determine and verify eligibility and calculate damage compensation amounts, develop tracking procedures, prevent duplication of benefits, and develop cost estimates. Louisiana officials recognized that even with the additional agency staff, the agency still did not have the capacity to manage all of those operations. When reviewing potential contractors to implement the Road Home program, state officials engaged multiple stakeholders in an inclusive and transparent process. For example, the top OCD official at the time brought in experts from other states to score and rank the various proposals. ICF International’s proposal was unanimously chosen by the selection team with the support of the state legislature and state attorney general’s office. The initial contract between OCD and ICF International cost approximately $756 million, but the cost increased to $912 million in December 2007 when the number of homeowners estimated to receive assistance increased from 100,000 to 160,000. Similarly, MDA recognized its need to build capacity and contracted with Reznick Group to implement its Homeowners Assistance Program. One top MDA official said the agency relied heavily on Reznick to manage program operations on the ground. 
Mississippi’s contract with the firm cost an estimated $88 million. According to state officials in both Louisiana and Mississippi, the state development agencies responsible for designing and administering CDBG-funded housing recovery programs needed regular, on-site technical assistance from HUD staff. While HUD staff did conduct four to five on-site monitoring and technical assistance visits per year as part of its oversight responsibilities, a number of state officials pointed to a need for clarification and further explanation of various federal regulations, environmental requirements, and waivers related to the states’ use of CDBG funds in disaster recovery activities. HUD has field offices in both states; however, the CDBG disaster recovery program for the Gulf Coast was managed out of the agency’s Washington, D.C. headquarters office. In Mississippi, MDA officials asserted that some of their agency’s capacity challenges and frustration interacting with HUD could have been alleviated if one HUD official had been assigned to work in their office. Specifically, they identified two ways that on-site HUD assistance would have benefited the state. First, a HUD representative would have brought extensive CDBG expertise and helped to fill the knowledge gap at the state level. Second, a HUD representative with sufficient decision-making authority could have reduced bureaucratic delays and led to quicker program implementation. According to MDA officials, HUD’s first visit to Mississippi was in the fall of 2007, almost 2 years after Hurricane Katrina’s landfall. Similarly, top officials in LRA and OCD also highlighted the state’s need for on-site technical assistance, adding that such an arrangement could have helped Louisiana avoid some of the challenges it encountered with the Road Home program. 
For example, one top state official said that HUD’s on-site presence would have strengthened the state’s ability to evaluate its options during program design, particularly when choosing to implement a compensation model versus a rehabilitation model. Another top state official in Louisiana expressed frustration that HUD encouraged the state to be creative only to get “stuck” in trying to do so because of insufficient guidance from HUD on the nuances related to CDBG disaster funds. In that official’s opinion, it would have been more helpful if HUD’s role had been less prescriptive while still providing a clear sense of direction to the states. Louisiana officials suggested that HUD representatives could work in state agencies for a couple of months at a time, rather than providing technical assistance by phone—which was typically the case after the 2005 Gulf Coast hurricanes. The former top OCD official said that they never had HUD on the ground with them in that capacity. Officials in both states expressed frustration with CDBG as a funding delivery mechanism, and were critical of its effectiveness in disaster recovery programs. According to one key state official, CDBG is a totally inappropriate source for funding disaster recovery efforts and it is not well-designed to meet the immediate needs of residents. Other top state officials were critical of HUD’s assistance to the state because of HUD’s approach of force-fitting the program rules and regulations applicable to traditional CDBG programs to state disaster recovery programs. In the states’ view, the magnitude of the 2005 Gulf Coast hurricanes made many of the traditional CDBG rules that were not already waived or modified impractical and slowed the process. Additionally, state officials noted that in some instances there appeared to be disagreements between the stated policies of top HUD management and the technical assistance provided by mid-level HUD staff during program implementation. 
Although CDBG has been widely viewed as a convenient, expedient, and accessible off-the-shelf tool for distributing federal assistance funds to states, it proved to be slower, less flexible, and more difficult to manage than expected. The experiences in Louisiana and Mississippi provide insights when considering the effectiveness of adapting an existing funding delivery mechanism like CDBG when responding to catastrophic disasters. For example, broad discretion was granted to the states to tailor a CDBG program to their needs. However, when Louisiana took its specific approach to provide compensation to homeowners and encourage rebuilding, state officials encountered federal environmental requirements that made such a housing recovery approach impractical. While the environmental requirements were outlined in law and regulations, it was unclear to which cases the requirements would apply. The lump-sum compensation design that both Louisiana and Mississippi ultimately chose channeled CDBG funds to homeowners with fewer assurances to the states that people would actually rebuild and contribute to community development. In Louisiana, state officials were challenged by the inconsistent and conflicting guidance they received from different federal entities when coordinating different federal funding streams. When states are faced with navigating the numerous complexities of a funding delivery mechanism like CDBG, along with other federal disaster recovery funds, it is critical that federal guidance and assistance be clear, concise, and consistent to help minimize misunderstandings, confusion, and program delays. This is particularly important when states are developing their approaches to disaster recovery. It is also true when states are managing other sources of federal funds, some of which may or may not be combined for projects of similar purpose. 
Louisiana’s experience with HMGP raises questions about the need for federal regulations or operational guidance that clearly outline the options and the limitations of coordinating different disaster-related funding streams in the aftermath of a catastrophic event. Valuable opportunities also exist for the federal government—primarily HUD, FEMA, and the Office of the Federal Coordinator—to reflect upon lessons learned to improve federal assistance to states devastated by a catastrophic disaster. At the state level, any disaster that creates such catastrophic damage and devastation will present state authorities with the immediate need and challenge of building additional human capital capacity. The steps Louisiana and Mississippi state officials took to address such challenges, including the creation of a policymaking and coordinating entity to lead recovery efforts and hiring experienced disaster recovery staff, provide valuable lessons at both the federal and state level for future disaster recovery efforts. We recommend that the Secretary of the U.S. Department of Housing and Urban Development take the following two actions: Develop and issue written CDBG disaster assistance program guidance for state and local governments to use as they begin to develop plans for housing recovery efforts and disbursing federal assistance to residents after natural and man-made disasters. Specifically, this guidance should clearly articulate what constitutes an acceptable rehabilitation program versus a compensation program, including an explanation of the implications of each program design; clarification of the legal and financial requirements with which states must comply; and an explanation of the types of program elements that may trigger federal environmental and other requirements. 
Coordinate with the Federal Emergency Management Agency to ensure that the new guidance clarifies the potential options, and limitations, available to states when using CDBG disaster assistance funds alongside other disaster-related federal funding streams. We provided a draft of this report to the Secretary of the Department of Housing and Urban Development (HUD) and the Secretary of the Department of Homeland Security (DHS) for comment. We received written comments from HUD, which are provided in appendix II. In a letter signed by the General Deputy Assistant Secretary for Community Planning and Development, HUD partially agreed with our recommendation that the department issue written disaster recovery program guidance and improve coordination with FEMA to clarify the appropriate uses of CDBG funds with other disaster-related federal funds, such as HMGP funding. In short, HUD agreed to provide a report describing the four housing compensation programs that have been implemented in the past. The department also agreed to make its forthcoming multiyear evaluation report of the housing compensation programs in Louisiana and Mississippi publicly available. HUD did not agree that providing further technical or binding guidance comparing housing compensation and housing rehabilitation designs was the correct action at this time. The department also stated that additional coordination with FEMA would be far more useful if the role of the CDBG program in disaster recovery were regularized. While HUD stated that it had no issues with the general direction of the recommendation, the department provided additional comments expressing three main concerns. These concerns are outlined below along with our response. First, HUD stated that compensation programs are not an eligible CDBG activity—except when acquiring property for a public purpose—unless the Secretary grants a statutory waiver to allow it. 
While HUD has issued guidance covering all aspects of housing rehabilitation programs, the department stated that it has issued no formal guidance for compensation programs because such programs have been seldom used and each compensation design has been different. Furthermore, because the requirements for compensation programs are tailored to the grantee’s specific program design, HUD does not consider this to be a “fruitful area for general guidance.” However, HUD agreed that it may be useful to compare already implemented housing compensation programs to a typical rehabilitation program to identify key areas where policies have differed and examine the application of environmental reviews and other requirements. As stated in our report, compensation programs have been rare. In our view, that fact helps highlight the importance and need for the development of written CDBG disaster assistance guidance for housing compensation program design. While each compensation program may have its own unique design features, we continue to believe that HUD could improve its assistance to states by issuing guidance that clearly articulates the applicable legal and financial requirements, as well as the types of program elements that may trigger federal environmental and other requirements. We support HUD’s suggestion to compare past compensation programs with a typical rehabilitation design to identify differences in policies and the application of environmental requirements. The results of such a comparative study would contribute to the department’s development of new written guidance. Second, HUD stated that it is currently conducting a multiyear evaluation of the housing compensation programs in Louisiana and Mississippi—the results of which are expected to clarify whether HUD will support housing compensation programs in the future. 
For this reason, the department hesitated to develop guidance that would be “premature” if issued prior to completion of the evaluation. The results of HUD’s evaluation should provide valuable insights on the effectiveness of compensation programs for disaster recovery. We agree that HUD should wait to develop guidance until it completes its evaluation. However, if the department chooses to continue to allow housing compensation programs, we continue to stand by our recommendation that the department issue written guidance. Similar to the comparative study discussed earlier, the results of this multiyear evaluation would help to inform HUD’s efforts to develop written guidance for housing compensation programs. Third, HUD stated that the CDBG program is not a formal part of the federal government’s disaster recovery programs. The department anticipates that upon a presidential review of disaster recovery programs, the current administration may choose to either relieve the CDBG program of any disaster recovery role or grant it a permanent place among the array of federal assistance programs available to states for disaster recovery. If the latter happens, HUD stated that it would issue permanent regulations and supporting guidance. In addition, the department stated that it would be better positioned to coordinate with FEMA in advance of an event, rather than waiting for Congress to grant CDBG disaster assistance funds in the aftermath of an event. As we noted in our report, Congress has turned to the CDBG program to provide disaster assistance to states at least 20 times over the past two decades. In response to HUD’s comment, we encourage the department to continue to engage the administration on this issue. 
If the CDBG program continues to assume a disaster recovery role, we reiterate our recommendation that HUD issue written guidance for housing compensation programs, including, among other things, an explanation of program elements that trigger federal environmental reviews. We continue to believe the issuance of written HUD guidance that clearly articulates the differences between a compensation program and a rehabilitation program—including an explanation of the types of program elements that may trigger federal environmental reviews—will better aid state and local governments as they develop their plans for housing recovery efforts and disburse federal disaster assistance to residents. In addition, as long as the CDBG program continues to be a primary vehicle for distributing federal disaster assistance, we believe increased coordination between HUD and FEMA to ensure that the new guidance clarifies the potential options and limitations of using CDBG disaster assistance funds alongside other disaster-related funds would further aid state and local governments as they navigate the complexities of multiple federal disaster recovery program resources. Together, the implementation of these two recommendations should help to create clear, concise, and consistent federal messages to state and local governments and help to minimize the misunderstandings, confusion, and program delays that Louisiana officials experienced after the 2005 Gulf Coast hurricanes. DHS provided only technical comments, which were incorporated as appropriate. We also provided drafts of the relevant sections of this report to Louisiana and Mississippi state officials involved in the specific examples cited in this report. Both states provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. 
We will then send copies of this report to the Secretary of Housing and Urban Development, the Secretary of Homeland Security, other interested congressional committees, and state officials affected by the 2005 Gulf Coast hurricanes. We will make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Individuals who made key contributions to this report are listed in appendix III. To examine how Gulf Coast states allocated their share of Community Development Block Grant (CDBG) funds, we focused our review on the states of Louisiana and Mississippi—the states most directly affected by the 2005 Gulf Coast hurricanes. To determine how the two states prioritized their rebuilding efforts and allocated their share of CDBG funds, we first identified the amount of funds provided to each state by reviewing Federal Register notices and the housing damage estimates that were used to determine each state’s allocation. Housing damage estimates were based on data from the Federal Emergency Management Agency (FEMA) and the Small Business Administration and were compiled in cooperation with the Office of the Federal Coordinator for Gulf Coast Rebuilding within the Department of Homeland Security and the Department of Housing and Urban Development (HUD). We also reviewed federal statutes, regulations, and notices governing the use of CDBG funds and interviewed officials in HUD’s Community Planning and Development division regarding their roles and responsibilities in allocating and distributing CDBG disaster funds to the states. 
To identify Louisiana’s and Mississippi’s priorities and how those priorities changed over time, we obtained and reviewed state planning documents and budget data from April 2006 to September 2008 and interviewed state program and budget officials responsible for administering and managing CDBG programs in Louisiana and Mississippi. We assessed the reliability of the budget data by reviewing the data for completeness and internal consistency, verifying totals, and interviewing state officials responsible for preparing budget reports. We observed changes in states’ budget data format and categories over time. For example, each state categorized its unallocated amount of CDBG funds differently and changed the reporting format between 2006 and 2008. To present the data from both states in a common set of budget categories, we consolidated periodic reports to obtain cumulative values and collapsed or disaggregated budget categories. We also consulted with state officials to verify interpretation of budget categories and reporting periods, to verify identification of instances where reporting formats changed, and to obtain confirmation that our reformulation of categories and amounts was acceptable. The periods covered by the two states’ reports are not identical, but the difference does not exceed one month. We determined that the data were sufficiently reliable for the purposes of this report. To determine what challenges states faced with their housing recovery programs, we relied primarily on testimonial evidence from key federal officials at HUD headquarters in Washington, D.C.; HUD field offices in New Orleans, Louisiana, and Jackson, Mississippi; the Office of the Federal Coordinator for Gulf Coast Rebuilding; and FEMA, as well as key state officials in Louisiana and Mississippi. 
We corroborated testimonial evidence with documents and data that we received from key federal and state officials including federal guidance and regulations related to HUD’s CDBG program and FEMA’s Hazard Mitigation Grant Program (HMGP), relevant environmental statutes and regulations, state planning documents, state program and budget data, and intergovernmental correspondence. We also interviewed staff from ICF International—the contractor Louisiana hired to manage the state’s Road Home housing program. To examine the human capital challenges Louisiana and Mississippi encountered and their efforts to address those challenges, we interviewed state program and budget officials responsible for administering and managing CDBG disaster funds. We obtained and analyzed information on state agency CDBG budgets, staffing levels, and organizational changes undertaken by the two states in the aftermath of the 2005 Gulf Coast hurricanes. We reviewed reports completed by the HUD Inspector General and HUD’s Community and Planning Development division and interviewed key staff to capture their observations. In addition, we reviewed relevant congressional statements and testimonies and coordinated our work with the HUD Inspector General and with state audit offices. We also drew upon previous work we have conducted on Gulf Coast rebuilding efforts, emergency response, capacity issues and CDBG-funded disaster programs. In Louisiana at the state level, we spoke with officials at the Louisiana Office of Community Development (OCD), which was the official grantee of HUD CDBG disaster funds for the state. Within OCD, we met with officials in the Disaster Recovery Unit, which was the agency division responsible for administering and managing the state’s share of CDBG disaster recovery funds. We met with officials at the Louisiana Recovery Authority, which served as a policymaking and coordinating body for recovery efforts throughout the state. 
We also met with officials in the Governor’s Office of Homeland Security and Emergency Preparedness, which was the agency responsible for administering FEMA HMGP funds provided to the state for mitigation projects. In addition, we met with key staff in the Office of the Louisiana Legislative Auditor to discuss their past and ongoing work evaluating the state’s housing recovery program and their observations on OCD’s human capital challenges. In Mississippi at the state level, we spoke with officials at the Mississippi Development Authority (MDA), which was the official grantee of HUD CDBG disaster funds for the state. We met with key staff in the Governor’s Office of Recovery and Renewal, which served as a policymaking and coordinating body for recovery efforts throughout the state. Also, we met with officials at the Mississippi Emergency Management Agency, which is the entity responsible for administering FEMA HMGP funds provided to the state for mitigation projects. In addition, we met with the Mississippi Office of the State Auditor to discuss their observations of the state’s housing recovery program and their relationship with the HUD Inspector General’s office on audits of MDA’s human capital capacity. We conducted this performance audit from June 2007 through April 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We requested comments on a draft of this report from the Department of Housing and Urban Development and the Department of Homeland Security. We received written comments from HUD, which are included in appendix II. DHS provided only technical comments, which were incorporated as appropriate. 
We also provided drafts of the relevant sections of this report to state officials in Louisiana and Mississippi and incorporated their technical comments as appropriate. Stanley J. Czerwinski, (202) 512-6806 or [email protected]. Major contributors to this report were Michael Springer, Assistant Director; David Lutter; Susan Mak; and Leah Q. Nash. Jessica Nierenberg, Melanie Papasian, Brenda Rabinowitz, and A.J. Stephens also made key contributions to this report.
Almost 4 years after the 2005 Gulf Coast hurricanes, the region continues to face daunting rebuilding challenges. To date, $19.7 billion in Community Development Block Grant (CDBG) funds has been appropriated for Gulf Coast rebuilding assistance--the largest amount in the history of the program. GAO was asked to report on (1) how Louisiana and Mississippi allocated their shares of CDBG funds, (2) what difficulties Louisiana faced in administering its housing recovery program, and (3) what human capital challenges Louisiana and Mississippi encountered and the efforts taken to address those challenges. GAO interviewed federal and state officials and reviewed budget data, federal regulations, and state policies and planning documents. Louisiana and Mississippi received the largest shares of CDBG disaster funds and targeted the majority toward homeowner assistance, allocating the rest to economic development, infrastructure, and other projects. Between 2006 and 2008, Louisiana's total allocation devoted to housing increased from 77 to 86 percent while Mississippi's decreased from 63 to 52 percent as the state focused on economic development. With homeowners as the primary focus, Louisiana initially adopted a plan that linked federal funds to home reconstruction and controlled the flow of funds to homeowners, while Mississippi paid homeowners for their losses regardless of their intentions to rebuild. This helped Mississippi avoid challenges that Louisiana would encounter, but with fewer assurances that people would actually rebuild. Louisiana's approach to housing recovery created a program that incorporated certain elements from two different models--compensation and rehabilitation--funded with multiple federal funding streams. While there is no written guidance that distinguishes between the two models, the Department of Housing and Urban Development (HUD) explained the major differences. 
In a rehabilitation model, funds are used explicitly for repairs or reconstruction, requiring site-specific environmental reviews. In contrast, a compensation program disburses funds directly to homeowners for damages suffered regardless of whether they intend to rebuild and does not trigger site-specific environmental reviews. Federal guidance was insufficient to address Louisiana's program and funding designs. Two major problems stemmed from the state's approach. First, HUD and the state disagreed as to whether the incremental disbursement of funds subjected homeowners' properties to environmental reviews. Despite many iterations of the program design, HUD ordered the state to cease and desist, leading it to abandon its original plans and issue lump-sum payments to recipients. Continual revision and re-submittal of the design contributed to a 12-month evolution of the program. Second, conflicting federal determinations hindered coordination of CDBG and the Federal Emergency Management Agency's (FEMA) Hazard Mitigation Grant Program (HMGP) funds. According to state officials, the Federal Coordinator for Gulf Coast Rebuilding advised them to use most of the HMGP funds to acquire properties through their housing recovery program. FEMA rejected this plan, in part, because it determined that the program gave preference to the elderly. However, HUD is subject to similar legal requirements and did not find the program discriminatory. Louisiana changed its plans and used HMGP funds for a home elevation program. In sum, it took FEMA and the state over a year to reach agreement, delaying assistance to homeowners. In the immediate aftermath of the 2005 hurricanes, Louisiana and Mississippi lacked sufficient capacity to suddenly administer and manage CDBG programs of such unprecedented size. Both states created new offices to direct disaster recovery efforts and hired additional state agency staff and private contractors to implement homeowner assistance programs.
The Head Start program was established in 1965 to promote the school readiness of low-income children by enhancing their cognitive, social, and emotional development through a range of individualized services provided to preschool-aged children and their families. The program is overseen by the Department of Health and Human Services’ (HHS) Administration for Children and Families (ACF), which awards grants directly to a network of about 1,600 public and private nonprofit and for-profit agencies to help pay for health, educational, nutritional, social, and other services to primarily low-income children from birth to age 5, and their families. ACF monitors the success of local agencies that receive Head Start grants in meeting Head Start program goals and complying with program requirements by conducting on-site monitoring reviews of grantee programs every 3 years; administering an extensive, annual Program Information Report (PIR) survey of grantees; and reviewing required financial reports and annual audit reports. Reviewers assess Head Start grantee compliance with all program requirements, including those specified in the Improving Head Start Act, the Head Start Program Performance Standards, and other relevant federal, state, and local regulations. These requirements consist of administrative, financial management, and other standards, such as using age-appropriate materials to help children learn to recognize letters and numbers, and providing safe play areas. By law, each Head Start grantee must receive a full review at least once every 3 years. New grantees must receive a full review after completion of their first year of providing Head Start services and at least once every 3 years thereafter. ACF’s policy is to conduct these reviews on-site. Except for new grantees, Head Start grantees are reviewed on a rotating basis, and approximately one-third of all grantees are monitored each year. Reviews are conducted by a team of reviewers led by a federal Head Start program specialist from one of ACF’s 10 regional offices. 
In our February 2005 report, we identified a number of weaknesses in ACF’s oversight of Head Start grantees. Specifically, we found that ACF did not have a strategy for bringing information from its various monitoring processes together in order to comprehensively assess Head Start program risks, and identified problems with each of its strategies for monitoring grantees. We found that ACF did not have procedures to ensure that on-site reviewers performed their responsibilities in accordance with established guidelines or to ensure that managers and staff in ACF regional offices were held accountable for the quality of the on-site reviews. We also found that ACF did not have procedures for independently verifying data submitted by grantees in its annual PIR survey, which, in addition to providing information about grantee performance, is used to provide information to Congress and the public about important program characteristics, such as program design and staffing, and numbers and characteristics of children enrolled and attending Head Start programs nationwide. Finally, we found that ACF made limited use of financial reports and audits to ensure that all grantees effectively resolved financial management problems and had made little use of its authority to terminate grantees that did not meet program, financial management, and other requirements, and fund new grantees to replace them. We made a number of recommendations to address the problems that we identified, including:
1. producing a comprehensive risk assessment of the Head Start program;
2. strengthening on-site reviewer training and certification procedures;
3. developing a more consistent approach to conducting on-site reviews;
4. implementing a quality assurance process that ensures on-site reviews are conducted within established guidelines and ACF managers are held accountable for the quality of on-site reviews;
5. ensuring the accuracy of PIR survey data by independently verifying key data submitted by grantees, or ensuring that grantees have systems in place to collect and report accurate, verifiable data;
6. making greater use of available information on the status and use of grant funds; and
7. taking steps to obtain competition for grants that are being refunded if it is determined that current grantees have failed to meet program, financial management, or other requirements.
In 2006, ACF reorganized its regional offices in order to streamline program operations. (See fig. 1.) Currently, ACF regional program staff report directly to its central program offices, rather than to regional office administrators. The regional administrators no longer have direct authority to manage individual program activities. As a result, Head Start program specialists are directly accountable to central office management. Also, financial management specialists, who monitor financial management of all ACF grants, including Head Start grants, are now directly accountable to the central Office of Grants Management, which is located within the ACF Office of Administration. The Office of Administration provides support to ACF’s program offices on a range of administrative issues, such as managing personnel, information resources, procurement, and grants. The Office of Grants Management carries out the Office of Administration’s grants administration duties and provides leadership and technical guidance to ACF program and regional offices on grant operations and grants management issues. The Head Start program was revised and reauthorized in the December 2007 Improving Head Start Act. Prior to this, the program was last reauthorized in 1998, for fiscal years 1999 through 2003. In the years between the 1998 and 2007 reauthorizations, the program remained funded through the annual appropriations process. 
ACF has not undertaken a comprehensive assessment of risks to the federal Head Start program, despite our 2005 recommendation, and little progress has been made in ensuring that the data from its annual PIR survey of grantees, which could facilitate such an assessment, are reliable. Although ACF has two systems in development to address risk assessment, neither system provides for a comprehensive, programwide risk assessment for the Head Start program. Further, both systems depend to some extent on unreliable data from the annual PIR survey of grantees. Although ACF has known about the problems with PIR survey data, it has done little to address them. ACF has not undertaken a comprehensive assessment of risks to the federal Head Start program. Risk assessment is one of five internal control standards that together provide the foundation for effective program management and help government program managers achieve desired results through effective stewardship of public resources. To carry out a comprehensive risk assessment, program managers need to identify program risks from both external and internal sources, estimate the significance of these risks, and decide what steps should be taken to best manage them. Although such an assessment would not assure that program risks are completely eliminated, it would provide reasonable assurance that such risks are being minimized. For the Head Start program, this might include anticipating and developing strategies to minimize the impact of changes in resources available to oversee and assist local grantees, or to develop initiatives to address social and demographic changes that may result in changing service needs for families with young children. 
In 2005, we reported a similar finding, noting that despite efforts to collect information and assess risks, ACF did not have a strategy for bringing this information together in order to comprehensively assess program risks, and recommended that ACF produce a comprehensive risk assessment of the Head Start program. To address our 2005 finding, ACF has attempted to bring together information about local programs and to assess risk on a grantee-by-grantee basis; however, it has yet to develop a systematic approach to assessing risks that may arise from other sources and, if undetected, could hinder ACF’s ability to achieve Head Start program objectives, or to developing strategies to prioritize and address risks proactively. ACF is in the process of developing two systems that may help it assess programwide risks; however, significant limitations or uncertainty exist with respect to each that could constrain ACF’s ability to use them to conduct a meaningful risk assessment. The first system, the refunding analysis system, is a process whereby ACF evaluates the performance of individual grantees each year before it refunds, or renews, their grants. The second system, the Head Start Enterprise System (HSES), is still under development. As envisioned by ACF, the HSES may one day integrate all available Head Start program data into a single, interactive database that could facilitate analysis across many program areas. The first system is limited in how it could be used for risk assessment, and the completion of the second system is uncertain. The refunding analysis system is limited because it assesses risk from only one source—grantee performance—and does not assess other types of risk, such as inadequate procedures for ensuring that staff follow policies for monitoring grantee activities or for minimizing payments that are not in accordance with program requirements. 
Although the planned HSES has the potential for assessing a wider range of potential program risks, it has been in development for at least 4 years, and it is unclear when or how it will actually be used. The refunding analysis system is an evolving system for evaluating grantee financial management and performance annually and determining which grantees require additional assistance. Each month, regional program staff who are responsible for overseeing Head Start grantees bring together and assess all available information about grantees that are scheduled to reapply for their grants. The information is reviewed by grants management staff and Head Start central office staff. Although the refunding analysis system is intended to provide analysis of grantee performance prior to refunding, it serves primarily as a means of identifying grantees that need assistance and not as a means of discontinuing grants for underperforming grantees. Although the refunding analysis system allows grantees that are considered high risk to be brought to the attention of Head Start program managers, it does not allow for a broader assessment of other sources of risk, such as those we previously identified. For example, in 2005, we identified improper payments to contractors as a source of potential risk for the Head Start program. Under the Improper Payments Information Act of 2002, agencies are required to annually identify programs and activities that may be susceptible to significant improper payments; provide Congress with the annual estimated amount of improper payments; and, for programs and activities with estimated improper payments that exceed $10 million, report on actions taken to reduce improper payments. 
In addition, the Improving Head Start Act requires the Secretary of HHS to submit a report to the appropriate congressional committees certifying that HHS has completed a risk assessment to determine which ACF programs are at significant risk of making improper payments, and describing the actions HHS will take to reduce these improper payments. Since fiscal year 2004, ACF has taken limited action to minimize improper payments by collecting data on payments to grantees that do not meet the requirement that at least 90 percent of the children who are enrolled in Head Start programs must be from low-income families. However, ACF officials stated that, due to resource constraints, they do not have plans to track other types of improper payments, such as overpayments to grantees that serve fewer children than are reported to be enrolled in their local Head Start programs and excessive compensation paid to Head Start program staff. Overpayments to grantees with programs that have enrollment below their funded levels are not uncommon. In April 2007, the HHS Inspector General reported that in the 2006 program year, fewer than half (40 percent) of Head Start grantees were fully enrolled and that enrollment levels by grantee ranged from full enrollment to as low as 68 percent of funded enrollment. Overall, this translated into 5 percent of Head Start slots that were funded but not filled. The HHS Inspector General also found that only 11 percent of grantees had reported enrollment levels to ACF that matched their actual enrollment levels, and questioned the ability of 26 percent of grantees to maintain accurate attendance records and to determine enrollment accurately. The HHS Inspector General has also conducted a series of audits of Head Start programs that identified unreasonable levels of compensation to Head Start program executives. 
Although ACF requires grantees to provide information about Head Start program staff salaries and compensation as part of their annual refunding applications, ACF has not estimated the extent to which excessive compensation may be a problem, or verified the extent to which information provided by grantees is accurate. In developing its new systems, ACF plans to use data from its annual PIR survey of grantees, which several studies over the past 12 years have determined to be unreliable. Specifically, the refunding analysis system uses PIR data on grantees’ enrollment of children with disabilities and provision of medical and dental treatments as factors when determining the risk that an individual grantee will fail to meet program standards. In addition, ACF plans to use the PIR database in the HSES. Our February 2005 report found discrepancies in the 2003 PIR database. During our current review, we conducted similar tests to check for data consistency in the 2006 PIR database and found that it continues to provide some inconsistent data. Our findings are consistent with a more recent study funded by ACF that was undertaken in response to our 2005 recommendation to address the accuracy of PIR data. Specifically, ACF’s 2007 study found that the PIR data reported by individual Head Start grantees are frequently inaccurate and may be unreliable for grantee monitoring or risk assessment purposes. The 2007 study found that data submitted by grantees for the PIR survey may be unreliable due to the length and complexity of the survey. The survey includes over 130 questions and provides data on program operations, enrollment, staff and their qualifications, services for children and families, and other information used for policymaking and accountability. All Head Start grantees are required to submit PIR data every year. 
ACF’s 1995 study on PIR data validity also suggested that the length of the survey reduced the accuracy of PIR data submitted by grantees, and its 2007 study further suggested that instructions provided to grantees for completing the PIR may be unclear and could lead to grantees submitting incorrect data. Based on our survey of program directors, we estimate that over half of all Head Start grantees spend more than 24 hours to complete the PIR. In our survey, we solicited comments from program directors about potential obstacles to completing the PIR and they cited various obstacles, such as unclear instructions and questions that may change from one year to the next. In addition to using the PIR data to assess progress of individual grantees, ACF aggregates the PIR data to provide national, regional, and state-level statistics on Head Start. ACF uses the aggregate data to report to Congress and the public on the performance of the Head Start program. Head Start grantees report using the PIR survey to help manage their programs. A majority of grantees report using the PIR survey to help ensure compliance with federal laws and regulations, compare the performance of their program to national or regional benchmarks, and observe trends in their own performance over time. Moreover, the Improving Head Start Act requires ACF to use the PIR as one determinant of whether grantees meet program and financial management requirements and standards, as part of a new system for renewing Head Start grants. Reliance on systems that contain inaccurate data can mislead policymakers and program managers and result in inappropriate decisions. In its 2007 study, ACF asserted that the national statistics produced by the PIR present a reasonable estimate of the services provided by the Head Start program even though the data collected from individual grantees are unreliable. 
However, if there are actual errors in grantee-reported data, the nationally-reported PIR statistics might present a false picture of the services provided by the Head Start program. For example, the study estimated that grantees over-reported the number of children that received medical exams by 3.6 percentage points and noted that problematic record-keeping on the part of grantees or physicians might account for some of the discrepancy between the PIR statistics and the study’s estimates. Given that the Head Start program provides services to more than 900,000 children, over-reporting in the number of children that received medical exams by 3.6 percentage points could mean that as many as 32,000 fewer children may be receiving medical exams than the nationally-reported PIR statistics indicate. The 2007 study found that ACF lacked procedures to independently verify the accuracy of the data. Although ACF has built internal consistency checks into the PIR database, these checks will not detect inaccurate data as long as the grantee reports data consistently throughout its PIR report. In our 2005 report, we also found that ACF lacked a data verification process and recommended that ACF either (1) independently verify key data submitted through the survey or (2) ensure that grantees have systems in place to collect and report accurate, verifiable data. The 2007 study offered several recommendations for enhancing the reliability of PIR data. Most of the study’s recommendations would address the accuracy of data reported by individual grantees, to allow ACF to better assess grantee performance. For example, the study recommends that ACF perform regular validation of the PIR data submitted by grantees, possibly during the triennial on-site monitoring reviews. 
Alternatively, one recommendation focuses on ACF’s use of the PIR as a tool for generating national statistics, and suggests accomplishing this through a more limited survey to a random sample of grantees, thereby reducing the overall burden on grantees. ACF officials told us that they have not yet developed any plans to implement the specific recommendations from the 2007 study. ACF has implemented several changes aimed at improving the quality and consistency of its on-site reviews of Head Start grantees, in response to our 2005 recommendations. These changes directly address our previous findings regarding the lack of procedures for ensuring that review teams were following on-site review protocols or for ensuring that managers and staff in ACF regional offices are held accountable for the quality of the reviews. Specifically, ACF has implemented a more rigorous process for certifying reviewers and new processes to improve the consistency of reviews, and is working to establish a system for evaluating reviews on an ongoing basis. ACF has implemented more rigorous procedures for ensuring that a sufficient number of qualified reviewers are available to help conduct required on-site reviews of Head Start programs. Danya International, Inc. (Danya) manages the on-site review process under a contract with ACF and monitors whether reviewers meet all of the necessary qualifications, such as having a bachelor’s degree and at least 3 years of work experience in a field related to early childhood development or public program management, and whether they comply with ongoing training and performance requirements. Danya’s policies provide that reviewers who do not satisfactorily meet the necessary qualifications or who fail to comply with ongoing requirements for reviewers cannot participate in an on-site review. 
According to Danya, 174 reviewers were placed on hold or removed from the reviewer pool in fiscal years 2006 and 2007 because they did not meet the necessary qualifications. Procedures for recruitment of on-site reviewers are more systematic than in the past. Previously, ACF relied on informal networking among individuals affiliated with the Head Start program to recruit new reviewers. Now, Danya procedures provide for an ongoing assessment of the composition of the current reviewer pool and the numbers of reviewers needed to carry out reviews in a given year, and a targeted recruitment strategy to address any shortfalls in the numbers or types of reviewers needed. For example, to address a shortfall of reviewers who speak Spanish or have experience in Native American issues, Danya representatives may attend conferences to recruit new reviewers with needed special skills and experience, such as conferences sponsored by the National Hispanic Head Start Association or the National Indian Education Association. Each month, a three-person panel of qualified reviewers screens all new applications to determine which applicants appear to have the required skills and experience. Applicants who meet the initial screening requirements are asked to provide more detailed employment and education information, which is then verified by Danya. Beyond the basic qualifications for becoming an on-site reviewer, Danya procedures require that new and current reviewers alike meet minimum training and performance requirements before they are assigned to a review team. New reviewers are required to complete a basic training course and must successfully complete a Head Start monitoring review as a trainee under the supervision of an experienced coach. 
All reviewers are required to complete any new training that may be specified during a given fiscal year, stay informed about changes to the review process, and successfully complete online tests in writing and computer literacy. Also, members of the reviewer pool who are employees of Head Start programs, known as peer reviewers, are not eligible to participate in on-site reviews if the programs they work for are in serious noncompliance with program requirements. ACF also requires the review team leader and report coordinator to complete evaluations for all members of the review team; in addition, each team member must assess the performance of several colleagues on the team. If a reviewer’s performance is rated as unsatisfactory by two or more raters on a single review, Danya will follow up to verify the raters’ assessments, if necessary, and consult with ACF to decide whether the reviewer should be placed on probation and allowed to participate in an additional review for further assessment, or whether the reviewer should be dropped from the reviewer pool. In fiscal year 2006, ACF implemented new procedures to improve the quality and consistency of on-site reviews. Concerns about the lack of independence of on-site review team leaders prompted changes in how team leaders are assigned. As a result, each review team is now led by ACF program staff from a region other than the grantee’s home region. Concerns regarding inconsistencies in the findings cited by different teams of reviewers for the same grantees led to changes in how findings are developed and reported. The review teams collect only data and facts during their review; they do not draw conclusions from the information they gather or prepare the final report. Instead, reviewers record their observations in a centralized, Web-based system, noting any potential areas of noncompliance. The reviewers’ notes are then submitted for centralized review by ACF. 
The draft review report is prepared centrally and includes findings of noncompliance with program requirements or deficiency—a more serious form of noncompliance that can lead to termination of the grant. The draft report is then forwarded to the grantee’s home region and to the review team leader’s region for review and comment. Any comments received are incorporated into the report as necessary to clarify the reviewers’ observations, and the final report is signed by the Director of the Office of Head Start and then issued to the grantee. ACF has also revised its on-site review protocols to encourage a more efficient and uniform approach to conducting on-site reviews. In fiscal year 2007, ACF implemented a more uniform set of on-site review protocols, under which every grantee is asked the same questions relating to 10 distinct program areas, such as health services, fiscal management, and education and early childhood development services. The protocols encourage a more targeted assessment of whether or not grantees are in compliance with program regulations, and no longer provide for reporting about program strengths, such as the provision of services that extend beyond what is required by regulation. Prior to the on-site review team’s visit, grantees receive a 30-day notification and are asked to prepare a uniform set of documents for reviewers to examine before meeting with the grantee. During the review, reviewers are required to enter the information they gather into a central, Web-based system, which facilitates sharing evidence among reviewers and tracking whether reviewers have completed all necessary tasks. ACF has also implemented uniform corrective action periods and mandatory follow-up visits when grantees are found to be either noncompliant or deficient. 
At the time of our previous review, ACF relied on grantees to self-certify that they had corrected any problems identified during audits or on-site reviews and only made follow-up visits to grantees that had been found deficient. Now, when grantees are found to be noncompliant with Head Start program regulations, ACF allows them 90 days to resolve the underlying problems and bring their programs into compliance. If found deficient, grantees are given 6 months to fully resolve deficiencies before ACF will take action to terminate their grants, though ACF will allow additional time if a grantee can justify its request for an extension. At the end of the relevant corrective action period, ACF conducts follow-up reviews in order to verify that grantees have resolved the problems identified during the initial on-site review. ACF officials said that most grantees have adjusted well to the new on-site review process, although there were some initial negative reactions from the grantees when the new procedures were first introduced. ACF officials told us that they encourage grantees to provide feedback on the review process and on the conduct of on-site reviewers. They have also established formal procedures for review team leaders to report violations of the on-site code of conduct by reviewers and to replace reviewers while the team is on-site, if necessary. Based on our survey results, we estimate that 9 percent of grantees encountered problems with reviewers during their most recent review that required outside intervention. For example, one respondent reported several problems, including trouble scheduling the review, a reviewer with a conflict of interest, and overly aggressive reviewers. Our survey results suggest that grantees have mixed views about the revised on-site monitoring procedures. 
Generally, directors of programs reviewed under the revised procedures had positive views of the reviewers but had less positive views of specific changes in the procedures and the extent to which their most recent on-site review had led to improvements in their Head Start programs. Specifically, most directors of programs that were reviewed under the revised procedures were very or somewhat satisfied with how their most recent on-site reviews were conducted. Most directors also found that the review teams had adhered to the new protocols and that the review teams demonstrated an understanding of program requirements. They were almost evenly split over whether having program specialists from outside their home region lead their review had a positive or negative effect on the review process, but about three-quarters thought that the focus on reporting only noncompliance had a negative or very negative effect on the on-site review process. When asked the extent to which their most recent reviews led to program improvements in each of the 10 program component areas, directors generally reported that the review led to few or no improvements in most program areas. The exceptions were health services and program design and management, which directors generally thought had improved to a greater extent than other program areas as a result of their most recent reviews. Although we cannot directly attribute any improvements in Head Start program management to these changes, our analysis of on-site review data does suggest that there may have been some improvements in Head Start program management since our last review. In 2005, we reported that about 76 percent of all grantees that had on-site reviews in 2000 had been found to be out of compliance with one or more standards in the areas of fiscal management, program governance, or record-keeping and reporting. 
We also reported that, in subsequent reviews, 53 percent of those same grantees had been cited again for problems in these same program areas. Our more recent analysis of on-site review data shows that, for grantees reviewed in 2003, about 71 percent were found to be out of compliance with standards in the same three areas of program management. In 2006, 29 percent of grantees reviewed were cited again for problems in these three areas. In response to our 2005 recommendation, ACF has begun to implement procedures for assessing the quality of on-site reviews. In 2006, ACF conducted a one-time study of the consistency of on-site reviews. This study revealed some differences in the findings identified by different teams of reviewers that visited the same grantees. However, ACF determined that none of the non-duplicated preliminary findings identified by re-review teams were serious enough to meet the regulatory definition of a program deficiency. The Improving Head Start Act further requires that on-site reviews be conducted in a manner that includes periodic assessments of the reliability of the process. ACF is currently working to establish an ongoing system for evaluating its on-site review process that would address this new requirement. To improve Head Start grantee performance, ACF uses data from multiple sources to identify underperforming Head Start grantees. For example, it uses data to direct resources to grantees in need of training and technical assistance. It also uses data to identify high-risk grantees that may need additional oversight. However, ACF’s use of data to improve grantee performance has faced limitations. For example, ACF does not have clear criteria for determining which grantees need additional oversight. In addition, ACF had limits on its ability to obtain competition for grants prior to the recent reauthorization of Head Start. 
ACF uses data from multiple sources to track Head Start grantee performance and to identify grantees with program weaknesses. In addition to findings from on-site monitoring reviews, ACF also assesses grantees’ performance through analysis of their audit reports, financial reports, and other sources of information. One of the ways that ACF uses data to assess grantee performance is by calculating risk levels for each grantee, through its refunding analysis system. ACF determines grantee risk levels by using various indicators, such as findings from on-site monitoring reviews, turnover of key program staff, PIR survey data, and negative media coverage. After ACF identifies underperforming grantees, it targets resources from its training and technical assistance (T/TA) network to help these grantees address their program weaknesses. ACF gives grantees with deficiencies priority for receiving services from the T/TA network. Priority for receiving services is then given, in order, to other grantees in noncompliance or at risk for deficiencies, grantees new to providing Head Start services, and grantees with new directors or key staff. Grantees with deficiencies are sometimes offered on-site assistance to address their program weaknesses. The T/TA network also assists grantees through workshops, cluster training for groups of grantee program staff and management, presentations at local and national conferences, and other activities. For example, T/TA network staff members affiliated with ACF’s regional office in Atlanta have provided cluster training to grantees covering topics like literacy, domestic violence, and guidance related to ACF’s on-site monitoring review process. In addition, all grantees are required to submit an annual T/TA plan to ACF, identifying their T/TA needs for the coming year. Grantees can use multiple data sources to identify their T/TA needs, including on-site monitoring review reports and community assessments. 
ACF uses the refunding analysis system as an opportunity to identify program weaknesses before it reviews grantees’ annual grant applications. If the process identifies an underperforming grantee—one designated as “high risk”—ACF may decide to initiate a special on-site review to determine whether the grantee is deficient and whether the grantee should ultimately be terminated. If the grantee is deemed deficient, ACF can require the grantee to correct the deficiencies within specified time frames, or to begin a quality improvement process, after which ACF will assess whether the grantee has corrected all deficiencies. In all cases, grantees that do not correct identified deficiencies are subject to termination proceedings. However, ACF’s criteria for deciding which grantees are subject to the special on-site review are unclear. These decisions are typically made on an ad-hoc basis, which may result in grantees with similar problems receiving inconsistent levels of oversight. We have reported that consistency is an essential component of ensuring performance accountability in federal grants. Prior to enactment of the Improving Head Start Act, statutory provisions limited ACF’s ability to terminate underperforming grantees at the time an on-site review or the annual refunding analysis showed inadequate performance. ACF was required to grant priority to existing grantees when making funding decisions, unless ACF determined that the grantee failed to meet program, financial management, or other requirements established by ACF. However, before ACF could terminate a failing grantee and open the grant to competition from other prospective grantees, ACF was required to provide the grantee with official notice and an opportunity for a hearing on its termination—or convince the grantee to relinquish its grant. Moreover, if a grantee appealed ACF’s termination decision, ACF was required to pay the grantee’s legal fees until the hearing process was completed. 
Recent work related to grants management has pointed to competition for grants as a way to facilitate grant accountability. For example, the Domestic Working Group, chaired by the Comptroller General of the United States, cited grant competition as a key area of opportunity for improving grant accountability. It noted that grant competition promotes fairness and openness in the selection of grantees, and that agencies can better ensure that grantees have the capability to efficiently and effectively meet grant goals by incorporating evaluation criteria focused on factors indicative of success into the competition process. The recent reauthorization of Head Start amends some of the requirements and procedures for refunding Head Start grantees, including the introduction of time limits for Head Start grants. This will give ACF the ability to open competition for Head Start grants to additional grantees, thereby enhancing ACF’s ability to remove severely underperforming grantees from the program, on a regular basis. In light of federal budget limitations and increasing expectations for program accountability, ACF’s ability to demonstrate effective stewardship over billions of dollars in Head Start grants has never been more critical. Since our 2005 review, ACF has made significant improvements in its procedures for monitoring local Head Start programs. In particular, the agency has taken steps to bring together information about individual grantees from various sources in order to identify those that are struggling to meet Head Start performance standards, and to target assistance to help these grantees strengthen their programs. This risk-based approach is promising and provides a foundation for a more strategic, comprehensive approach to managing the Head Start program. Nevertheless, ACF’s current initiatives do not yet constitute a comprehensive plan for managing program risks. 
Without a more comprehensive approach to identifying risks, threats to ACF’s ability to achieve Head Start program objectives will likely go undetected until a problem arises. For example, undetected improper payments could result in a severe reduction in the funds available to pay for services for children. Although ACF has made progress since we last reported in 2005 toward strengthening its oversight of the approximately 1,600 local agencies that receive Head Start program grants, its systems for doing so depend in part on data that are unreliable. If ACF does not act to address the weaknesses in its data, it cannot depend on its new systems to provide it with reliable information on grantee performance. The lack of reliable information about local program activities further compromises ACF’s ability to manage risks by limiting its ability to understand whether problems are isolated or national in scope, as well as whether they arise from individual grantee failures or from weaknesses in the broader structure of the program itself. A lack of sound information also calls into question the credibility of ACF’s reporting to Congress and the American public on the services provided by the Head Start program. Even if ACF conducts a comprehensive risk assessment of the Head Start program and works to improve the accuracy of its data, it will still face challenges addressing risks posed by the most severely underperforming grantees. The annual refunding process could be used to link funding opportunities to performance. For example, if ACF develops clear and objective criteria for deciding which grantees are subject to a special on-site review, it could ensure that all grantees with similar problems receive similar levels of oversight. 
Without such criteria, special on-site reviews will continue to be conducted on an ad-hoc basis and, as a result, ACF may continue to fund poorly performing grantees that do not receive special on-site reviews without providing them the additional oversight they need. To improve its management and oversight of the Head Start program, we are making the following four recommendations to HHS’s Assistant Secretary for Children and Families: More fully implement our 2005 recommendation by developing a strategic and comprehensive approach to assessing Head Start programwide risks. A comprehensive, programwide risk assessment should take all program risks into account. Specifically, a comprehensive risk assessment should include an assessment of risks arising from external sources, such as social and demographic changes that may affect the availability or demand for Head Start program services, as well as from internal sources, such as underperforming grantees, differences in how regional offices implement program policies and procedures, or the availability of sound data to help manage the program. A comprehensive risk assessment should also include strategies for minimizing risks that could significantly limit the ability of ACF and grantees to deliver high-quality programs. Look for cost-effective ways to expand ACF’s efforts to comply with the Improper Payments Information Act of 2002 and to address our 2005 recommendation by collecting data on and estimating the extent of improper payments made for unallowable activities and other unauthorized purposes. These efforts should take into account various aspects of the program and should not be limited to improper payments to grantees that enroll too many children from families that do not meet the program’s income eligibility requirement. 
Take additional steps to ensure the accuracy of PIR data by determining which elements of the PIR are essential for program management and focus resources on a streamlined version of the PIR that would be required annually of all grantees, with the responses verified periodically. If additional information is needed to produce national estimates of a wider range of Head Start program services, ACF should include the relevant, additional data items in an expanded version of the PIR, which could be administered to a random, representative sample of grantees each year. Develop clear criteria for determining which grantees require more thorough reviews—such as special on-site reviews—as a result of its refunding analysis system. We provided a draft of this report to ACF for comment. The full text of ACF’s comments appears in appendix II. ACF agreed with two of our recommendations, and emphasized progress already made toward developing a comprehensive risk assessment process and toward reducing improper payments. Specifically, ACF noted that it plans to implement a programwide risk management process in early 2008. ACF also said that it has developed a new integrated data management system. While we agree that ACF’s planned programwide risk management process and integrated data management system are important initiatives that may facilitate programwide risk assessment, both systems have yet to be fully implemented and it remains to be seen how these systems will actually be used to proactively manage the Head Start program nationally. ACF also emphasized the progress it has made to reduce the frequency and amount of improper payments arising from participants’ ineligibility, and noted that its focus on eligibility is consistent with how ACF has implemented the Improper Payments Information Act of 2002 for other programs, and was approved by HHS and OMB management. 
We agree that ACF’s progress toward reducing the improper payment error rate due to Head Start program ineligibility from 3.6 percent in 2003 to 1.3 percent in 2006 is commendable, and acknowledge ACF’s commitment to reducing improper payments for the Head Start program. However, as noted in ACF’s comments, the agency’s improved monitoring efforts and planned risk management initiatives may help ACF to identify further areas for study related to improper payments, and we encourage ACF to pursue these areas, as practical. ACF agreed with our recommendation to review and streamline its annual PIR survey of grantees, noting that it will continue to use the results of its own PIR validation study to inform the design of future surveys, take steps to periodically verify grantee responses, and leverage technological improvements to capture program data more frequently and consistently. Finally, ACF said it will soon have the ability to define realistic criteria for determining which grantees require more thorough or special reviews, and noted that improvements in monitoring, and changes to the process for designating grantees resulting from reauthorization, should help enable it to do so. We are encouraged that ACF said it should have both its risk management process and its integrated data management systems operational this year. Both systems will play an important role in its efforts to better target its oversight. To answer our research objectives, we interviewed Administration for Children and Families (ACF) and Office of Head Start officials and their staff in Washington, D.C., and in the ACF regional offices in Philadelphia and Atlanta. We conducted interviews with staff from all of ACF’s 10 regional offices, and reviewed relevant documentation from each of these offices. 
We visited two regional offices—Philadelphia and Atlanta—to help us develop interview and data collection protocols and conducted telephone interviews with the remaining eight regional offices. To obtain grantees’ opinions on ACF’s oversight of the Head Start program, we administered a Web-based survey to a nationally representative sample of Head Start and Early Head Start program directors. Our target population consisted of directors whose programs were the subject of on-site reviews from October 2005 through March 2007, and were reviewed under ACF’s newly revised on-site review process. Based on data supplied by ACF and the contractor responsible for coordinating the on-site reviews, we identified a total population of 598 Head Start grantees. From this population, we selected a stratified random probability sample of 329 grantees. We stratified the population based on whether or not the grantee has any delegate agencies and the ACF region in which the grantee operates. We also included in our sample all grantees that participated in ACF’s 2006 one-time study of the consistency of its on-site reviews. With this probability sample, each grantee had a nonzero probability of being selected, and that probability could be computed for any grantee. Each grantee selected in the sample was subsequently weighted in the analysis to account statistically for all the grantees in the population. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. 
As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. All percentage estimates from this survey have margins of error of plus or minus 10 percent or less, unless otherwise noted. We developed our survey questions based on feedback that we received during three focus group sessions that we conducted with Head Start program directors whose programs were reviewed under the revised on-site review process. After we drafted the survey questionnaire, we conducted a series of six pretests by telephone to check that (1) the questions were clear and unambiguous, (2) terminology was used correctly, and (3) the information could feasibly be obtained. We made changes to the content and format of the questionnaire as necessary during the pretesting process. The survey was fielded between July 12, 2007, and August 20, 2007. Of the 329 program directors in our sample, 261 responded—for a response rate of 79 percent. To assess the changes ACF made to its on-site monitoring review process, we met with the contractor responsible for coordinating the reviews, as well as with representatives from the National Head Start Association, and reviewed the results of a study conducted by ACF to evaluate its on-site review process. We also analyzed the extent to which ACF’s on-site monitoring reviews identified repeat noncompliance by Head Start grantees, using 2003 and 2006 data from the on-site review database that is currently maintained by Danya International, Inc. (Danya). We used four data elements from this database for our analysis: Grantee ID, Fiscal Year, Review Type, and Core Area. We further limited our analysis to three core areas: program governance, record-keeping and reporting, and fiscal management. To assess the reliability of these data, we relied on our 2005 assessment of the reliability of the 2003 data and performed additional tests. 
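The weighting and confidence-interval logic described above can be illustrated with a short sketch. The strata below are invented for illustration (only the 598-grantee population total matches the report); the actual GAO estimates were computed from the real sampling frame and survey responses.

```python
import math

# Hypothetical strata: (population size N_h, sample size n_h, "yes" responses y_h).
# Stratum labels and counts are illustrative, not the actual GAO sampling frame.
strata = [
    ("delegates, region A",    (120, 60, 24)),
    ("no delegates, region A", (200, 90, 45)),
    ("delegates, region B",    (278, 111, 37)),
]

N = sum(Nh for _, (Nh, nh, yh) in strata)  # total population (598 here)

# Weighted estimate: each sampled grantee stands in for N_h / n_h grantees.
p_hat = sum(Nh * (yh / nh) for _, (Nh, nh, yh) in strata) / N

# Variance of a stratified proportion estimate, with a finite population
# correction (1 - n_h/N_h) applied within each stratum.
var = sum(
    (Nh / N) ** 2 * (1 - nh / Nh) * (yh / nh) * (1 - yh / nh) / (nh - 1)
    for _, (Nh, nh, yh) in strata
)
half_width = 1.96 * math.sqrt(var)  # normal-approximation 95 percent interval

print(f"estimate: {p_hat:.3f} +/- {half_width:.3f}")
```

With these invented counts the margin of error comes out under the plus-or-minus 10 percent threshold the report cites; larger sample sizes within each stratum shrink it further.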
Specifically, we conducted electronic testing of both the 2003 and 2006 data and found no missing or out-of-range entries for any of the four elements that we used, and obtained additional information about the 2006 data from Danya. Based on our previous and updated assessments, we find the 2003 and 2006 data sufficient for the purpose of this engagement. We analyzed the on-site review data for 2003 and 2006 to obtain, for each year, the numbers of (1) all grantees reviewed, (2) grantees receiving triennial or first-year reviews only, and (3) grantees cited for deficiency or noncompliance in one, two, or all three of the core areas of interest (program governance, record-keeping and reporting, and fiscal management). To obtain the number of grantees with repeat noncompliance, we computed the number of grantees cited for noncompliance or deficiency in both 2003 and 2006, in each of the same core areas. The computer code used to derive these numbers was subjected to independent review within GAO and was found to be accurate. We reviewed ACF studies from 1995 and 2007 on the validity and accuracy of PIR data, the results of which are discussed in this report. We also conducted consistency tests on PIR data from the 2006 PIR database, similar to the tests that we conducted on the 2003 PIR database for our 2005 report on Head Start oversight. Overall, we conducted 29 tests across all three sections of the PIR database: Enrollment and Program Operations, Program Staff and Qualifications, and Child and Family Services. In 9 of our 29 tests, the PIR data contained entries that did not sum to the expected totals. In each of the nine tests that failed, less than 1 percent of the 2,696 data items failed. Our findings indicate that the PIR database contains some inconsistent data. Our work was conducted from February 2007 through December 2007 in accordance with generally accepted government auditing standards. 
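The consistency tests described above check whether component counts sum to a reported total. A minimal sketch of that kind of test follows; the field names and records are hypothetical, not actual PIR variables, and the real tests spanned 29 combinations of fields across three sections of the database.

```python
# Hypothetical PIR-style records: component counts that should sum to a
# reported total (field names are illustrative, not actual PIR variables).
records = [
    {"grantee": "A", "total_enrolled": 120, "enrolled_3yr": 50, "enrolled_4yr": 70},
    {"grantee": "B", "total_enrolled": 200, "enrolled_3yr": 90, "enrolled_4yr": 105},  # inconsistent
    {"grantee": "C", "total_enrolled": 88,  "enrolled_3yr": 40, "enrolled_4yr": 48},
]

def consistency_failures(rows, total_field, part_fields):
    """Return the rows whose component fields do not sum to the reported total."""
    return [r for r in rows if sum(r[f] for f in part_fields) != r[total_field]]

failed = consistency_failures(records, "total_enrolled", ["enrolled_3yr", "enrolled_4yr"])
print(f"{len(failed)} of {len(records)} records inconsistent")
```

Running one such check per total/component combination, and counting how many checks turn up any mismatched rows, yields the kind of "9 of 29 tests failed" summary reported above.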
The following individuals made important contributions to this report: Bill J. Keller, Regina Santucci, Christopher W. Backley, Susannah Compton, James Ashley, Cindy Gilbert, Alison Martin, George Quinn, Jerry Sandau, Elizabeth Wood, Kimberly Brooks, Jacqueline Nowicki, Doreen Feldman, Alexander Galuten, and James Rebbe. Improper Payments: Incomplete Reporting under the Improper Payments Information Act Masks the Extent of the Problem. GAO-07-254T. Washington, D.C.: December 5, 2006. Grants Management: Enhancing Performance Accountability Provisions Could Lead to Better Results. GAO-06-1046. Washington, D.C.: September 29, 2006. Head Start: Progress and Challenges in Implementing Transportation Regulations. GAO-06-767R. Washington, D.C.: July 27, 2006. Head Start: Comprehensive Approach to Identifying and Addressing Risks Could Help Prevent Grantee Financial Management Weaknesses. GAO-05-176. Washington, D.C.: February 28, 2005. Head Start: Better Data and Processes Needed to Monitor Underenrollment. GAO-04-17. Washington, D.C.: December 4, 2003. Head Start: Increased Percentage of Teachers Nationwide Have Required Degrees, but Better Information on Classroom Teachers’ Qualifications Needed. GAO-04-5. Washington, D.C.: October 1, 2003. Managing for Results: Efforts to Strengthen the Link between Resources and Results at the Administration for Children and Families. GAO-03-9. Washington, D.C.: December 10, 2002. Strategies to Manage Improper Payments: Learning from Public and Private Sector Organizations. GAO-02-69G. Washington, D.C.: October 2001. Standards for Internal Control in the Federal Government. GAO/AIMD-00-21.3.1. Washington, D.C.: November 1999.
In February 2005, GAO issued a report that raised concerns about the effectiveness of the Department of Health and Human Services (HHS) Administration for Children and Families' (ACF) oversight of about 1,600 local organizations that receive nearly $7 billion in Head Start grants. GAO was asked to report on (1) ACF's progress in conducting a risk assessment of the Head Start program and ensuring the accuracy and reliability of data from its annual Program Information Report (PIR) survey of grantees, (2) efforts to improve on-site monitoring of grantees, and (3) how data are used to improve oversight and help grantees meet program standards. For this report, GAO surveyed a nationally representative sample of Head Start program directors and interviewed ACF officials. GAO also reviewed ACF studies on the validity of PIR data and conducted tests of data from the 2006 PIR database. ACF has not undertaken a comprehensive assessment of risks that may limit Head Start's ability to meet federal program objectives, despite GAO's 2005 recommendation, and little progress has been made to ensure that the data from its annual PIR survey of grantees, which could facilitate such an assessment, are reliable. To conduct a comprehensive risk assessment, ACF needs to identify external and internal risks, estimate their significance, and decide how to best manage them. While ACF says it is working to establish two systems to address programwide risk, GAO's analysis suggests that these systems fall short of that goal. The first system, by which ACF assesses grantees before providing new funds each year, only assesses risk posed to the program by poorly performing grantees and does not allow for a broader assessment of other sources of risk, such as improper payments to contractors. 
The second system, a new, integrated management information system, has been in development for over 4 years and it is not clear how it will facilitate a more comprehensive risk assessment for the Head Start program. Both initiatives depend, in part, on data from the annual PIR survey of grantees, which have been found to be unreliable. ACF has taken steps to improve oversight of Head Start grantees by implementing a more rigorous process for certifying reviewers who conduct on-site monitoring visits, implementing new processes to improve the consistency of reviews, and working to establish a system for evaluating reviews on an ongoing basis. Now, ACF verifies reviewers' qualifications and requires them to pass online tests in writing and computer literacy. Reviewers must also complete ongoing training and are evaluated by their peers at the end of each review. ACF has also taken a number of steps to improve the consistency and objectivity of reviews, including developing a Web-based data collection tool to facilitate information gathering, assigning review team leaders from outside the grantee's home region to increase independence, and centralizing the review and preparation of monitoring reports. ACF is also working to establish an ongoing system for evaluating its on-site review process. ACF uses data to track grantee performance and target assistance to underperforming grantees, but weaknesses may have hindered these efforts to improve grantee performance. For example, ACF does not have clear criteria for determining which grantees need additional oversight as part of its refunding analysis. Such decisions are made on an ad-hoc basis, which may result in grantees with similar problems receiving different levels of oversight. Moreover, prior to the December 2007 reauthorization of Head Start, ACF was limited in its ability to increase competition for grants to replace underperforming grantees. 
Under the new law, ACF will have more flexibility to open competition for Head Start grants to other prospective grantees when current grantees fail to deliver high-quality, comprehensive Head Start programs.
In general, individuals seeking compensation for a vaccine-related injury or death must first file a petition making a claim for compensation under VICP before suing in civil court. Several federal agencies—HHS, DOJ, and USCFC—are involved in administering VICP, and Treasury manages the trust fund that funds compensation for successful claims. VICP includes a vaccine injury table that lists the vaccines covered by the program and the injuries associated with each of those vaccines. (See app. I for the table.) Vaccines are added to the list covered by the program after the Centers for Disease Control and Prevention recommends them for routine administration to children and they are made subject to an excise tax that funds the Vaccine Injury Compensation Trust Fund. When individuals submit a claim for an injury listed on the table (called an on-table injury), they do not need to prove that the injury was caused by the vaccine. Instead, if they submit documentation showing that they received a particular vaccine and that they sustained the associated covered injury within the time interval specified on the table, they may receive compensation based on a presumption of causation (unless there is evidence that the injury is due to other factors). Individuals seeking compensation may submit claims for injuries not listed on the table (called off-table injuries), but they need to demonstrate by the preponderance of the evidence that the vaccine caused the alleged injury. HHS has authority to promulgate rules to modify the vaccine injury table when certain criteria are met. HHS is also required to amend the table to include a vaccine within 2 years of the Centers for Disease Control and Prevention's recommending it for routine administration to children. The Advisory Commission on Childhood Vaccines, which was established by the act creating the program, is required to make recommendations concerning changes to the table.
In 1999, GAO reported that HHS added seven injuries and removed three others from the table in 1995 and 1997, respectively, using findings from Institute of Medicine reviews conducted in 1991 and 1994—in conjunction with public policy considerations provided by the Advisory Commission on Childhood Vaccines, scientific issues raised by HHS's National Vaccine Advisory Committee, and input from the public. Individuals who believe that they or their child have been injured by, or that a death resulted from, a vaccine covered by the program may file a petition making a claim with the USCFC. In general, to be eligible for compensation, a petition must be filed (1) for a vaccine-related injury, within 3 years of the first symptom of the injury (or significant aggravation of an injury), or (2) for a death, within 2 years of the death and within 4 years after the first symptom of the vaccine-related injury (or significant aggravation of an injury) from which the death resulted. HHS, as the respondent in the process, receives a copy of the petition, including medical records and other documentation filed with the USCFC. HHS's Health Resources and Services Administration (HRSA) sends a report of its medical review, including HHS's recommendation regarding the claim, to DOJ. DOJ lawyers, representing HHS in the proceedings, review the HRSA report and the legal aspects of the claim and produce a report that outlines the government's position as to why compensation should or should not be awarded, provides a summary and medical analysis of the petitioner's claims, and asserts applicable legal arguments. After the claim is filed in USCFC, it is assigned to a special master, a judicial officer who examines the evidence and adjudicates the claim. The special master reviews the petition and may order the petitioner to provide additional records if they are missing or insufficient. Additionally, petitioners and DOJ may file expert reports including additional medical evidence from scientific literature or studies.
The special master then determines whether a claim should be compensated. For claims that are compensated, there are three adjudication categories: Concession. In a concession, HHS's review of medical records, scientific literature, and other documents finds that the petitioner is entitled to compensation, either because the evidence meets the criteria of the vaccine injury table or because it is more likely than not that the vaccine caused the injury. Negotiated settlement. In a negotiated settlement, the petition is resolved via negotiation between HHS (represented by DOJ) and the petitioner. Contested decision in favor of the petitioner. If HHS does not concede that a petition should be compensated or if both parties do not agree to settle, the special master issues a decision after weighing the evidence presented by both sides, which may involve conducting a hearing. If the petitioner is entitled to compensation as a result of a concession or a contested decision in favor of the petitioner, the proceeding then moves to the damages phase, in which the amount of compensation is determined. In a negotiated settlement, the amount of compensation is included in the settlement presented to the special master. VICP may also pay attorneys' fees and costs deemed reasonable, even for unsuccessful petitioners, and the amounts of these fees and costs may be part of a settlement between the parties or determined by the special masters. After the claim has been adjudicated within VICP, the petitioner may choose to file a suit in civil court; even a petitioner found to be entitled to compensation may elect to reject the compensation awarded and file such a suit. The Vaccine Injury Compensation Trust Fund, managed by Treasury, is funded by an excise tax imposed on each dose of vaccine sold in the United States that is routinely recommended for administration to children.
Appropriations from the trust fund to HRSA pay compensation awarded under VICP for vaccine-related injury or death to the petitioner and may also pay for petitioner attorneys' fees and costs. Appropriations from the trust fund to HRSA, DOJ, and USCFC (for the Office of Special Masters) also pay for administrative and other expenses associated with processing VICP claims. USCFC has managed large influxes of similar vaccine injury claims through omnibus proceedings or groupings of claims. According to the Office of Special Masters, many of the claims alleging that a particular vaccine caused the same injury will rely on similar evidence, so by using omnibus proceedings or groupings to examine evidence for similar claims, the courts can more efficiently review the evidence. In omnibus proceedings, petitioners select and develop a lead claim in each category of injury, while the other petitioners choosing to participate in the omnibus elect for their claims to be stayed, or put on hold, until the lead claim reaches a final disposition. HHS is required to include a statement of the availability of VICP in the vaccine information materials that health care providers are to distribute to the parent or legal representatives of a child or to any other individual to whom the provider intends to administer a covered vaccine. These materials—referred to as vaccine information statements by HHS—are intended to explain both the benefits and risks of a vaccine covered by VICP. HHS is also required to undertake reasonable efforts to inform the public of the availability of the program. Most of the VICP claims filed since fiscal year 1999 have taken multiple years to adjudicate, but those filed since fiscal year 2009 have taken less time. For many claims, the parties have concluded the proceeding through a negotiated settlement, rather than a contested decision adjudicated by a special master or the courts.
Additionally, certain claims were addressed along with similar claims as part of an omnibus proceeding or informal grouping. VICP claims filed since fiscal year 1999 took an average of about 5 and a half years to adjudicate, according to USCFC data for the nearly 8,800 claims filed since fiscal year 1999 that were adjudicated as of March 31, 2014. There was wide variation in the amount of time to adjudicate these claims. The claim that took the shortest time to adjudicate was filed in fiscal year 1999 and took 2 days, and the claim that took the longest time to adjudicate was filed in fiscal year 1999 and took 5,276 days (more than 14 years). More than 1,000 (11 percent) of the claims filed since fiscal year 1999 were still in process (pending) as of March 31, 2014; most of these had been pending for 2 years or less (see fig. 1). For claims filed since fiscal year 2009, a greater percentage of claims were resolved within 1 or 2 years. One possible reason is that the vast majority of claims alleging autism as the injury were filed prior to fiscal year 2009. Autism claims may have taken longer because they were part of an omnibus proceeding, which suspended activity on most autism claims for a period of time. According to USCFC data, for the more than 1,400 claims filed since fiscal year 2009 that were adjudicated as of March 31, 2014, the average amount of time to adjudicate a claim was 587 days (about 1.6 years). More than 900 (40 percent) of the claims filed since fiscal year 2009 were still pending, which could cause this average to increase over time as these pending claims are resolved (see fig. 2). Of the pending claims, nearly half had been pending for 1 year or less as of March 31, 2014. HHS has reported that the program has met its annual target of 1,300 days (about 3.5 years) for the average time to adjudicate non-autism claims in all but 1 year since fiscal year 2009.
This target (1,300 days) has been in place since fiscal year 2009, and applies to claims concluded in a given fiscal year, regardless of the year they were filed, excluding claims that alleged autism as the vaccine-related injury. HRSA has reported meeting this goal since fiscal year 2009, except in fiscal year 2012, when VICP claims concluded that year took an average of 1,309 days to complete. Organizations representing petitioners have criticized the program for taking a long time to resolve claims. Petitioners report that long processing times delay receiving compensation to pay medical bills and other expenses related to the alleged injury. An HRSA-contracted survey of petitioners whose claims were adjudicated (regardless of whether or not they received compensation under VICP) found that nearly two-thirds of the 103 respondents indicated they were somewhat or very dissatisfied with the length of the process. Petitioners may withdraw their claims from VICP if a decision is not made on their claim within specific time periods, but according to the Office of Special Masters, petitioners rarely exercise this option. Altarum Institute, Determining the Feasibility of Evaluating the National Vaccine Injury Compensation Program, Final Report, a report prepared for the Health Resources and Services Administration, June 15, 2009. Of 716 petitioners the researchers identified as meeting their inclusion criteria, 107 responded to their survey and 103 responded to this question on the length of the process. The results of this study cannot be generalized to the population of all petitioners who completed the VICP process; instead, they reflect only the VICP petitioners who responded to the survey. According to HRSA, petitioners were included in the sample if they (1) had filed a claim that had been compensated or dismissed in fiscal years 2004-2008 and (2) were represented by an attorney.
Agency officials told us that the time petitioners spend gathering supporting documentation or evidence can add significantly to the amount of time required to process a claim. These delays may occur at multiple points in the claims process, from petitioners needing to gather sufficient documentation for the court to begin an initial review, to the court needing documentation to determine the amount of compensation that a successful petitioner will receive. According to HRSA, for claims adjudicated as of March 31, 2014, its medical review process averaged over 700 days for claims filed in fiscal year 2010. HRSA attributes the length of time for medical review primarily to time spent waiting for petitioners to submit requested documentation. During the medical review, HRSA may also consult with external experts, who require additional time to review the details of the case; HRSA's data indicate that over 1,200 outside reviews were conducted from fiscal years 2009 to 2014. Additionally, when special masters are reviewing the claim, a party may request that the special master delay a decision until additional documentation is available. Special masters may also request additional information from petitioners—such as a specialist physician's opinion. According to HRSA data for claims filed since 2006, most compensated claims were adjudicated through negotiated settlement rather than a concession or a contested decision. HRSA's data indicated that about 80 percent of the more than 1,500 non-autism VICP claims filed since 2006 for which compensation was awarded were adjudicated through a negotiated settlement between the parties, compared to about 10 percent involving a contested decision in favor of the petitioner and about 10 percent conceded by HHS. According to HRSA, claims that HHS does not concede may be resolved via a negotiated settlement for several reasons, including a desire by both parties to resolve a case quickly and efficiently.
According to the Office of Special Masters, a special master may recommend that parties settle as an expeditious and efficient method of resolving certain claims. The Office of Special Masters created an omnibus proceeding in order to address thousands of autism cases systematically and efficiently. Beginning in 1999, parents filed petitions for compensation under VICP alleging that autism or neurodevelopmental disorders similar to autism were caused by the measles-mumps-rubella vaccine, by vaccines covered by the program that contained thimerosal (a mercury-containing preservative used in some vaccines), or by both. In 2002, the Office of Special Masters held a series of meetings with an informal advisory committee, including attorneys who represented many potential petitioners and legal and medical representatives of HHS, to address the task of dealing with these claims. The Office of Special Masters decided to utilize a two-step procedure: first, looking into whether the vaccinations in question can cause autism and, if so, the circumstances under which this occurs, and second, applying the conclusions from the first step to the individual claims. The omnibus autism proceeding included test cases for two different theories by which vaccines were alleged to cause autism. Some petitioners withdrew from the omnibus proceeding and elected to proceed within the vaccine program on other theories of causation. Some petitioners withdrew from the program entirely, as was their statutory right, which enabled them to pursue claims against vaccine manufacturers in civil court. The influx of new VICP claims for autism continued in fiscal years 2002-2005, while the number of new non-autism claims remained relatively stable (see fig. 3). HRSA data show that during this period, nearly 90 percent of VICP claims filed alleged autism as the vaccine-related injury.
Ultimately, the special masters did not award compensation in any of the test cases, and most remaining omnibus autism proceeding claims were dismissed. However, according to the Office of Special Masters, some petitioners who had been part of the omnibus autism proceeding continued with claims separate from the omnibus proceeding. HHS has added vaccines to the vaccine injury table without adding covered injuries associated with those vaccines. Following their addition to the table, more claims were filed for off-table injuries. Since fiscal year 1999, HHS has added six vaccines to the vaccine injury table (but has not added covered injuries associated with these vaccines to the table). This means that while individuals may file VICP claims for those vaccines, each petitioner must demonstrate that the vaccine that was administered caused the alleged injury. In general, each of the six vaccines was added within 2 years of the Centers for Disease Control and Prevention's recommending it for routine administration to children and having an excise tax imposed. Since 1999, two vaccines, both of which had covered injuries associated with them, were removed from the vaccine injury table. See appendix II for the vaccines and injuries added to and removed from the vaccine injury table since 1999. At the end of fiscal year 2014, 16 vaccines were covered by the program, 8 of which did not have associated covered injuries on the table. HHS has been considering adding injuries to the table in association with the eight vaccines that are listed without covered injuries. HRSA officials said they are working on a final rule to add an injury associated with one vaccine and a proposed rule that would add injuries associated with several other vaccines. HHS has also been considering adding injuries in association with covered vaccines that already have associated injuries on the vaccine injury table.
For example, HHS is considering adding an injury to the table in association with the rotavirus vaccine. In proposing the addition, HRSA considered reviews of injury claims attributed to another vaccine that protected against rotavirus but had been removed from the table in fiscal year 2009. According to HRSA, the agency is working on a final rule to add this injury. HRSA officials also said they are developing a proposed rule to add covered injuries for all seven of the remaining vaccines on the table without covered injuries. For example, HRSA is considering proposing to add Guillain-Barré Syndrome (a disorder in which the body's immune system attacks part of the nervous system and is characterized by muscle weakness and paralysis) as an injury associated with the influenza vaccine. The agency is also considering proposing to add other injuries associated with the influenza, hemophilus influenza type b conjugate, varicella, pneumococcal conjugate, hepatitis A, meningococcal, and human papillomavirus vaccines. According to HRSA officials, the factors informing the table changes that they are considering proposing include (1) an Institute of Medicine study of certain vaccines; (2) HHS's independent review of vaccine causation and medical and scientific evidence; (3) the need to clarify injury definitions on the table; and (4) recent studies related to injuries associated with the influenza vaccine. As of September 30, 2014, HRSA had not promulgated regulations to make these changes to the vaccine injury table. According to HRSA officials, the agency plans to publish the final rule to add the injury associated with the rotavirus vaccine to the table by July 2015, and to publish a proposed rule for the other injuries it is considering adding to the table by August 2015. According to HRSA officials, the process to publish such a proposed rule can take about 9 months to 1.5 years. The officials also said that the process for publishing such a final rule can take about 1.5 years to 2.5 years after the proposed rule is published.
In its justification accompanying its fiscal year 2013 budget request, HRSA acknowledged that many stakeholders, including Congress, have voiced interest and concern over keeping the injury table in line with current science. At that time, HRSA reported that other VICP activities, including medical reviews and court deadlines, had taken priority over updating the table. While the injuries HRSA is considering have not yet been added to the table, HRSA and DOJ officials report that many claims alleging these injuries have been conceded or settled. For example, according to DOJ officials, there have been numerous settlements for cases alleging Guillain-Barré Syndrome as an injury associated with the influenza vaccine. The addition of the six vaccines to the vaccine injury table without associated injuries has contributed to an increase in off-table claims. When we reported on this program in 1999, 2 of the 12 vaccines on the injury table were without associated injuries listed on the table. We reported that about one-quarter (28 percent) of claims filed as of February 1999 were for off-table injuries. In contrast, of the 3,007 claims filed since fiscal year 2005 (the year that trivalent influenza vaccine was added to the table) for which the covered vaccine associated with the alleged injury was specified, at least 59 percent were associated with one of five vaccines added to the table without associated table injuries, according to HRSA data. To receive compensation, the petitioners in these claims would not have the presumption of causation associated with an on-table claim and would instead have to prove causation. Overall, since 2009, more than 98 percent of the new claims filed alleged off-table injuries that required the petitioners to prove that their injury was caused by the vaccine they received, according to the Office of Special Masters.
Claims alleging injuries to adults also increased as a result of the addition to the vaccine injury table of vaccines that are recommended for administration to adults (as well as children). Several of the vaccines added to the injury table—in particular, the vaccine to prevent influenza—are recommended for routine administration to adults as well as children. As a result, although the vaccines were added to the table because they were recommended for children, adults who are vaccinated with them are also eligible for compensation under VICP. More than half (51 percent) of the 4,402 VICP claims filed since fiscal year 1999 (for which the covered vaccine associated with the alleged injury was specified) were for injuries to adults, and 1,287 (29 percent) were for adults alleging injuries in association with the influenza vaccine, according to HRSA data. The Vaccine Injury Compensation Trust Fund balance increased to more than $3 billion in fiscal year 2013 despite increased spending by HRSA, DOJ, and USCFC on petitioner compensation, attorneys' fees and costs, and other VICP-related expenses. The balance in the trust fund increased from $2.9 billion at the end of fiscal year 2009 to nearly $3.3 billion at the end of fiscal year 2013. The balance increased because the trust fund's income outpaced its disbursements to HRSA, DOJ, and USCFC, although disbursements also increased during this period (see fig. 4). Treasury reported over $200 million in net revenues from the vaccine excise tax in each of fiscal years 2009-2013. As required by applicable law pertaining to the management of trust funds, Treasury oversees the investment of part of the net revenue from vaccine excise taxes. Annual interest from these investments ranged from about $49 million (in fiscal year 2012) to about $126 million (in fiscal year 2011). Total compensation to petitioners and the number of claims compensated have both increased since fiscal year 1999.
Petitioners’ compensation paid by HRSA using appropriations from the trust fund increased to over $254 million in fiscal year 2013. With the exception of fiscal year 2000, the total amount spent on compensation awarded to petitioners remained under $125 million between fiscal years 1999 and 2009. The total amount spent on compensation to petitioners increased to nearly $180 million in fiscal year 2010 and to more than $250 million in fiscal year 2013 (see fig. 5 and app. III). According to the Office of Special Masters, the increase in the total amount paid to petitioners in compensation and in the number of compensated claims is related to the addition of the influenza vaccine to the vaccine injury table. The influenza vaccine, which is administered to millions of people each year, was added to the injury table in fiscal year 2005. The annual amount VICP paid for attorneys' fees and costs remained relatively steady from fiscal year 1999 to fiscal year 2007, but started to increase in fiscal year 2008, consistent with an increase in the total number of payments to attorneys (see fig. 6 and app. III). To help ensure access to the program, VICP may pay reasonable attorneys' fees and costs upon a determination that the petition was brought in good faith and there was a reasonable basis for the claim for which the petition was brought, regardless of whether the petitioner's claim is compensated or dismissed. The majority of VICP payments for attorneys' fees and costs have been for compensated or dismissed claims; however, since fiscal year 2008, VICP has also paid some interim attorneys' fees and costs for selected ongoing claims at the special master's discretion. Compensation for attorneys' fees and costs must be reasonable such that it generally reflects the actual time and expense devoted to the case.
The total amount obligated by HRSA, DOJ, and USCFC (for the Office of Special Masters) to pay for staff and other expenses related to processing VICP claims increased from $17 million in fiscal year 2009 to about $19 million in fiscal year 2013. The three agencies obligated a total of about $91 million for expenses associated with processing VICP claims during those 5 fiscal years. About two-thirds of VICP-related expenses ($61 million) was obligated to pay the salaries and benefits of full-time equivalent (FTE) agency staff, and about one-third ($30 million) was obligated for other VICP-related expenses (see table 1). The amounts obligated for FTE salaries and benefits were for staff of HRSA, DOJ, and USCFC's Office of Special Masters who process the claims and represent the government's interest in legal proceedings. For example, USCFC's Office of Special Masters paid the salaries and benefits for special masters and clerks. The average number of FTE staff supported across the three agencies was 81 per fiscal year. Each agency also had other VICP-related spending reimbursed by the trust fund. For example, HRSA reported obligating about $9.6 million for medical experts to review petitioner claims and provide expert testimony during adjudication proceedings for fiscal years 2009 through 2013. HRSA also obligated funds to support the Advisory Commission on Childhood Vaccines, including compensation and travel expenses for commission members. The agencies reported obligating funds for costs for travel, processing documents, maintaining records, rent, supplies, and equipment. For example, obligations include rental payments to the General Services Administration by DOJ and the cost of court reporters funded by USCFC's Office of Special Masters. Information on petitioners' experience with VICP is limited. HRSA has taken some steps to undertake outreach activities, but the agency has not yet assessed the effect of these efforts.
Other than a study for HRSA on petitioners' satisfaction with VICP, the agency officials and stakeholders we interviewed and the documents we reviewed did not identify any data or studies regarding the experience of individuals who have filed VICP claims. The study prepared for HRSA, dated 2009, reported the responses from 107 petitioners whose claims were compensated or dismissed in fiscal years 2004-2008. Because this was a voluntary survey with a low response rate, its results cannot be generalized to all petitioners who completed the VICP process; instead, it reflects only the experience of the VICP petitioners who responded to the survey. We also obtained comments from stakeholders, including officials from organizations representing providers, petitioners' attorneys, and parents. These stakeholder comments, while providing insight into petitioner experiences, are anecdotal and do not represent the experience of all petitioners who have filed VICP claims. Members of the Advisory Commission on Childhood Vaccines we interviewed expressed interest in obtaining additional information on petitioners' experience with VICP; however, they told us they had not done so, citing concerns about confidentiality and other issues. The limited comments on petitioners' experience from those who responded to the survey prepared for HRSA and from stakeholders included the following: Some petitioners responding to the HRSA survey reported being dissatisfied with the claims process, and some commented that the process places too great a burden on petitioners and family members, particularly through requests for additional information after the claims were filed. Similarly, one stakeholder said that petitioners view the vaccine injury claims process as confusing, time-consuming, too lengthy, and traumatic. Another stakeholder, on the other hand, commented that while vaccine-related injuries do not happen often, the program handles them efficiently and fairly when they do happen.
Other comments from petitioners responding to the HRSA survey and stakeholders were related to the payment process and amount of compensation. More than half of the 61 petitioners who responded to a question on the method of payment in the HRSA survey reported being somewhat or very satisfied with the method of award payment; however, 14 respondents suggested more timely and flexible payment mechanisms. About half of the 63 respondents to the question on the amount of compensation reported the award amount was inadequate to cover past and future medical care. Similarly, stakeholders we interviewed reported concerns with the amounts petitioners receive, the method of payment, and a perceived lack of transparency regarding how the money from the Vaccine Injury Compensation Trust Fund has been spent. For example, one stakeholder told us that petitioners felt forced to settle for less than what it will cost them to care for their children or themselves for their lifetimes, and another stakeholder raised concerns about the choice of annuities for petitioners. Other comments on the program from stakeholders were related to the perception of an adversarial or unfriendly environment throughout the process, the use of settlements and perceived pressure to settle claims, the use of omnibus proceedings to group claims together, and concerns about confidentiality of the medical information filed by the petitioner. Some petitioners responding to the HRSA survey and some stakeholders also reported difficulties finding an attorney to represent petitioners in the process; however, petitioners responding to the survey were split on this issue—about the same number reported that finding an attorney was difficult as reported that finding an attorney was easy. 
HRSA has acknowledged being criticized for years for not adequately promoting public awareness of VICP, and has recently taken some steps, such as developing and starting to implement an outreach plan for fiscal year 2014 and developing an outreach plan for fiscal year 2015, to improve its efforts to reach out to providers and the public. In its 2006 VICP strategic plan, HRSA noted that one of the critical issues facing the program from 2005 to 2010 was that many parents, the general public, attorneys, and health care professionals were not aware VICP existed. In 2009, HRSA contracted for the development of a comprehensive national marketing and outreach communication plan; the contractor presented the plan to HRSA in November 2010. According to HRSA, the agency used this plan to guide outreach efforts. Prior to the fiscal year 2014 plan, HRSA also reported exhibiting at professional conferences; updating the VICP website and the VICP booklet that is available from the website (including translating the booklet into Spanish); facilitating the review of vaccine information statements (which include a statement on VICP) by the Advisory Commission on Childhood Vaccines; and responding to media inquiries and inquiries received via e-mail, letters, and the program's toll-free number. HRSA officials also noted the need to carefully balance messages that increase awareness of VICP with public health messages that encourage and promote immunizations. HRSA shared an overview of its outreach plans with its Advisory Commission on Childhood Vaccines in September 2014. HRSA reported that many of the activities in the agency's 2014 outreach plan were in process at the end of the fiscal year.
These activities included reviewing the VICP booklets to use plain language and make them more user-friendly, reviewing and upgrading the VICP website to improve navigation, developing VICP message points for target audiences and slides about the program to be used in speeches and other presentations by HRSA staff, and requesting federal websites to provide information on the program and to link to the VICP website. In its outreach plan for fiscal year 2015, the agency is targeting health care providers, parents and expectant parents, adults aged 50 years and older (including Spanish-speaking older adults), and civil litigation and health attorneys, with the goal of informing target audiences of the availability of the program. According to HRSA, these target audiences were selected because they include individuals who administer vaccines and individuals (or their caregivers) who receive vaccinations. HRSA has identified a number of measures to assist in tracking performance of its fiscal year 2015 plan, including website metrics, the number of “retweets” and “shares” from social media initiatives, the number of media inquiries, and the number of attendees or participants at outreach events. Because the agency has not completed many of its planned efforts to improve how it informs the public of the availability of the program, it is too early to determine the effect of HRSA’s current and planned outreach efforts. Without awareness of the program, individuals who might otherwise receive compensation for a vaccine-related injury or death could be denied compensation because of a failure to file their claim within the statutory deadlines. One stakeholder commented that the public is largely unaware of the program, and this lack of awareness contributes to missing filing deadlines and individuals being denied the opportunity for compensation.
Members of the Advisory Commission on Childhood Vaccines also told us that many individuals may not know there is a statute of limitations on filing a claim and many miss the opportunity to file a claim because of the statute of limitations. In December 2013, the commission recommended extending the statute of limitations for vaccine-related injuries and deaths. Extending the statute of limitations would require amending the applicable statutory provision. It has been more than 25 years since VICP went into effect in 1988. In that time, the program has awarded more than $2.8 billion to thousands of petitioners. Several aspects of the program have changed over the years. First, while claims alleging injuries on the vaccine injury table made up the majority of claims filed in the first decade of the program, today—after the addition of new vaccines, particularly influenza vaccine, to the table without associated injuries—the majority of the claims filed involve off-table injuries. Stakeholders report that the program has an adversarial environment, as petitioners are required to demonstrate a covered vaccine caused the injury on their VICP claim when there are no associated injuries on the table. The extent to which this will change if HHS updates the vaccine injury table to include more injuries, as expected, is yet to be seen. Second, most of the compensated cases are now adjudicated through negotiated settlement, rather than contested decisions before the special master. And while the vaccines covered by the program are included because they are recommended for children, many of the program’s petitioners in recent years are adults who received covered vaccines. Addressing one criticism of the program by stakeholders—specifically the need to increase the statute of limitations—would require a statutory change in the program.
Regardless of whether the statute of limitations is increased, HHS’s efforts to increase awareness of the availability of the program will be important to help ensure that potential petitioners are aware of the program and can file claims in time. While HHS has recently taken or planned steps to improve its outreach activities, what effect, if any, these efforts will have remains to be seen. As the agency moves forward, it will be important for HRSA to identify which activities are reaching its target audiences. We provided a draft of this report to HHS, DOJ, USCFC, and Treasury. HHS and USCFC agreed with our findings and provided written comments, which are reprinted in appendixes IV and V, respectively. In its comments, HHS emphasized that it administers VICP jointly with DOJ and USCFC, with HHS responsible for reviewing petitioners’ claims, providing recommendations for entitlement to compensation, and making payments to petitioners and attorneys. In commenting on our identification of its efforts on VICP outreach, HHS noted that it is strengthening its outreach efforts by implementing its fiscal year 2015 VICP outreach plan, which increases outreach to target populations and includes performance measures. In its written comments, USCFC noted that the Administrative Office of the United States Courts and USCFC both supply administrative support to the Office of Special Masters and VICP without reimbursement from the trust fund. The court also commented that, while considerable strides have been made in reducing the average processing time for claims in recent years, the climbing and changing nature of the caseload, coupled with the statutory cap on the number of special masters, present a continuing challenge to the Office of Special Masters’ ability to continue to reduce average processing times. USCFC also commented that following omnibus proceedings, claims have been resolved more expeditiously in recent years, and often through settlements. 
USCFC also commented that the Office of Special Masters strives to resolve all cases fairly and expeditiously. HHS, USCFC, DOJ, and Treasury also provided technical comments that were incorporated, as appropriate. We are sending copies of this report to the Secretary of Health and Human Services, the Attorney General of the United States, the Chief Judge of the United States Court of Federal Claims, and the Secretary of the Treasury. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. [Table: vaccines covered by VICP and associated injuries. Covered vaccines include vaccines against tetanus (e.g., DTaP, DTP, DT, Td, or TT); pertussis (e.g., DTP, DTaP, P, DTP-Hib); measles, mumps, and rubella in any combination (e.g., MMR, MR, M, R); measles (e.g., MMR, MR, M); rubella (e.g., MMR, MR, R); polio (live virus-containing (OPV) and inactivated virus-containing (IPV)); hemophilus influenzae type b (Hib conjugate vaccine); varicella; rotavirus; pneumococcal disease (pneumococcal conjugate vaccine); influenza (trivalent vaccine); meningococcal disease; and human papillomavirus (HPV). Associated injuries listed in the table include anaphylaxis or anaphylactic shock and encephalopathy (or encephalitis).] This appendix shows the total amount paid in compensation to petitioners under the National Vaccine Injury Compensation Program and the number of compensated claims in fiscal years 1999-2013 (see table 2).
It also shows the amounts the program paid in attorneys’ fees and costs for those same fiscal years (see table 3). In addition to the contact named above, Kim Yamane, Assistant Director; George Bogart; Carolyn Garvey; Cathleen Hamann; Katherine Perry; Fatima Sharif; and Eric Wedum made key contributions to this report.
Vaccines save lives by preventing disease in the people who receive them. In some instances, however, a vaccine can have severe side effects, including death or an injury requiring lifetime medical care. VICP compensates people for medical and other costs stemming from injuries and deaths associated with certain vaccines. The program includes an injury table that lists the injuries that are presumed to be caused by vaccines covered by the program. The program may also compensate individuals for injuries not on the table; however, in those cases causation is not presumed. In both cases, medical and other records are required. VICP pays claims from a trust fund. Since the program began in 1988, it has awarded more than $2.8 billion in compensation. GAO was asked to review the program. GAO examined (1) how long it has taken to adjudicate claims and how claims have been adjudicated, (2) the changes to the vaccine injury table, and (3) how the balance of and spending from the Vaccine Injury Compensation Trust Fund have changed, among other objectives. GAO examined data, including data on claims filed since fiscal year 1999 and their status as of March 31, 2014, and interviewed officials from HHS, DOJ, and USCFC; reviewed laws and agency documents; and reviewed Treasury data and agency data on compensation and obligations for other VICP-related expenses for fiscal years 2009 through 2013. HHS and USCFC agreed with GAO's findings, and HHS, USCFC, DOJ, and Treasury provided technical comments that were incorporated as appropriate. Most of the more than 9,800 claims filed with the National Vaccine Injury Compensation Program (VICP) since fiscal year 1999 have taken multiple years to adjudicate (see fig.). More than 1,000 (11 percent) of the claims filed since fiscal year 1999 were still in process (pending) as of March 31, 2014; most of these were pending for 2 years or less. A greater percentage of the claims filed since fiscal year 2009 were resolved within 1 or 2 years.
In all but 1 year since fiscal year 2009, the program has met the target for the average time to adjudicate claims (about 3.5 years) tracked by the Department of Health and Human Services (HHS), which administers the program. Officials from the U.S. Court of Federal Claims (USCFC), where VICP claims are adjudicated, report that delays may occur while petitioners gather evidence for their claims. Since 2006, about 80 percent of compensated claims have been resolved through a negotiated settlement. Since fiscal year 1999, HHS has added six vaccines to the vaccine injury table, but it has not added covered injuries associated with these vaccines to the table. This means that while individuals may file VICP claims for these vaccines, each petitioner must demonstrate that the vaccine that was administered caused the alleged injury. HHS is considering adding covered injuries associated with these vaccines, but as of September 2014, it had not published any final rules to do so. The balance of the Vaccine Injury Compensation Trust Fund, managed by the Department of the Treasury (Treasury), increased from $2.9 billion in fiscal year 2009 to nearly $3.3 billion at the end of fiscal year 2013 as the trust fund's income (from net revenues from vaccine excise taxes and interest on investments) outpaced its disbursements to HHS, USCFC, and the Department of Justice (DOJ), which represents HHS in VICP proceedings. VICP compensation, funded by the trust fund, increased from less than $126 million in each of fiscal years 1999 to 2009 to over $254 million in fiscal year 2013.
In a major space policy address on January 14, 2004, President George W. Bush announced his “Vision for U.S. Space Exploration” (Vision) and directed NASA to retire the SSP after completing construction of the International Space Station in 2010 and focus its future human space exploration activities on a return to the Moon as a prelude to future human missions to Mars and beyond. As part of the Vision, NASA is developing new vehicles under the Constellation program, with an initial operational capability currently scheduled for 2015. In order to accelerate development and minimize development costs, NASA elected to pursue shuttle-derived options and the use of heritage systems for the new systems. The agency also found that shuttle-derived options are more affordable, safe, and reliable and provide the use, in some instances, of existing personnel, infrastructure, manufacturing and processing facilities, transportation elements, and heritage system hardware. NASA will transfer whatever real (land, buildings, and facilities) and personal (e.g., system hardware, tools, plant equipment) property is practical given program requirements to Constellation and other programs to offset the need for new acquisitions. Transferring property provides NASA with distinct financial benefits. First, the receiving program avoids the cost of acquiring the needed property. Second, the SSP avoids the costs associated with disposing of the property. The SSP and its contractors are currently conducting transition property assessments (TPA) of property belonging to the SSP in two phases. In Phase 1 the agency is identifying existing SSP personal property and determining property category, item status, availability date, and whether the property should be transferred or declared excess. This process is ongoing and continues to identify SSP property. For example, as of May 2007 NASA had identified about 1 million line items of SSP personal property. 
By May 2008 this number had increased to about 1.2 million line items. During Phase 2 of the TPA process, the agency is developing detailed plans for disposing of excess SSP personal property including identifying hazardous materials, cataloging recoverable precious metals, providing historical artifact justification, and identifying final destinations. Based on discussions between the Space Shuttle and Constellation programs, it is estimated that about 40 to 47 percent of SSP personal property will be transferred to Constellation and other NASA programs and the remaining property will be declared excess. NASA has yet to include an official estimate of the total cost of the SSP transition and retirement in any of its budget requests. Although NASA has developed a series of cost estimates for it—ranging from the $4.4 billion reported by the NASA Inspector General in January 2007, to the approximately $1.8 billion estimate prepared by the agency to support its fiscal year 2009 budget request—and has been funding SSP transition and retirement activities out of its approved Shuttle budget, the agency has not included any of the estimates for fiscal years 2011 and beyond in its budget requests to the Congress. NASA transition managers maintain that NASA made a strategic decision not to release an estimate of transition and retirement costs for fiscal years 2011 and later until the scope of the effort and the associated costs and schedule were better defined. According to NASA transition managers, the agency has now accomplished this better definition and plans to include an estimate substantially lower than the earlier $1.8 billion estimate for the total cost of SSP transition and retirement in its fiscal year 2010 budget request. We have previously reported on NASA’s management of the billions of dollars of government equipment under its control. 
We found weaknesses in the design and operation of NASA’s systems, processes, and policies covering the control and accountability of equipment. We made recommendations that focused on ways to strengthen NASA’s internal control environment and improve its property management. NASA faces disparate challenges defining the scope and costs of SSP transition and retirement activities. The Constellation program is finalizing the requirements that will inform the SSP of what real and personal property needs to be retained and what should be declared excess. Furthermore, the property assessments needed to inform NASA’s budget planning process may not be completed in time to support the cost estimates the agency plans to include in its fiscal year 2010 budget request. In addition, NASA faces other challenges that further hamper the agency’s efforts to develop firm estimates of SSP transition and retirement scope and cost, including finalizing plans for safing artifacts. The Constellation program is still finalizing its requirements and defining needed capabilities; therefore, the agency does not know, in all instances, what SSP property needs to be retained and what property can be declared excess. According to SSP and Constellation transition managers, there is a symbiotic push and pull relationship between the SSP and the Constellation program. Ideally, in this relationship the SSP would push the property that it no longer needs for the safe operation of the space shuttle to the Constellation program in order to avoid property maintenance and/or disposal costs. Likewise, in order to avoid acquisition costs, the Constellation program would pull the SSP property that it has designated as needed. Thus far, however, this relationship has not worked as well as the agency has desired. 
Essentially, the Constellation program has not finalized its requirements for personal and real property from the SSP because it is still in the process of defining its own programmatic requirements. For example, at the time of our review, the Constellation program had yet to hold its Preliminary Design Review during which it will finalize its preliminary design and operations concepts, two key steps in determining the Constellation program’s hardware and processing facility needs. Further, according to the Exploration Systems Mission Directorate (ESMD) Transition Manager, the Constellation program is just now realizing the importance of “pulling” SSP assets to avoid costs, as any and all of the SSP transition and retirement costs post-2010 will be borne by NASA at the expense of the follow-on program(s) or Center Management and Operations budgets. NASA will not complete the last phase of the TPA process until after the agency’s budget request for fiscal year 2010 is submitted and may not complete all TPA activity until the end of fiscal year 2009. NASA plans to complete Phase 1 of the TPA process by the end of September 2008. However, delays in finalizing system designs within the Constellation program are hampering the efforts of the Solid Rocket Booster element of the SSP to complete Phase 1 on time. In effect, Phase 2 of the TPA process will develop the type of detailed information needed to support accurate SSP transition and retirement cost estimates. The TPA Phase 2 process, however, is not scheduled for completion until January 2009—well after the fiscal year 2010 budget request is formulated. Furthermore, the Space Shuttle Main Engine element does not anticipate completing TPA Phase 2 until August 2009, well after the budget request is submitted to Congress. NASA faces other challenges that further hamper the agency’s efforts to define the scope and cost of SSP transition and retirement.
In addition to the issues discussed above, NASA has not yet developed final plans and/or cost estimates for safing artifacts, including the orbiters Atlantis, Discovery, and Endeavour. Moreover, the SSP lacks a centralized information system to track and control all SSP property. A centralized system would be particularly useful as transition and retirement activities are expected to rapidly increase in 2010. These and other challenges are summarized in table 1 below. All of these challenges further hamper NASA’s efforts to develop firm estimates of SSP transition and retirement scope and cost. Lastly, at the time NASA will be experiencing an increase in transition activity, it will simultaneously be finalizing the designs for the Ares I and Orion vehicles and conducting potentially the most complicated sequence of shuttle flights ever attempted—completing the International Space Station and conducting a fifth servicing mission to the Hubble Space Telescope all by the end of 2010. These activities will likely create additional challenges for the transition efforts, as they may require more attention than anticipated from the workforce as well as more resources should unexpected problems occur. SSP transition and retirement costs are not transparent in NASA’s current budget request and are not expected to be fully reflected in its 2010 request. This is partly due to challenges and delays in finalizing cost estimates as described above as well as where costs are being reflected in the budget. Specifically, in laying out its plans for SSP transition and retirement, NASA elected to capture transition and retirement costs within its existing budget structure rather than display them separately. Consequently, the costs of the SSP transition and retirement are dispersed throughout NASA’s budget request. SSP’s direct transition and retirement costs are included in the SSP budget line. 
The Cross-Agency Support portion of NASA’s budget, however, includes funding for significant SSP transition and retirement activities that NASA considers indirect costs, including environmental compliance and remediation and demolition of excess facilities. These funds, however, are not identified as SSP transition and retirement costs and it is not easy to discern that they are transition related. Furthermore, NASA plans to offset some transition costs by utilizing an exchange/sale authority that allows federal agencies to exchange or sell non-excess, non-surplus personal property and apply the proceeds toward acquiring similar replacement property. The SSP budget request for fiscal year 2009 identifies a funding need of about $370 million for transition and retirement through fiscal year 2010 to pay for activities within each of the SSP’s three major projects—flight and ground operations, flight hardware, and program integration (see table 2). NASA will use these funds to cover the costs of the prime contractor, SSP personnel, and support contractors working on transition and retirement activities. These activities, however, do not represent the full scope of SSP transition and retirement. The Cross-Agency Support appropriation account within NASA’s budget includes or will include funding for significant SSP transition and retirement activities. This appropriation account—which is not aligned with a specific program or project—includes what are essentially NASA’s administrative or overhead costs for all of its centers and activities. Cross-Agency Support will include SSP transition and retirement funding within its Environmental Compliance and Restoration, Center Management and Operation, and Strategic Institutional Investments budget lines. These funds, however, are not identified as SSP transition and retirement costs. As such, it is difficult to discern the full costs of the transition and retirement effort.
Funding for SSP-related environmental clean-up is included under NASA’s Environmental Compliance and Restoration program. NASA currently estimates that the SSP has contributed to environmental contamination at 94 of 163 sites and that the agency’s total environmental clean-up liability is about $1 billion over several decades. Agency officials maintain that they are unable to separate the cost of SSP environmental clean-up from the total estimate of the agency’s environmental liability because, in most instances, NASA is unable to differentiate between clean-up associated with the SSP and clean-up associated with legacy programs such as Apollo. Agency officials indicate that historically NASA has spent about $51 million annually on environmental compliance. The Center Management and Operations (CM&O) budget request is part of the Cross-Agency Support appropriation account that funds the maintenance of facilities. Funding for the property disposal offices at the centers that will physically dispose of excess SSP personal property and funding to maintain real property facilities is included in the CM&O line. The SSP is supposed to provide funding to these offices for any level of SSP property disposal above their normal level of activity, e.g., 62,994 pieces of property in fiscal year 2007. The sheer volume of SSP disposal activity, hundreds of thousands of line items of personal property, will likely consume nearly the full attention of these offices during the time frame of the SSP transition and retirement. Consequently, the baseline CM&O funding will be applied primarily to SSP transition and retirement activities. In terms of facilities maintenance, centers and programs maintain a tenant/landlord-like relationship wherein programs such as the SSP and Constellation in effect pay the centers for the use of facilities, except where specific facilities are entirely program funded.
Any facility maintenance costs beyond those covered by the lease-type arrangements between the centers and the programs are offset by the CM&O budget. NASA plans to eventually demolish excess SSP facilities. According to agency officials, NASA plans to fund the demolition of excess real property within the Strategic Institutional Investments line within the Cross-Agency Support appropriation account at a level of about $15 million annually. The officials noted that because NASA has already scheduled demolition activities through 2015, NASA would probably not request funds to demolish excess SSP facilities until after fiscal year 2015. However, facility demolition post retirement of the Space Shuttle will occur when needed. NASA plans to offset some transition costs by utilizing an exchange/sale authority that enables federal agencies to exchange or sell non-excess, non-surplus personal property and apply the proceeds toward acquiring similar replacement property. This authority may also enable agencies to reduce certain costs, such as storage and administrative costs associated with holding the property and processing it through the normal disposal process. NASA intends to use proceeds from the exchange/sale of SSP personal property to offset the cost of acquiring replacement Constellation hardware. According to NASA, it will use the General Services Administration (GSA) to conduct Federal Asset Sales of all personal property located at or near NASA Centers and the Defense Contract Management Agency for the disposition of personal property located at vendor facilities. According to agency officials, however, NASA has not prepared an estimate of anticipated exchange/sale revenue.
The officials indicated that proceeds received from the exchange/sale of SSP property will be transferred initially to an existing budget clearing account, wherein the agency will move the funds to a lower-level direct budget work breakdown structure element to supplement the acquisition of replacement property by the Constellation Program. SSP transition and retirement is an immense undertaking involving numerous actors across government and the aerospace industry. NASA faces great challenges in completing all planned efforts between now and the end of 2010. Effective implementation of these efforts requires careful planning to safely complete the ISS and repair the Hubble Space Telescope while expeditiously freeing SSP funding and facilities for the Constellation program. NASA is still in the process of developing the required plans. Incomplete planning, however, does not preclude the agency from providing the Congress with a more informed basis for decision making. Indeed, NASA’s strategic decision to delay submission of an estimate until planning is near complete has placed the Congress at a knowledge deficit relative to available information when considering the agency’s total funding needs. In the current budget environment, in which needs outpace available funding, it is imperative that NASA provide the Congress with the best information available, even if that information is incomplete or subject to change. To provide congressional decision makers with a more transparent assessment of funding needs for the SSP’s property transition and retirement activities, we are recommending that the NASA Administrator direct the Space Operations Mission Directorate to include in NASA’s fiscal year 2010 and future budget requests the agency’s best estimates of the total direct and indirect costs associated with transition and retirement of space shuttle property, including estimates of potential exchange/sale revenue. 
These estimates should include but not be limited to those costs borne directly by the SSP; those funds requested under Cross-Agency Support that will be used to support property transition and retirement activities, such as funds requested to demolish excess facilities and buildings and funds requested for environmental compliance and remediation; and the potential proceeds from exchange/sales of excess space shuttle property. NASA’s fiscal year 2010 and future budget requests should also identify all required transition and retirement activities which NASA has identified but not yet included in cost estimates and report NASA’s progress in completing SSP transition and retirement activities. In written comments on a draft of this report (see app. II), NASA concurred with our recommendation. NASA acknowledged that, thus far, it had not included estimates of the full scope and cost of Space Shuttle transition and retirement in any of its budget requests and that the agency is still in the process of finalizing the scope and cost of Space Shuttle transition and retirement activities. NASA stated that it expects estimates for fiscal year 2011 and beyond to be sufficiently mature to include in the President’s budget proposal for 2010 and that the agency intends, subject to approval by the Office of Management and Budget, to include the estimates in its fiscal year 2010 budget request. NASA also acknowledged the need to provide the Congress estimates of anticipated revenue from exchange sales but noted that, even taking into account the large amount of personal property to be disposed after fiscal year 2010, the agency does not expect large amounts of revenue from exchange sales. NASA also stated that the SSP was only one contributing factor to specific sites requiring environmental remediation and that NASA consolidates its budget for environmental remediation.
This report recognizes that legacy programs contributed to environmental contamination and that NASA has a consolidated budget for environmental remediation apart from individual programs. Nevertheless, environmental remediation of SSP sites represents a substantial portion of the total costs associated with transition and retirement of the SSP. Separately, NASA provided technical comments which have been addressed in the report as appropriate. We are sending copies of the report to NASA’s Administrator and interested congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. Should you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-4841 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff that made key contributions to this report are listed in appendix III. To assess National Aeronautics and Space Administration’s (NASA) challenges in transitioning and retiring the Space Shuttle Program’s (SSP) assets and facilities and to determine if the cost of these efforts is transparent in NASA’s budget requests, we obtained and reviewed NASA documents including the Human Space Flight Transition Plan, Space Shuttle Program Transition Management Plan, Space Shuttle Program Transition and Retirement Requirements, and the Space Shuttle Program Risk Management Plan. We also examined NASA’s contract documentation, budget requests, and NASA’s 2009 Planning, Programming, Budgeting and Execution Guidance. 
We physically inspected property and interviewed and received detailed briefings from NASA and contractor transition management officials at NASA Headquarters in Washington, D.C.; the Kennedy Space Center in Florida; the Johnson Space Center in Houston, Texas; the Marshall Space Flight Center in Huntsville, Alabama; the Stennis Space Center in Mississippi; and the Michoud Assembly Facility in New Orleans, Louisiana. We also attended NASA’s Transition Quarterly Program Manager’s Review at the Stennis Space Center. We discussed government property disposal policies and practices with General Services Administration officials in Washington, D.C., and Defense Contract Management Agency officials at the Kennedy Space Center. In addition, we held discussions with Congressional Research Service staff members on their prior and ongoing work related to NASA’s transition effort. Furthermore, we met with NASA’s Office of Inspector General to discuss its report on the Space Shuttle Program’s transition and retirement and reviewed previous GAO testimonies and reports related to NASA’s transition effort. We conducted this performance audit from February 2008 to August 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Jim Morrison, Assistant Director; William C. Allbritton; Helena Brink; Greg Campbell; Sylvia Schatz; John S. Warren; and Alyssa Weir made key contributions to this report.
The Space Shuttle Program (SSP) is scheduled to retire in 2010, and the transition and retirement of its facilities and assets will be an immense undertaking involving approximately 654 facilities worth an estimated $5.7 billion and equipment with an estimated value of more than $12 billion. NASA plans to retire the SSP in 2010 to make resources available for the Constellation program, which is to produce the next generation of space vehicles by 2015. Many of the SSP's resources are expected to transition to Constellation while others will be dispositioned or preserved for their historic value. The Consolidated Appropriations Act, 2008 directed GAO to assess NASA's plans and progress in transitioning and retiring the SSP's facilities and equipment. More specifically, GAO examined (1) the challenges NASA faces in defining the scope and costs of transition and retirement activities, and (2) whether the cost of these efforts is transparent in NASA's budget requests. To address these objectives, GAO analyzed SSP plans, budget guidance, and other documents, and interviewed relevant government officials and contractors. The National Aeronautics and Space Administration (NASA) faces disparate challenges defining the scope and cost of SSP transition and retirement activities. For example, because the Constellation program is still finalizing its requirements, the agency does not yet know what SSP property it needs to retain or the full cost of the transition effort. In addition, NASA faces other challenges that hamper its efforts to manage the transition and develop firm estimates of SSP transition and retirement scope and costs. For example, NASA has not developed final plans and/or cost estimates for making artifacts-- including the orbiters Atlantis, Discovery, and Endeavour--safe for public display. The total cost of SSP transition and retirement is not transparent in NASA's current budget request and is not expected to be reflected in its fiscal year 2010 budget request. 
This is due in part to delays in estimating costs, but also to where costs are being reflected. For example, although SSP's direct transition and retirement costs are identified in the SSP budget line, indirect costs related to environmental clean-up and restoration, maintenance of required real property facilities during the gap in human spaceflight, and demolition of excess facilities are not. In addition, NASA plans to offset some transition costs by utilizing an "exchange/sale" authority that allows executive agencies to exchange or sell non-excess, non-surplus personal property and apply the proceeds toward acquiring similar replacement property.
In September and October 2001, at least seven envelopes containing significant quantities of B. anthracis spores were mailed through the U.S. postal system to two senators at their congressional offices in the District of Columbia and to media organizations in New York City and Boca Raton, Florida. According to the FBI, the evidence supports the conclusion that the mail attacks occurred on two separate occasions. The two letters of the first attack were postmarked September 18, 2001, and sent to NBC News and the New York Post, both in New York City. Three weeks later, two letters postmarked October 9, 2001, were mailed to two senators—Thomas Daschle and Patrick Leahy—at their Washington, D.C., offices. Other letters were sent to ABC, CBS, and American Media, Inc. Hard evidence of the attacks surfaced on October 3, 2001, when Robert Stevens, an American Media Inc. employee who worked in Boca Raton, Florida, was diagnosed as having contracted inhalational anthrax, from which he later died. However, because a contaminated envelope or package was not recovered in Florida, the agencies could not initially establish how the B. anthracis spores were delivered. According to the Postal Service, the combination of the Florida incident and the opening of the letter to Senator Tom Daschle on October 15 established the link to the U.S. mail system. At least 22 victims contracted anthrax as a result of the mailings. Eleven individuals developed inhalational anthrax, and another 11 developed cutaneous infections. Five of the inhalational anthrax victims died from their infections. The attack highlighted the need for enhanced capabilities for full forensic exploitation and interpretation of microbial evidence from acts of bioterrorism. Ideally, forensic evidence obtained in an investigation is sufficient to support conclusions about the culpability of a group, an individual, or the source of material used in such an act. 
Forensic evidence is used to support conclusions by classifying evidence into one of several categories that distinguish possible sources from one another. While classification does not unequivocally demonstrate a connection with an individual or a single source, it can be used to reduce the number of possible sources and thus can provide important leads in an investigation. The development and application of microbial forensics was essential to the FBI’s scientific investigation, which relied heavily on genetics and comparative genomics to classify the spore materials used in the attack, reduce the number of possible sources and suspects, and provide investigative leads. In fact, according to the NAS, this investigation accelerated the development of the then nascent field of microbial forensics. The FBI’s investigation, assisted by government, university, and commercial laboratories, employed myriad traditional and novel investigative and scientific methods. The scientific methods involved efforts to develop the physical, chemical, genetic, and forensic profiles of the anthrax spores and the letters and envelopes used in the attacks so as to identify the source of the spores. The FBI faced many difficult and complex scientific challenges over the course of this investigation, according to the NAS. New microbial forensic methods were developed and implemented over several years, and some of them provided valuable evidence and significant leads in the case. For example, according to the FBI, new methods to determine the source of the growth media for the mailed spores were inconclusive, while the use of the genetic mutations provided an investigative lead. By October 2001, the Centers for Disease Control and Prevention (CDC) had identified the microorganism used in the attack as Bacillus anthracis (B. anthracis). 
This was a key step in the classification of the microorganism used in the attack letters and was one of the first scientific findings that allowed the FBI to begin to reduce the number of possible sources of the spores. B. anthracis is a gram-positive, rod-shaped bacterium that causes the disease anthrax. It is a member of the larger genus Bacillus that includes other commonly found species, such as B. cereus, B. subtilis, and B. thuringiensis. B. anthracis, a species of Bacillus that can be found on all continents except Antarctica, typically shows little genetic variation among isolates. However, during the investigation scientific methods were being developed that allowed scientists to find some genetic differences among natural isolates of B. anthracis. Applying these methods allowed the FBI to refine the classification of the spores used in the attack and further reduce the number of possible sources of the spores. Even in the most homogeneous species, some differences in genome sequences are usual among populations. Although few in number, these differences are sufficient to characterize subgroups, or “strains.” Scientists classified the strain of B. anthracis used in the attack letters as the Ames strain. In fact, this scientific evidence allowed the FBI to focus its investigation on the limited number of laboratories that had had access to the Ames strain before the attacks (see figure 1). While classifying the spore material as the Ames strain was instrumental in reducing the number of possible sources, it was not sufficient by itself to definitively identify the source of the material used in the attack as a single laboratory, flask, or person. The FBI then sought to identify additional characteristics of the spores used in the attack that could further discriminate between possible sources of the Ames strain. 
Scientists from the Department of Defense (DOD) tested samples of the spore materials found in the letters and identified several morphological variants (or morphs). A laboratory technician who had grown (cultured) the spores from the letters over an extended period observed that a small percentage of the colonies differed in texture, color, and growth patterns from those typical for the Ames strain of B. anthracis, referred to in figure 2 as the “wild type.” In an effort to identify the source of the letters, investigators and FBI scientists began to evaluate whether they could first identify and characterize these morphs genetically and then determine whether any of them were present in the repository of Ames samples. This involved genome sequencing to identify whether specific deoxyribonucleic acid (DNA) sequences underlay the morphs. Eventually, as shown in figure 2, the morphs were associated with several types of genetic mutations: duplications, single nucleotide polymorphisms (SNP), and deletions, referred to as INDELS. Afterward, over several years, outside contractors’ laboratories developed and validated several genetic tests to analyze the B. anthracis samples for the presence of certain genetic mutations. Specifically, the testing revealed the presence or absence in a sample of a specific DNA sequence (that is, the genetic mutation) associated with a given morph. The FBI contractors generally referred to the tests they developed as A1, A3, D, and E. Commonwealth Biotechnologies (CBI) developed the two A tests (A1 and A3), which targeted two different DNA sequences; and the Institute for Genomic Research (TIGR) developed the E test, targeting another DNA sequence. However, unlike the others, the Illinois Institute of Technology Research Institute (IITRI) and Midwest Research Institute (MRI) both developed a test targeting the same DNA sequence. For clarity, we refer to the IITRI-developed test as D-1 and the MRI-developed test as D-2. 
Genetic test A1: detects the presence of a specific duplicated DNA sequence associated with morph A; Genetic test A3: detects a different duplicated DNA sequence from that targeted by A1 but also associated with morph A; Genetic test D-1: detects the presence of a specific deleted DNA sequence associated with morph D; Genetic test D-2: detects the presence of the same deleted DNA sequence associated with morph D as that targeted by the D-1 test; and Genetic test E: detects a deleted DNA sequence associated with morph E. In 2002, the FBI began collecting samples from laboratories in possession of the Ames strain to compare them with the material used in the attack. A grand jury issued subpoenas to 16 domestic laboratories and the FBI requested submissions from 3 foreign laboratories that investigators had determined possessed the Ames strain. The subpoenas required each laboratory to identify and submit two representative samples from each distinct stock of the Ames strain it held. The subpoena included instructions to the laboratories on how to identify, select, and submit samples to the FBI. Laboratories were required to ship sample submissions to DOD scientists at the U.S. Army Medical Research Institute of Infectious Diseases (USAMRIID) for preparation and entry into the FBI repository of Ames samples. In addition to the samples submitted in response to the subpoena, searches were conducted at three domestic laboratories to ensure that samples were taken from each stock of Ames strain in those facilities. The FBI assembled a repository of 1,070 Ames strain samples, of which 1,059 were viable. From 2004 through 2007, each of the 1,059 viable repository submissions was compared to the evidentiary material using the five genetic tests (see figure 3). The results of the genetic testing indicated that only 8 of the 1,059 FBI repository Ames samples tested positive for the presence of the four genetic mutations originally found in the anthrax letter evidence. 
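The screening logic just described—retaining only those repository samples that test positive for every targeted mutation—can be sketched as a simple filter. This is an illustration only; the sample identifiers and results below are invented and are not FBI repository data (tests D-1 and D-2 targeted the same mutation, so a single "D" result stands in for both):

```python
# Hypothetical sketch of the repository screening logic: a sample is
# retained only if it tests positive for all four targeted mutations.
# Sample IDs and results are invented for illustration.

# Each repository sample maps to its results on the mutation tests
# (True = mutation detected).
repository = {
    "sample-001": {"A1": True,  "A3": True,  "D": True,  "E": True},
    "sample-002": {"A1": True,  "A3": False, "D": True,  "E": True},
    "sample-003": {"A1": False, "A3": False, "D": False, "E": False},
}

def matches_evidence(results):
    """A sample matches only if every targeted mutation is present."""
    return all(results[test] for test in ("A1", "A3", "D", "E"))

matching = [sid for sid, res in repository.items() if matches_evidence(res)]
print(matching)  # only sample-001 carries all four mutations
```

In the actual investigation this conjunctive criterion is what reduced 1,059 viable samples to 8 candidates, since a single negative result on any of the four mutations eliminated a sample from the matching set.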
Using submission records, investigators concluded that these 8 samples were derived from a single source—a flask identified as RMR-1029. According to the FBI, this information constituted a groundbreaking development in the investigation. It allowed the investigators to reduce drastically the number of possible suspects, because only very few individuals had ever had access to this specific flask. Armed with this new information obtained from the scientific evidence, the task force focused its investigation on researchers who had had access to the laboratory at USAMRIID where RMR-1029 was stored. In 2008, the FBI sought to conduct statistical analyses in order to determine (1) the probative value of genetic markers found in a sample, and (2) possible inferences regarding the relationships of similar samples. A contractor submitted the Final Statistical Analyses Report in October 2008. In February 2010, the FBI closed the case, concluding that a scientist at USAMRIID had perpetrated the attack alone. Neither the case nor the totality of the evidence, including the scientific evidence that provided the FBI with valuable leads, was brought to trial in a court of law. The alleged perpetrator of the attack died on July 29, 2008, from an overdose of over-the-counter medication. At the start of the investigation no standards or guidelines existed for verifying and validating microbial forensic methods, including the polymerase chain reaction (PCR)-based tests that were eventually used to identify the genetic mutations in the repository samples. The first contractor, Commonwealth Biotechnologies (CBI), an established forensics laboratory, had begun developing the A1 and A3 genetic tests in 2002. A CBI official stated that it had relied upon the National Institute of Standards and Technology (NIST) guidance on the validation of methods for detecting human DNA, and also on the DNA Advisory Board standard for forensics. 
For the remaining three contractors, the Illinois Institute of Technology Research Institute (IITRI), Midwest Research Institute (MRI), and The Institute for Genomic Research (TIGR), the FBI provided “Quality Assurance Guidelines for Laboratories Performing Microbial Forensic Work”—guidelines that the FBI’s Scientific Working Group on Microbial Genetics and Forensics (SWGMGF) developed and published in October 2003. These guidelines defined validation as a process by which a test procedure is evaluated to determine its efficacy and reliability for analysis. Verifying and validating a test method provides a level of confidence in the ability of the test (the measurement tool) to accurately identify the properties of interest in samples that are to be analyzed. Verification confirms by objective evidence, from laboratory experiments, that the given test meets the user’s specific requirements, such as criteria for accuracy. If the verification testing were not to produce consistent results, then the scientist or the laboratory would have to return to the optimization phase to further refine the method and materials and then revise the test’s standard operating procedure (SOP) accordingly. Verification of the acceptance criteria must include repeated testing to account for measurement uncertainty, and confidence in performance statistics should be reported. Depending on the intended use of a test, sensitivity, specificity, limit of detection (LOD), reproducibility, bias, and precision may all be measures of performance (performance statistics) that should be evaluated. The type of test (qualitative, quantitative, or semi-quantitative) may also determine which of these performance parameters is to be evaluated. The testing protocol and materials, including quantities that optimize test performance, are recorded in a test’s SOP. 
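For a qualitative (positive/negative) test, the core performance statistics named above are computed from counts of correct and incorrect calls on samples of known status. A minimal sketch of the generic formulas follows; the counts are illustrative and are not drawn from the contractors' actual verification studies:

```python
# Generic performance statistics for a qualitative test, computed from
# verification results on samples of known status. The counts used in
# the example are hypothetical.

def performance(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from a 2x2 confusion table.

    tp/fn: known-positive samples called positive/negative.
    tn/fp: known-negative samples called negative/positive.
    """
    sensitivity = tp / (tp + fn)  # true positives among known positives
    specificity = tn / (tn + fp)  # true negatives among known negatives
    return sensitivity, specificity

# Example: 48 of 50 known-positive and 60 of 60 known-negative samples
# called correctly.
sens, spec = performance(tp=48, fn=2, tn=60, fp=0)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```

As the guidelines note, single point estimates like these are insufficient on their own; repeated testing is needed so that a confidence measure can accompany each statistic.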
Validation confirms, by examination from laboratory experiments and the provision of objective evidence, that the particular requirements for a specific intended use are fulfilled. Successful validation offers some assurance that a given genetic test is sufficiently robust to provide reproducible results, regardless of the practitioner, agency, contractor, or laboratory applying it to a sample. Validation is frequently used to connote confidence, but it may also be thought of as defining the limitations of a method. Studies are conducted that enable the estimation of the limits of the procedures and the measurements of the test. To the extent possible, the validation of a method should mimic “real world” conditions. The limits of the method must be known, demonstrated, and documented. In essence, validation measures the uncertainty in the test output. In 2007 the FBI convened a team of scientists to review selected scientific methods used in the case. Referred to as the AMX Red Team, it was asked to assess whether the science used was sound and to consider what additional tests might be performed to benefit the investigation. The team, finding no shortfalls or deficiencies in the basic methodologies it reviewed, concluded that the “genetic signatures correlating with specific morphs were valid tools for eliminating those repository samples not closely related to the spores used in the attack.” However, the team also stated that the extent of research and development of the genetic tests at the date of its review was insufficient to determine whether the presence or absence of one or several of the morphs in a sample was associated with the evidence, was merely characteristic of normal culture practices, or possibly was affected by the genetic tests’ sensitivity of detection. The team recommended additional studies to characterize the genetic markers as a function of growth conditions, including the influence of growth time, growth media, and temperature. 
It also recommended additional evaluation of the sensitivity of detection of each genetic test to ensure a reliable interpretation of analyses. In 2008, the FBI asked the National Research Council (NRC) of the National Academy of Sciences (NAS) to review the scientific approaches it had used to support its conclusions. In 2011, the NAS issued its report, concluding that “it is not possible to reach a definitive conclusion about the origins of the B. anthracis in the mailing based on the available scientific evidence alone.” Additionally, the report included the following findings related to the development and validation of the genetic tests: “Specific molecular assays were developed for some of the B. anthracis Ames genotypes (those designated A1, A3, D, and E) found in the letters. These assays provided a useful approach for assessing possible relationships among the populations of B. anthracis spores in the letters and samples subsequently collected for the FBIR. . . . However, more could have been done to determine the performance characteristics of these genetic tests. In addition, the assays did not measure the relative abundance of the variant morphotype mutations, which might have been valuable and could be important in future investigations. . . .” “The development and validation of the variant morphotype mutation assays took a long time and slowed the investigation. The committee recognizes that the genomic science used to analyze the forensic markers identified in the colony morphotypes was a large-scale endeavor and required the application of emerging science and technology. 
Although the committee lauds and supports the effort dedicated to the development of well- validated assays and procedures, looking toward the future, these processes need to be more efficient.” Additionally, the NAS report included the following findings related to the statistical approach taken to quantify the significance of finding the genetic markers in a small number of repository samples: “The results of the genetic analyses of the repository samples were consistent with the finding that the spores in the attack letters were derived from RMR-1029, but the analyses did not definitively demonstrate such a relationship.” . . . . . “Some of the mutations identified in the spores of the attack letters and detected in RMR-1029 might have arisen by parallel evolution rather than by derivation from RMR-1029. This possible explanation of genetic similarity between spores in the letters and in RMR-1029 was not rigorously explored during the course of the investigation, further complicating the interpretation of the apparent association between the B. anthracis genotypes discovered in the attack letters and those found in RMR-1029.” We found that the genetic tests used to screen the FBI’s repository of B. anthracis samples demonstrated through the verification and validation testing that they generally met the FBI’s minimum validation requirements. However, the FBI’s validation procedure did not require and the tests did not demonstrate a level of statistical confidence for interpreting the validation results. Also, tests conducted after validation— although not required by the quality assurance guidelines provided to the contractors—yielded valuable information on the performance characteristics of the genetic tests. 
Therefore, by not having a comprehensive validation approach, or framework, that sets out consistent steps for achieving minimum performance standards, and includes an assessment and measurement of the uncertainty in the test performance (see table 1 for the phases of a validation framework), the FBI cannot have statistical confidence in its validation test results. Knowledge of uncertainty is essential for subsequent statistical analysis that can provide quantitative measures of confidence in conclusions drawn from tests applied to forensic samples. According to DHS, it now validates methods and tests used to support FBI investigations and has an established ISO-accredited program. In our review of scientific literature and agency and industry standards along with guidelines regarding the verification and validation of methods for analyzing both microbial and human DNA, we found that terminology and the extent of verification and validation differed across industries. However, we identified three distinct phases in genetic test development: (1) optimization, (2) verification testing, and (3) validation testing. While the literature and various validation standards and guidelines that we reviewed identify the specific types of tasks for each phase, we found that a clear boundary does not always exist between the first two phases. That is, optimization and verification are sometimes treated as a single continuous process. Further, we found that verification and validation are sometimes used interchangeably to describe the same process. Thus, the process could combine either optimization and verification or optimization and validation. Nevertheless, what is important is that the approach, or framework, to test development generally includes these phases and the associated key tasks, as shown in table 1. 
The FBI set limited performance requirements for the genetic tests and relied on the contractors’ expertise to determine the processes they would use to develop (i.e., optimize and verify) their tests (see table 1 for types of performance parameters to be evaluated). According to the FBI, it provided minimal direction to the four contractors on how they were to develop their genetic tests in order to allow creative development. It stated that, with a few exceptions, it left the development mostly to the contractors who were experienced in developing tests. However, we found that the contractors’ approaches differed in their (1) use of verification and validation guidelines, (2) steps in conducting optimization and verification testing, and (3) interpretation criteria for results generated by the genetic tests. The FBI required that the genetic tests detect the target mutations in an overwhelming background of bacteria consisting of predominantly wild type B. anthracis Ames, which had been found in the evidentiary material (that is, the letters). Further, the FBI specified that sensitivity was to be demonstrated by the LOD—that is, the lowest concentration level that can be reliably detected for a qualitative or quantitative test. The FBI did not require a specific calculation or value for the LOD. According to the FBI, it was looking for the presence or absence of the morphs (genetic mutations) in the repository samples and the LOD was an important factor. Three contractors developed qualitative tests; the fourth developed a semi-quantitative test. In this regard, the FBI wanted to know the lowest concentration at which the genetic tests could detect the presence of a specific genetic mutation in a sample. Specificity was to be demonstrated by the detection of the target in a sample containing an overwhelming background of predominantly B. anthracis Ames. We found that the contractors evaluated other performance parameters at their discretion (see appendix II). 
We also found that standards for the verification and validation of microbial forensics methods did not exist at the start of the investigation, and only limited guidance became available after it had begun. At that time, more was known about verifying and validating human DNA testing methods for forensics than about microbial forensics methods, as reflected in the revised quality assurance guidelines. In addition, we found that the contractors’ disparate experience and the FBI’s minimal instruction to them contributed to the differences in their expectations and approaches. Most of the contractors had worked for other federal agencies whose processes differed, and thus their approaches to optimizing and verifying their genetic tests differed. While most of the contractors had developed methods for the federal government, one contractor said that each of its federal sponsors had its own processes for validation and that it followed a particular agency’s processes when working with it. The contractor also stated that its own internal quality assurance guidelines were more stringent than the SWGMGF guidelines for validation. One contractor was a forensics laboratory that was familiar with analyzing human DNA samples and using associated quality assurance standards, including the DNA Advisory Board standards. Another contractor was engaged in genomic research. Finally, the FBI stated that it was more confident after the two A tests were developed; it had required the contractor for the two A tests to subject material from each polymerase chain reaction (PCR) well to genetic sequence analysis, regardless of the result (positive or negative). Further, it stated that after the first four genetic tests were developed, it had been unsure as to whether it wanted to proceed with the last one—the E test. 
We found that the contractors generally conducted the tasks we identified in table 1 under the first two phases—optimization and verification—to develop the genetic tests and determine their performance, although one did not conduct a verification test. Specifically, CBI conducted an “internal qualification” study that is included in the SWGMGF guidelines. CBI’s qualification study involved multiple experiments using internally blinded samples following an SOP to determine whether (1) the A1 and A3 genetic tests could correctly identify the targeted genetic mutations and (2) the staff involved could be considered qualified to perform the genetic tests. The first appeared to be equivalent to verification testing and the second to proficiency testing. Similarly, both MRI and IITRI conducted internal verification testing and followed it with a qualification test of laboratory personnel. While the distinction between the verification and any qualification testing, and the interpretation of and expectations for each, were not always explicit in the documentation we were provided, we found that both were intended to precede the validation testing. However, TIGR—the last contractor to develop its genetic test—did not conduct the equivalent of either a verification or a qualification study. The FBI indicated that it believed that the contractors had conducted verification testing but acknowledged that it was possible that one had not been conducted for the last genetic test that was developed. Thus, the verification testing was not consistent for all the tests—with one relying solely on the validation testing to determine whether it met the FBI’s requirements and also was fit for use on the repository samples. 
We also found that there was no clear rationale for the lack of complementary interpretation criteria for the results generated by the two genetic tests that targeted the D mutation, which proved problematic during the repository screening, after verification and validation had been completed. Each contractor independently developed interpretation criteria for positive, negative, and inconclusive results through laboratory experimentation, which when defined became part of its SOP. Initially, for the A1 and A3 tests interpretation criteria were for a positive or negative result only. For the A1 and A3 tests, validation results were reported as the number of correct positive and negative results for the FBI-provided samples, and excluded blind samples for which, at this stage of the investigation, it was not yet known whether they contained the targeted genetic mutations. For the D-1 and D-2 tests and the one E test, results were reported as the number of correct positive and negative results, detection limit, false positive rate, and inconclusive rate. Criteria for an inconclusive result included several types of occurrences that varied by the particular genetic test. The FBI stated that it reviewed the interpretation criteria for each genetic test. However, after the repository screening, disparate interpretation criteria for the D-1 and D-2 genetic tests determined the samples that had usable test results. Ultimately, contradictory interpretations of the D-1 and D-2 test results were a reason for eliminating the results of the D-1 genetic test’s screening of the repository samples; thus, the D-1 results were not part of the final statistical analyses, according to the NAS report. This issue did not surface during the validation of the genetic tests. We found that (1) the genetic tests used to screen the FBI’s repository of B. 
anthracis samples met the FBI’s validation requirements, (2) the validation tests were not required to and did not demonstrate a level of statistical confidence for interpreting the validation test results, and (3) some information on the sensitivity and specificity of the genetic tests was not characterized until after validation (postvalidation testing). As a result, the performance characteristics of the genetic tests were not fully understood when they were applied to the repository samples, and more could have been done to strengthen the quality of the data and ultimately the validation results. The validation test results showed that the genetic tests met the FBI’s requirements in that they were able to detect the targeted genetic mutation when it was present at a low level (that is, at less than 1 percent in a validation sample containing predominantly B. anthracis Ames), although no measurement of statistical confidence in these results was provided. As shown in table 2, the A3 genetic test had the lowest LOD, at 0.001 percent, while the others ranged from 0.005 percent to 0.01 percent. The validation results led the FBI to determine that there were no false positives for any of the genetic tests, and that the inconclusive rates were 0.12 percent and 0.02 percent for the D-1 and D-2 tests, respectively. During validation, inconclusive rates could not be computed for all the genetic tests. Since the FBI’s requirements stated that the LOD was to be used as a measure of sensitivity, it was an important measure of the performance of a given genetic test. The LOD allows results generated by a genetic test to be interpreted with the known limitations of such data, but it is a difficult quantity to estimate reliably.
Our review of existing and current guidelines for validation suggests that using an appropriate estimate of LOD does provide a reliable measurement of sensitivity, but LOD estimates, like any performance statistic, should be reported with some measure of confidence. For example, the Environmental Protection Agency defines the method detection limit as the “minimum concentration of substance that can be measured and reported with 99-percent confidence that the analyte concentration is greater than zero.” If test sensitivity is an important performance criterion, then both verification and validation procedures for a genetic test should report LOD, along with a measure of confidence. However, the FBI neither required a confidence measure for the LODs of the genetic tests the four contractors performed nor determined one using statistical measurements of confidence. Validation testing should, to the extent possible, simulate the conditions of the intended use of a given genetic test, using known case samples (see tasks under the third phase in the validation framework in table 1). Calculating uncertainties of measurement is also an important task. All steps in validation testing—such as sample collection, sample preparation, transport, storage, and analysis—can introduce stochasticity and increase uncertainty in the test results. Most of these sources of stochasticity will also affect the testing of repository samples. These additional uncertainties can be measured and understood using repeated (replicate) experiments that include all relevant steps (from collection to analysis) on samples with known concentration levels. By designing a validation study with a sufficient number of replicate samples, the FBI could have quantified the level of statistical confidence in the sensitivity and specificity of the tests.
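The EPA-style approach quoted above can be illustrated with a minimal sketch: a method detection limit derived from replicate measurements of a low-level spiked sample, using the standard formula (the Student's t critical value at 99 percent confidence times the replicates' standard deviation). All measurement values below are hypothetical and purely illustrative.

```python
import statistics

def method_detection_limit(replicates, t_crit):
    """EPA-style MDL: the one-sided 99% t critical value (n-1 degrees of
    freedom) times the standard deviation of replicate measurements."""
    return t_crit * statistics.stdev(replicates)

# Hypothetical replicate measurements (percent mutant detected) from a
# low-level spiked sample.
reps = [0.009, 0.011, 0.010, 0.012, 0.008, 0.010, 0.011]

# One-sided 99% t critical value for n = 7 replicates (6 df) is ~3.143.
mdl = method_detection_limit(reps, t_crit=3.143)
```

Reporting an LOD this way ties the detection claim to a stated confidence level rather than to a single observed value.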
While the SWGMGF guidelines did not require them, tests as part of validation that examined stochastic (random) effects of the process would have made it possible to draw more rigorous conclusions, with measures of confidence, regarding the test results for the repository samples. In addition, we found that additional information on the sensitivity and specificity of the PCR-based genetic tests was characterized during postvalidation testing that the FBI’s expert advisers recommended. Our analysis of these postvalidation test results suggests that the negative rates of the genetic tests were high for samples that could be expected to contain the genetic mutations when the sample collection and processing methods required for the repository samples were used, and that there were stochastic (random) effects in the repository screening process. The SWGDAM guidelines do suggest such tests for PCR-based methods, which is important when samples contain low concentrations of the target to be detected. Sampling fluctuations can occur in PCR-based tests. According to the 2004 SWGDAM guidelines, for PCR-based assays, validation studies must address stochastic effects and sensitivity levels. However, according to the FBI, the purpose of the additional testing was to determine whether stochastic sampling error had been introduced into the repository preparation process as instructed in the subpoena. Therefore, the postvalidation tests were to determine whether the procedures by which the repository samples were processed could affect the accuracy of the interpretation of the data. The postvalidation tests were conducted in August and September 2007 under conditions that closely mimicked the intended use for each of the genetic tests. According to the FBI, the screening of the repository samples with the genetic tests was about three-fourths complete when this testing took place.
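The replicate-based approach described above can be made concrete: a Wilson score interval quantifies the statistical confidence in a sensitivity estimate obtained from replicate samples. The counts below are hypothetical; the point is that a plain detection rate conceals how wide the plausible range really is.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical: a genetic test detects the mutation in 57 of 60 replicate
# positive samples. The 95% point estimate alone hides the uncertainty.
lo, hi = wilson_interval(57, 60)
```

With 60 replicates, the 95 percent detection rate carries an interval of roughly 86 to 98 percent, which is exactly the kind of confidence statement the validation testing did not report.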
Specifically, in the postvalidation tests, the contractors applied their genetic tests to replicate samples derived directly from some of the evidentiary material—including flask RMR-1029. The results revealed that the genetic tests did not always detect the genetic mutations in samples that had been derived directly from the evidence and thus were expected to contain all four mutations—a best-case scenario. Our evaluation of measures of the sensitivity and specificity of the genetic tests revealed differences between the validation and postvalidation test results. Regarding sensitivity, under the assumption that undiluted samples from flask RMR-1029 are positive for all four genetic mutations (supported by the preponderance of genetic and non-genetic data), we can estimate the negative rate as the frequency of negative results in replicate tests of undiluted samples from RMR-1029. Validation testing showed that for those results expected to be positive, no negative results were observed at or above the LOD for any of the genetic tests. However, in the postvalidation testing, the negative rates were generally high. As shown in table 3, the negative rates for the postvalidation tests ranged from 0 percent to 43 percent for the undiluted samples from flask RMR-1029. (Appendix III breaks down the results of the replicate testing for each genetic test.) The NAS report stated that the FBI did not address false negative results and inconclusive results, and it was concerned about the restriction of the statistical analyses to the repository samples that had no inconclusive or variant results. Of the two genetic tests that targeted the D mutation, only the results of D-2 were used in the FBI’s analysis of the repository screening—that is, the analysis was restricted to the 947 samples that contained no inconclusive or variant results, which resulted in the exclusion of 112 samples from the analysis.
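The contrast above—zero adverse results observed during validation, but substantial rates in postvalidation replicates—reflects a general statistical point: observing zero events in a limited number of trials does not establish a zero rate. A minimal sketch of the exact upper bound (often approximated by the "rule of three"), with a hypothetical trial count:

```python
def upper_bound_zero_events(n):
    """Exact one-sided 95% upper bound on an event rate when 0 events are
    observed in n independent trials: 1 - 0.05**(1/n), roughly 3/n."""
    return 1 - 0.05 ** (1 / n)

# Hypothetical: 0 false positives (or false negatives) observed in 50
# validation replicates still permits a true rate of up to ~5.8%.
bound = upper_bound_zero_events(50)
```

Reporting such a bound alongside a "no errors observed" validation result would have flagged that rates of a few percent—like those later seen postvalidation—were never excluded by the validation data.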
Thus, the knowledge about sensitivity and specificity obtained by the replicate testing, as well as ensuring that these two genetic tests’ interpretation criteria were complementary, would have been more useful if it had been obtained during the validation process. Regarding measures of specificity, the repeated analysis of the undiluted nonpositive samples during the postvalidation testing showed evidence of a nonzero false positive rate for the D-2 genetic test. As shown in table 4, the 3.3 percent false positive rate for the D-2 genetic test demonstrates the likelihood of a random effect in the postvalidation tests that was not apparent from the validation results. Although not a requirement at the time, repeated testing—such as that conducted postvalidation—would have provided additional information on the performance of the genetic tests. We recognize that neither the FBI nor the SWGMGF guidelines required contractors to conduct replicate tests of case samples to identify the stochastic (or random) effects of the genetic tests when they were used under realistic test conditions to further evaluate the genetic tests’ sensitivity—an important step in validating PCR-based genetic tests. In contrast, the SWGDAM guidelines suggested using experiments to determine the sensitivity of real-time PCR-based tests as a part of validation. Importantly, while the LOD is a critical performance indicator for a genetic test, LOD calculations do not account for the data that PCR-based tests sometimes generate but that are not typical. The FBI also stated that during the development of the genetic tests it was concerned that stochastic effects might be a problem, stating, for example, that it had discussed its concerns with the contractors about evidence growth steps and the possible stochastic effects, that is, in the context of the growth rates of the wild type cells (B.
anthracis Ames) versus the morph cells in culturing, among other things. The postvalidation tests were able to estimate valuable performance statistics of the genetic tests under more realistic testing conditions than the original validation tests. More extensive validation testing could have reduced uncertainties in the testing procedure. For example, the sensitivity of a given genetic test relies on the sampling procedures, the rarity of the targeted genetic mutation in a sample, and other factors that vary by genetic test. Incorporating these types of tests into the validation would have resulted in more information on the uncertainties inherent in the use of the genetic tests and would have been a way to simulate the conditions of their intended use. Future validation efforts would be strengthened by including experiments designed to identify and eliminate likely uncertainties in test performance. The differences we have highlighted regarding the contractors’ approaches to verification and validation indicate that the use of a comprehensive validation framework could help ensure greater consistency. Such a framework would need to specify the level of statistical confidence to be calculated for interpreting validation results before the tests are applied to evidentiary samples. Minimally, the statistical confidence achievable in each test should be estimated during validation. The development of such a framework could be facilitated by DHS’s National Bioforensics Analysis Center (NBFAC), which validates tests used to support FBI bioforensic investigations. According to DHS, NBFAC will take steps to ensure that the results it generates will meet Daubert standards for “appropriate validation” and third-party review and will thus meet admissibility requirements for evidence in federal court proceedings. NBFAC—an ISO 17025 accredited forensic laboratory—is experienced in working with multiple outside laboratories to verify and validate their methods.
It has an established ISO 17025 accredited process. The combination of limited communication among the contractors, varied timing in the validation efforts, uncertainties the FBI faced as the investigation unfolded, and increasing knowledge about the repository samples made it clear, with hindsight, that the contractors’ verification and validation approaches were likely to differ. Thus, in the future, standardizing the approach to verification and validation testing—by means of a validation framework—would be more efficient, especially in clearly communicating expectations to multiple contractors. In contrast to 2001, DHS’s NBFAC now validates assays (or tests) that can be used to support FBI bioforensic attribution investigations. Generally, the NBFAC validation process involves the evaluation of methods transferred from others, such as DOD and academic laboratories, and sometimes the development of a new method. For forensic tests, NBFAC and the FBI are provided with a “validation package” for each test that encompasses data on testing previously conducted during the development stage or before the transfer to the laboratory. According to DHS, developers have to provide information on the performance parameters (e.g., accuracy, LOD, precision) that they have previously verified. Next, NBFAC conducts its own test, evaluation, and validation of the transferred method. When evidence stemming from the use of validated methods is needed in court, it must be defensible by meeting evidentiary standards. Questions may be raised in court about the standards used for the validation of such methods. Results generated by forensic methods, including microbial forensics, must meet a high standard. According to NBFAC, to ensure that results generated by a validated test will meet Daubert standards for “appropriate validation,” the deliverables from the Bioforensics R&D Program include SOPs for the methodologies and technical and peer-reviewed published reports.
Also, quality project performance plans are required of researchers, who must define method performance parameters to provide a baseline for verification and further validation if required by law enforcement. The accreditation of tests requires the demonstration of previously described method parameters in NBFAC laboratories with trained staff, followed by a third-party review of the supporting data, procedures, equipment, and staff training that supports ISO 17025 accreditation. In this context, a method that has been validated elsewhere and then transferred would be evaluated to ensure that NBFAC can successfully use the method as intended in its laboratory. Performance parameters include accuracy, precision, specificity, selectivity, LOD, limit of quantitation, linearity, ruggedness, and robustness. We identified six characteristics of a statistical framework that would strengthen the significance of microbial forensic evidence. When we compared the FBI’s statistical approach to these six characteristics, we found that three could be improved to strengthen the significance of its evidence for future investigations. That is, the FBI (1) could do more to understand the methods and conditions that give rise to the chosen genetic markers, (2) could institute more rigorous controls over sample identification and collection, and (3) could include measures of uncertainty when interpreting the results. We found that the FBI has taken some steps to include such expertise in future investigations by building formal forensic statistical expertise both internally and externally. Although not always possible, an important goal of a microbial forensic investigation is to generate meaningful comparative analyses of evidentiary samples and suspect samples to establish their relatedness or to exclude suspect samples from an investigation.
Statistically meaningful comparative analyses can allow the use of statistical inferences relating to the process used to produce the sample, the provenance of a sample, or the relatedness of samples. The significance of such statistical inference relies on the analyst’s ability to quantify both the confidence in test results and the frequency with which results match. Confidence, in this context, refers to the level of reliability and accuracy investigators assign to the test results obtained from the measurement tools used to identify the properties of interest in the samples. The frequency of the sample properties’ presence, or generation, in a relevant population of possible sources is a measure of how common or rare the properties are and provides context for the probative value of the evidence. According to a 2009 NRC report, a statistical framework is needed to quantify the probative value of forensic evidence in terms of the frequency of that evidence in a population. Formulating an appropriate statistical framework that is adequate for all microbial forensic investigations is not feasible because the diversity of many potential pathogens is unknown or, at best, difficult to describe. For this reason, frameworks must be adapted to the specific circumstances of each case. As shown in table 5, our review of scientific literature in forensic science, statistics, epidemiology, and population genetics identified the six general characteristics that a framework needs for statistically meaningful comparative analyses of the attack material and repository samples for the specific set of circumstances of the FBI’s investigation. First, a definition of what constitutes a matching type should be clearly established. A genetic signature, or a set of genetic markers, can be chosen to establish a genetic type (or genotype) that is used to differentiate the samples.
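The interplay described above—confidence in a match combined with the frequency of the matching properties—is commonly expressed as a likelihood ratio. The sketch below is a hedged illustration with entirely hypothetical numbers, not a representation of any calculation the FBI performed.

```python
def likelihood_ratio(p_match_given_same_source, genotype_frequency):
    """Likelihood ratio for a matching genotype: probability of observing
    the match if the samples share a source, divided by the probability of
    a coincidental match (approximated by the genotype's frequency in the
    relevant source population)."""
    return p_match_given_same_source / genotype_frequency

# Hypothetical numbers: the tests detect a shared genotype with 0.9
# probability when the sources truly match, and the genotype occurs in
# 1 percent of the relevant source population.
lr = likelihood_ratio(0.9, 0.01)
```

A ratio of 90 would mean the match is 90 times more likely under the same-source hypothesis than by coincidence; both inputs require exactly the kinds of validation data and population frequency data the six characteristics call for.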
The genetic signature should be sufficient to identify the target of interest at the resolution needed for an investigation. In this case, the target of interest was the B. anthracis Ames strain, capable of producing spores with a set of specific genetic markers linked to morphs observed after a prolonged period of growth. The requisite resolution was the ability to differentiate among the individual stocks (or collections of organisms) of B. anthracis. Determining that two or more samples have a matching type must take into account the source of the organisms (for example, nature or the laboratory), the stability of genetic markers, storage conditions, and conditions giving rise to the markers. Specific growth or environmental conditions may selectively advantage or disadvantage mutations and affect the stability of genetic markers. Therefore, if the significance of a matching genetic signature is to be understood, the genetic markers should be well characterized, and the conditions giving rise to the presence of markers in a sample should be understood. Second, once the genetic signature has been established and a match has been clearly defined, it is then necessary to identify and define the population of relevant sources that may have the genetic signature in order to understand how common or rare the genetic signature is. This relevant source population is critical in identifying the probative value of any match or nonmatch between samples. In a criminal investigation, a relevant source population may be considered the population of suspects, and it should be defined as specifically as possible to identify the smallest population related to the evidentiary material. The definition of the relevant source population should be based on the population related to characteristics of the evidence and not on characteristics of a suspected source. The relevant source population in this case is all stocks of B. 
anthracis that could have been used to grow the material used in the attack letters. In defining the source population, the structure of the relevant source population of bacteria should be understood. When a population is divided into subgroups that do not mix freely, that population is said to have structure. In this case, the relevant reference population of stocks of B. anthracis was highly structured among the laboratories included in the investigation. The lack of independence between stocks in a structured population affects inferences about the evidentiary material and its most and least likely sources. Third, in order to quantify or estimate how common or rare a genetic signature is in the relevant source population, a database that accurately represents the relevant source population’s genetics should be created. The extent to which the database reflects the population will affect the accuracy of the match probability. The size and quality of the data in the database will affect the power of match probability, determining the potential probative power of the signature for distinguishing one source from another. A large and comprehensive database is the theoretical goal but in most cases may not be possible. However, in this case, the FBI determined that it was possible to identify all sources of the B. anthracis Ames strain, and it set out to create a comprehensive database. For completeness the genetic information in the database should have included samples from all sources of the B. anthracis Ames strain. In such cases, the database should be complete—excluding sources results in underrepresentation—and should avoid duplication (although replication can be beneficial)—unknowingly including sources more than once results in overrepresentation. Methods used to select samples from each stock should be adequate to ensure representation of the organisms within each stock. 
In an ideal situation, the database of genetic information should be constructed to the same quality standards as the actual evidentiary analysis. These quality standards should apply from the selection of samples from stocks through the results of the genetic tests. Fourth, the limitations of the measurement tools used to generate the genetic information in the database should be identified. When quantitative inference is attempted, care must be taken not to overemphasize data; the limits of the methods used to generate the data should be considered. The power and limitations of microbial forensics methods need to be understood through validation. Validation frequently connotes confidence, but it may also be thought of as defining the limitations of a method. This does not mean that a method must be 100 percent accurate to be useful. Studies should allow the estimation of the limits of the measurements. The limits of the methods must be demonstrated and documented for all steps in the process, including sample collection, preservation, extraction, analytical characterization, and data interpretation. Fifth, the choice of statistical methods should be appropriate for the data and should properly account for the mode of inheritance of the genetic markers and any structure in the populations. An important aspect of computing association statistics and probability estimates is properly accounting for the mode of inheritance. Methods appropriate for computing probability estimates and statistical tests of significance differ by the mode of inheritance of the genetic markers. In organisms that reproduce asexually, such as B. anthracis, genetic diversity is driven by mutation processes, not by random mixing. Computing match probabilities using methods that assume independence and random mixing within populations is not appropriate because the genetic variation in such organisms is highly correlated.
In organisms that reproduce asexually, the frequency of a particular genetic type in the population must be determined by direct observation. The frequency of the evidentiary genotype in a relevant source population can be based on counting the number of times the genotype is observed in a reference database. The strength of this approach is affected greatly by the genetic database and whether it has sufficiently sampled relevant populations. Sixth and finally, the interpretation of results should include quantifications of uncertainty. It is crucial to clarify the type of question the analysis is addressing when evaluating the accuracy of a forensic analysis. Although some techniques may be too imprecise to permit the accurate identification of a specific individual, they may still provide useful and accurate information about questions of classification. The interpretation of results will be stronger with the proper use of statistical and probabilistic analyses, but the strengths and weaknesses of any result should be communicated. Results should indicate the uncertainty in the measurements, and studies must be conducted that enable the estimation of those values. We believe that the six general characteristics described above make up a comprehensive statistical framework that could have allowed the FBI to quantify the significance and probative value of the scientific evidence collected in a statistically meaningful way and could have strengthened the evidence it collected. However, we found that at the outset of its investigation, the FBI did not have a comprehensive framework that would allow for statistically meaningful comparative analyses between samples from the attack letters and samples in the FBI repository of B. anthracis Ames strain. Specifically, we found that the FBI’s approach to three of the six characteristics could be improved to strengthen the significance of evidence in future investigations.
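The direct-observation counting approach for asexually reproducing organisms can be sketched minimally. The database contents below are purely illustrative (loosely mirroring the 8-of-947 figure reported elsewhere in this report), and the zero-count bound is the approximate rule-of-three upper limit, not a claim about any actual analysis.

```python
def genotype_frequency(database, genotype):
    """Estimate a genotype's frequency by direct counting—the appropriate
    approach for clonal organisms such as B. anthracis, where independence
    assumptions do not hold. Returns (point estimate, optional 95% upper
    bound used only when the genotype was never observed)."""
    count = sum(1 for sample in database if sample == genotype)
    n = len(database)
    if count == 0:
        # Zero observations: report an approximate 95% upper bound (~3/n).
        return 0.0, 3 / n
    return count / n, None

# Hypothetical database: each entry is a tuple of four marker results
# (1 = positive). Eight samples carry all four markers, as in the 947
# usable repository samples.
db = [(1, 1, 1, 1)] * 8 + [(0, 0, 0, 0)] * 939
freq, _ = genotype_frequency(db, (1, 1, 1, 1))
```

Because the estimate is a raw count, its reliability depends entirely on the completeness and accuracy of the database—the point made under the third characteristic.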
Although the specific genetic mutations used as genetic markers to determine a match or exclusion were adequately characterized, the FBI did not conduct studies to understand the methods and environmental conditions that gave rise to the mutations. The FBI convened a team of scientists in 2007 to review the scientific methods. Finding no shortfalls or deficiencies in the basic methodologies they reviewed, they determined that the usefulness of the genetic markers was sufficient. The team also stated that the extent of research and development of the genetic tests at the date of their review was insufficient to determine whether the presence or absence of one or several of the genetic markers was associated with the evidence, was merely characteristic of normal culture practices, or possibly was affected by the sensitivity of detections of the genetic tests. The team recommended additional studies to characterize the genetic markers as a function of growth conditions, including the influence of growth time, growth media, and temperature. In response to questions from the NAS panel about this recommendation, the FBI stated that it considered such studies academic and did not conduct the recommended research. Consequently, experimental data are missing that would have shown the frequency with which particular genetic mutations occur under growth conditions that could affect their retention or loss. In its report, NAS opined that some of the morphs used as genetic markers might have arisen independently from RMR-1029. According to the report and experts we spoke with, the genetic markers might have had a selective advantage under growth conditions used for large-scale production of spores, such as in a fermenter or in a batch culture. If so, the presence of the genetic markers would be a function of the growth conditions rather than direct derivation from parent material, such as RMR-1029. 
This is problematic for the quantification of the rarity of the results because it is not possible to calculate the probability of two independent cultures having the same genetic markers if either was subjected to growth conditions that provide a selective advantage or disadvantage. Without the experimental data, the usefulness of the genetic markers as an identifying signature to determine a match or exclusion was not fully understood. For example, it is not known whether the genetic markers could have arisen independently. To identify repository samples that received a direct or indirect transfer from the laboratory that possessed RMR-1029 after it was created in 1997, we examined the FBI’s documentation of historic transfer records of B. anthracis Ames strain between laboratories from 1981 through 2001. We supplemented this with information from laboratory officials and researchers we interviewed. Then, we compared the frequency of positive genetic markers in these groups of samples to the 119 samples that we verified were independent of transfers from the laboratory that possessed RMR-1029. Our analysis of repository data found no evidence of independent evolution in three of the four genetic markers (A1, A3, and E). However, we found that repository samples with no direct or indirect relationship to RMR-1029 tested positive for the D genetic marker at rates similar to those of the samples that were submitted from laboratories with direct transfers from the laboratory that possessed RMR-1029. As shown in table 6, the D genetic marker was detected in about 6.6 percent of the repository samples submitted from laboratories with direct transfers from the laboratory that possessed RMR-1029, compared to 6.7 percent of the samples that were independent of the laboratory that possessed RMR-1029.
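A comparison of two rates like those above (6.6 percent versus 6.7 percent) can be tested formally with Fisher's exact test. The sketch below uses hypothetical counts chosen only to be consistent with the reported rates—the 119 independent samples come from the text, but the direct-transfer denominator is an assumption, and this is not a reconstruction of any analysis the FBI or GAO performed.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of every table with the same
    margins that is no more likely than the observed one."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def table_prob(x):
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = table_prob(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    return sum(table_prob(x) for x in range(lo, hi + 1)
               if table_prob(x) <= p_obs * (1 + 1e-9))

# Hypothetical counts consistent with the rates in table 6: 8 of 119
# independent samples positive (~6.7 percent) versus 55 of 833
# direct-transfer samples positive (~6.6 percent).
p_value = fisher_exact_two_sided(8, 111, 55, 778)
```

Under these assumed counts the p-value is near 1, meaning the data give no reason to think the D marker is any rarer among samples independent of RMR-1029—consistent with the concern that the marker may have arisen independently.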
Additionally, the NAS report found that in repository samples associated with experiments conducted before the 2001 attacks, the D genetic marker was the only marker detected, and it occurred in about 1 percent (3 of 296) of those samples. This provides additional evidence that the D genetic marker may have arisen independently of RMR-1029. The additional studies that were recommended to the FBI but that it did not conduct could have provided the experimental data needed to fully understand the probative value of this genetic marker. Because the FBI adequately identified the relevant source population as all stocks of B. anthracis Ames strain, it significantly reduced the number of possible sources. The NAS report found that the dominant organism in the letters was correctly and efficiently identified as the Ames strain of B. anthracis. The science performed on behalf of the FBI for identifying the Bacillus species and B. anthracis strain was appropriate, was properly executed, and reflected the contemporary state of the art. The correct identification of the specific strain of B. anthracis allowed the FBI to adequately define the relevant source population as stocks of the Ames strain in laboratories that had the Ames strain in their inventories before the attacks. This significantly reduced the number of possible sources. We found that the FBI’s effort to create a comprehensive repository containing samples from all known stocks of the Ames strain of B. anthracis was appropriate for assessing the rarity of the genetic markers in the relevant source population. Its adequacy, however, was affected by incompleteness and inaccuracies in the repository. The NAS report found that the repository was not optimal for a variety of reasons.
It stated, for example, that the instructions in the subpoena issued to laboratories for preparing samples were not precise enough to ensure that they would follow a consistent procedure for producing samples that would be most suitable for later comparisons. Our analysis of FBI documents shows that FBI searches at three specific laboratories identified hundreds of additional relevant stocks that laboratories did not submit to the repository in response to the subpoena. Specifically, we found that the FBI collected about 29 percent of the 1,059 repository samples through these searches. The proportions of samples thus obtained were 34 percent, 96 percent, and 22 percent in these laboratories (see table 7). We were unable to determine how two of the three laboratories identified and selected samples from relevant stocks in response to the subpoena, but we found that individuals at one laboratory differed in interpreting the subpoena’s instructions. Laboratory officials acknowledged differences in interpreting the instructions on how to identify distinct Ames strains of B. anthracis. Identifying the specific stocks to submit in response to the subpoena at that laboratory was left up to the principal investigator because, at that time, no one else actually working with the stocks would have understood what was in them. FBI officials acknowledged that the interpretation of the instructions to determine what strains to submit to the repository varied across laboratories, stating that the subpoena was not as precise as it needed to be. However, they emphasized that every laboratory that submitted samples to the repository was investigated thoroughly and that, when the FBI conducted searches at the three laboratories, those investigations eliminated many laboratories from being suspects. Furthermore, FBI officials told us that the decision to conduct searches at these three laboratories was an investigative decision, not a scientific one. 
The NAS report also raised concerns that the decision to remove samples with inconclusive or variant results contributed to the lack of completeness of the repository data. The report stated that a major concern was the restriction of its statistical analyses to the 947 samples that contained no inconclusive or variant results. Notably, the report showed that 4 of the 112 samples that were disregarded for having a single inconclusive or variant result scored positive for the three remaining genetic tests. In addition, our analysis of FBI documents shows that FBI searches contributed to inaccuracies in the repository by collecting samples from stocks that had already been submitted to the repository. We identified 14 duplicate samples from a search conducted at one laboratory in April 2004. FBI officials stated that they were not concerned about duplicate samples in the repository because duplicate samples may have served other important investigative purposes such as verifying if two samples were related or answering other important questions related to investigative information. They also stated that additional information collected about the samples would allow them to reconcile duplicates. However, our analysis of the FBI repository data indicates that known duplicates were not removed from the repository before the statistical analysis. As a result of these examples of incompleteness and inaccuracies in the repository, a statistically meaningful extrapolation of the statistics and frequencies derived from the repository to the relevant source population was not possible. By instituting more rigorous controls over sample identification and collection for future investigations, the FBI can improve the completeness and accuracy of a repository. The results from statistical analyses conducted in 2008 did not adequately account for the mode of inheritance of the genetic markers, and they added little probative value to the investigation. 
Many of the methods used for the 2008 statistical analyses inappropriately relied on the assumption of independence among the repository samples. For example, the NAS report stated that because the repository samples were not independent, the proportion of samples testing positive for all four genetic markers was not a meaningful estimate of the probability of occurrence. The FBI did not use the results of the statistical analyses and did not quantify the confidence it had in, or the probative value of, the repository results in the conclusions of its final investigative summary. An FBI official stated that the statistical analyses were viewed from an academic standpoint and were not part of the investigation. That official also stated that the results of the statistical analyses did not contradict the conclusions of the investigation. In its final investigative summary, the FBI concluded that only 8 of more than 1,000 samples tested positive for all four genetic markers, but it did not provide any measure of the confidence it had in this conclusion. We found that the genetic tests showed variability in the results for samples selected from the same stock. As we previously indicated in our assessment of the validation of the genetic tests, the additional postvalidation tests conducted in 2007 demonstrated variability in the results of the genetic tests when they were applied to samples under conditions intended to mimic their use on repository samples. Additionally, the two genetic tests for the D marker did not always give the same result for the same sample. An analysis included in the FBI contractor’s Statistical Analysis Report identified 24 repository samples for which the two genetic tests yielded opposite results from the same sample. The NAS report stated that this lack of agreement between the two genetic tests for the D mutation illustrated the differing sensitivities and specificities of the tests. 
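The dependence problem the NAS report describes can be illustrated with a small sketch. When many repository samples are derived copies of a few parent stocks, the naive proportion of samples carrying a marker can differ sharply from the marker's frequency among independent lineages. All counts below are hypothetical, chosen only to show the effect; they are not FBI data.

```python
# Hypothetical illustration (not FBI data): 50 independent parent lineages,
# exactly one of which carries a given marker. That one lineage happens to
# be widely shared, contributing 30 derived samples to the repository,
# while every other lineage contributes a single sample.
lineages = [i == 0 for i in range(50)]              # True = carries the marker
copies = [30 if carries else 1 for carries in lineages]

samples = []
for carries, n in zip(lineages, copies):
    samples.extend([carries] * n)                   # 79 samples in all

naive = sum(samples) / len(samples)                 # treats samples as independent
lineage_level = sum(lineages) / len(lineages)       # one vote per lineage
print(f"naive sample proportion: {naive:.3f}")      # 0.380
print(f"lineage-level frequency: {lineage_level:.3f}")  # 0.020
```

Because shared transfers cluster identical results, the naive proportion here is nearly twenty times the lineage-level frequency, which is why the NAS report cautioned that the proportion of repository samples testing positive was not a meaningful estimate of the probability of occurrence.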
This lack of agreement was also evident in the eight samples that tested positive for all four genetic markers. As shown in figure 4, our analysis of the repository data demonstrated that one of these eight samples also tested negative using the other genetic test for the D marker. Further, our analysis of duplicate samples in the repository showed differences in the results of genetic tests on samples selected from the same stock. As shown in figure 5, only 3 of the 14 duplicate samples we identified showed the same results across the five genetic tests. For example, FBI repository sample number 049-004 tested positive for all five genetic tests while a duplicate sample selected from the same stock (066-044) tested positive for only four of the five genetic tests. In another example, FBI repository sample number 049-016 tested positive for all five genetic tests while the duplicate sample (047-002) tested negative for all five genetic tests. FBI officials stated that these results may have differed for a number of reasons, including uncertainty from the sampling process (sampling error) and uncertainty from the genetic test itself (stochastic error). Each step in the process the FBI used to collect, prepare, and test repository samples could have added uncertainty to the results of the genetic test. As noted previously, before its searches, the FBI relied on laboratory officials to identify and select subsamples of distinct Ames strains for submission to the repository. The NAS report stated that the subpoena’s instructions to laboratories for preparing samples were not precise enough to ensure consistent procedures for producing samples that would be most suitable for later comparisons. For example, the subpoena instructed laboratories to select a representative sample from each stock but did not provide guidance on how many cells or colonies to select. 
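The duplicate-pair comparisons described above can be tallied mechanically. Note an assumption: the report states only that four of sample 066-044's five tests were positive, so which single test was negative is assumed here for illustration.

```python
# Five-test result strings (P = positive, N = negative) for the two duplicate
# pairs discussed above. 049-004 and 049-016 were positive on all five tests;
# 047-002 was negative on all five. The per-test pattern for 066-044 is an
# assumption (the source gives only the count of four positives).
pairs = [
    ("049-004", "PPPPP", "066-044", "PPPPN"),
    ("049-016", "PPPPP", "047-002", "NNNNN"),
]
agreements = []
for a, ra, b, rb in pairs:
    agree = sum(x == y for x, y in zip(ra, rb))     # tests with matching results
    agreements.append(agree)
    print(f"{a} vs {b}: {agree} of 5 tests agree")
```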
Although steps were taken in the genetic tests to standardize the number of cells being tested, the number of initial cells or colonies selected from each stock would have affected the probability of selecting material capable of producing the genetic markers. This is particularly important because the mutations chosen as genetic markers were infrequent in the evidentiary material. For example, we interviewed the scientist who submitted the duplicate samples we identified above as having opposite results (all five negative versus all five positive). He told us that, in the presence of an FBI investigator, he had not followed the subpoena instructions when he selected the sample (047-002) that tested negative for all five genetic markers. In addition to the selection methods we have discussed in this report, the methods used to prepare and test the repository samples could have introduced uncertainty to the results of the genetic tests. The NAS report stated that replication could have been used in the design of the FBI repository to provide measures of the uncertainty of the genetic tests. Although laboratories were required to submit to the repository two samples from each stock, only one of those samples was tested for the genetic signature. Without replication, the FBI was unable to assess uncertainty in the results of the genetic tests in the context of testing actual repository samples. Because the FBI did not include measures of uncertainty when presenting the results of the genetic testing, questions have been raised about samples that tested positive for three or fewer genetic markers. For example, NAS stated that the FBI did not address false negative results and raised concern regarding the restriction of the statistical analyses to the repository samples that contained no inconclusive or variant results. NAS further highlighted 21 samples that contained an inconclusive or variant result and tested positive for 1, 2, or 3 genetic markers. 
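The effect of the number of colonies picked can be quantified with a simple binomial model: if a mutant is present at frequency p in a stock, the chance that a pick of n colonies includes at least one mutant is 1 - (1 - p)^n. The frequency used below is an illustrative assumption, not a figure from the case.

```python
# Probability that a "representative sample" of n colonies contains at least
# one mutant colony when the mutant is present at frequency p in the stock.
def p_detect(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

p = 0.01  # assumed mutant frequency, for illustration only
for n in (1, 10, 100, 1000):
    print(f"n = {n:>4}: P(at least one mutant picked) = {p_detect(p, n):.3f}")
```

At an assumed frequency of 1 percent, picking a single colony captures the mutant only 1 percent of the time, while picking 1,000 colonies makes capture nearly certain, which is why unguided instructions on how many colonies to select could by themselves change a stock's apparent genotype.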
To illustrate the potential effect this uncertainty could have had on the interpretation of the results, we conducted an analysis using the estimates of false negative rates obtained from the additional replicate testing, combined with a sensitivity analysis accounting for the decision to restrict the statistical analyses to the 947 samples that contained no inconclusive or variant results. We computed a range of probabilities, given the observed results of the genetic testing, that each repository sample was selected from a stock that could have produced all four genetic markers. We found an additional 16 repository samples with probabilities that exceeded a 1 percent chance of being selected from a stock that contained all four genetic markers. We determined that 15 of these 16 additional samples were selected from stocks held at the same two laboratories that were the source of one or more of the 8 samples that tested positive for all four genetic markers. The remaining sample identified in our analysis was a sample that we had determined was independent from RMR-1029 and tested positive for the D marker. In addition, this sample was inconclusive for both the A1 and A3 markers and negative for the E marker. We computed a 0 to 19 percent range of probabilities for this sample, the maximum occurring when the model made the assumption that both inconclusive results for A1 and A3 markers were positive. Additionally, the results of the genetic tests for this sample further highlight the importance of including measures of uncertainty. According to the transfer inventory records we reviewed and the laboratory official we interviewed, this sample was selected from a stock that was one of four copies of the same material. As shown in figure 6, the repository samples selected from the remaining three copies tested negative for all five genetic markers. 
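The kind of probability range described above can be sketched with a small Bayesian model: each inconclusive test is treated as either positive or negative, and the minimum and maximum posterior probabilities are reported. All rates and the prior below are illustrative assumptions, not the estimates used in our actual analysis.

```python
from itertools import product

# Illustrative false negative (fn) and false positive (fp) rates per marker;
# the prior is the fraction of analyzed samples positive on all four markers.
# None of these values are the estimates from the actual analysis.
prior = 8 / 947
fn = {"A1": 0.10, "A3": 0.15, "D": 0.20, "E": 0.05}
fp = {"A1": 0.01, "A3": 0.01, "D": 0.02, "E": 0.01}

def posterior(observed):
    """P(stock carries all four markers | observed); observed: marker -> bool."""
    like_src = like_oth = 1.0
    for m, pos in observed.items():
        like_src *= (1 - fn[m]) if pos else fn[m]   # stock carries marker m
        like_oth *= fp[m] if pos else (1 - fp[m])   # stock does not carry it
    num = like_src * prior
    return num / (num + like_oth * (1 - prior))

def posterior_range(observed, inconclusive):
    """Enumerate every positive/negative assignment of inconclusive markers."""
    posts = [posterior({**observed, **dict(zip(inconclusive, bits))})
             for bits in product([True, False], repeat=len(inconclusive))]
    return min(posts), max(posts)

# A sample positive for D, negative for E, inconclusive for A1 and A3:
lo, hi = posterior_range({"D": True, "E": False}, ["A1", "A3"])
print(f"posterior range: {lo:.4f} to {hi:.4f}")
```

Under these assumed rates the posterior swings from well under 1 percent to above 90 percent depending on how the two inconclusive results are resolved, illustrating why reporting a range of probabilities, rather than a single trimmed result, conveys the underlying uncertainty.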
This demonstrates that the genetic tests could have yielded different results for samples selected from the same material and, as the NAS stated, replication could have been used to provide measures of the uncertainty induced by these varying results. The FBI has taken steps to include statistical expertise in future investigations. The NAS report stated that the FBI appeared not to have sought formal statistical expertise early in this investigation and that similar investigations would benefit from including statistical expertise in their design and implementation. It noted that because many inferences depend on the design and analysis of complex data, the FBI should consult with expert statisticians throughout experimental design and planning, sample collection, sample analysis, and data interpretation. Further, the 2009 NRC report on strengthening forensic science in the United States highlighted the importance of statistical and quantitative proficiency for improving forensic science methods. An FBI official told us that since the 2009 NRC report, the FBI has been building formal forensic statistical expertise both internally and externally. For example, he said that the FBI laboratory division had created an internal statistical working group to examine the FBI’s statistical needs in its forensic methods. The group included a professor of statistics visiting for 6 months to examine the statistical questions related to patterns, such as fingerprints, and also other science, such as chemistry and explosives. Additionally, the FBI has established a working relationship with members of the American Statistical Association’s Ad-Hoc Advisory Committee on Forensic Science in order to discuss its statistical capacity. The FBI has also worked with other agencies to identify areas of statistical research needed for future investigations. After the 2001 attack, the FBI did not conduct a lessons learned study but considers the NAS report to be one. 
The NAS report identified some scientific gaps related to the development of genetic tests and statistical analyses. In addition, we identified a key scientific gap that is related to the verification and validation of the genetic tests and the statistical analyses—that is, the significance of using genetic mutations in B. anthracis as genetic markers for analyzing evidentiary samples. DHS has funded some research on this gap, but this research is not yet complete, and it is not yet known whether it will fully address the gap. The FBI has not conducted a formal lessons-learned study of the scientific and technical methods it used in the investigation and thus has not specifically identified any scientific gaps in research related to the validation of genetic tests and statistical approaches. An FBI official stated that such a study was not needed because the 2001 incident was unique and the case is closed. This FBI official also told us that he considered the NAS report to be the lessons-learned study because it had identified several scientific gaps. For example, the NAS report indicated that the investigation lacked (1) a method for interpreting the genetic similarity between the attack spores in the letters and those in RMR-1029 and (2) an experimental design that included statistical input in the early stages of the investigation. Nevertheless, the FBI does not necessarily agree with the scientific gaps that NAS highlighted in that report. However, the FBI stated in 2010 that the active dynamics of the microbial genome for any given species need to be understood—for example, the location on the genome of “hot spots” for mutation and diversity and whether there is a high rate of genetic mobility and change in any given species. Further, according to an FBI official in September 2014, technology has changed since the investigation, and genome sequencing will be used to analyze evidence samples in future investigations. 
In addition to the gaps identified in the NAS report, we identified a key scientific gap that has not been fully addressed. This gap is related to the significance of using genetic mutations as genetic markers for analyzing evidentiary samples to determine their origins. NAS recognized this issue, which is associated with the gaps it identified. With respect to verification and validation, the genetic tests targeted specific DNA sequences of certain genetic mutations in their screening of the repository samples. The FBI used the results of the analysis of the repository screening by those tests to narrow the source of the attack spores. However, during the investigation, it was not known how stable genetic mutations were in a microbial genome or how significant they were as genetic markers. We found that the conditions causing the rise of the genetic mutations in the evidence were not known before or after validation or during the subsequent statistical analysis of the results of the repository screening. During the investigation, it was not known what conditions would have promoted or inhibited the presence of the genetic mutations at detectable levels. Such knowledge would have indicated whether they were associated with the evidence itself or with the culture practices normally used in a laboratory. Although FBI expert advisers recommended experiments, none were conducted at that time to attempt to obtain this information. Such experiments could have helped in understanding the evolution of these particular genetic mutations. DHS has recognized the need for a methodology to determine how a material has been grown and produced and for obtaining information on the biology of agents, including their mutation rates and genome “hotspots” for mutation, so that their “relatedness” can be measured. 
In this context, an expert who reviewed this report stated that computational methods are also needed to reconstruct (or assemble) genome sequencing data so that the relationship between markers that are not independent, as is common in asexually reproducing bacterial genomes, can be inferred. As a result, DHS has funded research that is intended to provide a better understanding of how morphological variants, or mutations, could emerge and evolve in bacterial genomes. Some of the technologies involved in DHS’s research, such as whole genome sequencing, are still evolving. DHS-funded research includes studies of the population genetics of bacterial agents, including B. anthracis, at Northern Arizona University (NAU). This research involves studies of diversity, including mutations, among these agents. DHS’s NBFAC is also studying genome sequencing methods. The purpose of these studies is to develop the capability to perform a metagenomic analysis of an entire sample using a hybrid assembly. According to DHS, the field of “metagenomics” is broad but unified by its focus on a community of genomes rather than individual isolates. Such research is a step in the right direction, since the FBI has indicated that it is likely to use genome sequencing methods in future investigations to analyze evidence. However, since this research is ongoing, it is not clear when it will close the gap or whether it can do so alone. Although we identified several aspects of the FBI’s scientific methods we reviewed that could be improved in a future investigation, we recognize that in 2001, the FBI was faced with an unprecedented case. Determining the source of the spores in the envelopes was complicated by many factors, including the uncertain provenance of samples in the FBI repository, an unknown mutation rate for B. anthracis under laboratory growth conditions, and the performance of the genetic tests under “real-world” conditions. 
The genetic tests were generally verified and validated, and the validation testing demonstrated that they met the FBI’s acceptance criteria, but the lack of a comprehensive approach—that is, a validation framework—allowed for differences in the contractors’ approaches. Further, the results of the postvalidation testing raise questions about whether additional information could have been obtained during verification and validation and, thus, whether the validation testing could have been more rigorous. The use of a standardized approach to verification and validation from the beginning could have more definitively established the performance of all the genetic tests. It could have helped communicate expectations clearly, ensuring confidence in the results generated by any genetic tests developed. DHS could be instrumental in developing a validation framework, and future efforts using a framework could help achieve minimum performance standards during verification and validation, particularly under multiple contracts. Also, incorporating statistical analyses in the framework would allow the calculation of statistical confidence for interpreting the validation testing results. The FBI’s statistical approach to its study design and plan, sample collection and analysis, and interpretation of data and scientific evidence lacked several important characteristics that could have strengthened the significance of that evidence. Although the complexity and novelty of the scientific methods at the time of the FBI’s investigation made it challenging for the FBI to adequately address all these problems, the agency could have improved its approach by including formal statistical expertise early in the investigation and establishing a statistical framework that could identify and account for many of the problems. 
In future investigations, statistical expertise early in the investigation will help identify the importance and role of fully understanding the (1) evolution of the genetic markers, (2) sources of dependence between samples, and (3) uncertainty in the measurement tools used to identify a genetic signature. This expertise could influence an investigation’s methods and strengthen the significance of scientific evidence. A key scientific gap—how stable genetic mutations are in a microbial genome and thus their suitability as genetic markers—remains an issue. Lack of this knowledge has implications for both the development of genetic tests, or other investigative approaches and technologies, and the analysis of the results they generate. For example, how likely it is that the same genetic mutations will arise independently in separate cultures is currently unknown, and so is whether different culture conditions can change the ratio of the mutations significantly enough to provide a negative rather than a positive result. DHS-funded research into the evolutionary behavior of variants in the genome of B. anthracis and other microbial agents and the use of genome sequencing is a step in the right direction because the FBI is planning to use sequencing in future investigations to analyze all the material in evidence samples. However, in determining the significance of using mutations as genetic markers, an understanding is still needed about the stability of genetic mutations. DHS’s ongoing research is likely to take several years and some of the technologies it entails, such as whole genome sequencing, are still evolving. Therefore, it is not clear when and whether this research alone will address this gap. 
To ensure that a structured approach guides the validation of the FBI’s future microbial forensic tests, we recommend that the Director of the Federal Bureau of Investigation work with the Secretary of Homeland Security to develop a verification and validation framework. The framework should be applied at the outset of an investigation involving an intentional release of B. anthracis, or any other microbial pathogen. It should (1) incorporate specific statistical analyses allowing the calculation of statistical confidence for interpreting the results and specifying the need for any additional testing to fully explore uncertainties relative to the type of genetic test being validated and (2) be capable of being applied and adapted to a specific scenario that employs multiple contractors. In addition, we recommend that the Director of the FBI establish a general statistical framework that would require input from statistical experts throughout design and planning, sample collection, sample processing, sample analysis, and data interpretation and that can be applied and adapted to address a specific scenario involving an intentional release of B. anthracis or any other microbial pathogen. We provided a draft of this report to the FBI and DHS for review and comment. The FBI provided written comments, which are reprinted in appendix IV. In its comments, the FBI agreed with our recommendations and stated that it had taken significant steps toward addressing them. In addition, the FBI provided technical comments that we have addressed in the body of our report as appropriate. DHS stated that it had no comments on the draft report. With respect to the first recommendation, the FBI stated that “NBFAC programs have developed analytical capabilities in microbial forensics for numerous biological agents” in “support of investigations of the use or suspected use of biological weapons.” It stated that “these assays are validated and accredited under international standards (ISO17025) . . . 
.” According to the FBI, these capabilities, and those still being developed, “address part 2” of our recommendation “…applied and adapted to a specific scenario…” in as much as they represent capabilities addressing numerous biological agents and toxins. Further, the FBI stated that the NBFAC is pursuing the most current techniques of microbial genetic analyses and that some of these may soon be accredited. The FBI added that it actively participates in the National Strategy for Countering Biological Threats, under which the agency has helped in “Establishing a National level research and development strategy and investment plan for advancing the field of microbial forensics.” Further, it stated that it is helping to maintain “the National Biological Forensics Analysis Center (NBFAC) as the Nation’s lead Federal facility for forensic analysis of biological material in support of law enforcement investigations,” which advances the field of microbial forensics through scientific workshops sponsored by the FBI. According to the FBI, such workshops have included work on interpreting microbial genetic data acquired by next generation sequencing platforms. The FBI stated that this work has included “statistical analyses of the confidence in base calling” using these platforms and “bioinformatic software.” We recognize the importance of the FBI’s active participation in microbial forensic research and scientific workshops that address key issues related to the performance of emerging microbial forensic tests. We also recognize that establishing the error rates of genome sequencing platforms, which the FBI stated it may use in future investigations, would be an important step in verification and validation. Further, as we state in this report, developing a framework for verification and validation when employing multiple contractors in the same investigation could help standardize the process with minimum performance standards. 
Thus, we believe that the FBI’s continued work with DHS could help ensure the development of such a framework and improve its approaches to future investigations. A written plan could assist in the development of the framework. With respect to the second recommendation, the FBI stated that scientists from the FBI and NBFAC participate in the Food and Drug Administration’s related efforts, the “Global Microbial Identifier” symposiums, “whose activities include statistical analyses for interpreting microbial genetic data in investigations of food-borne illness.” We recognize the importance of the FBI’s continued participation in research on the statistical interpretation of microbial genetic data. The evidence we present in this report suggests that if statistical expertise had been included early in the FBI’s investigation, it could have improved the significance of the collected microbial forensic evidence. By establishing a general statistical framework, the FBI will be able to provide some assurance that input from statistical experts will be included in future investigations so that they will benefit from statistical expertise. Developing such a framework could also be facilitated by a written plan. We believe that the actions that the FBI states it has taken are a step in the right direction toward addressing our two recommendations. We are sending copies of the report to the FBI and DHS, appropriate congressional committees, and other interested parties. The report is also available at no charge on the GAO website at www.gao.gov. If you or your staff have any questions about this report, please contact Timothy M. Persons, Ph.D. at (202) 512-6412 or [email protected]. Contact points for our Office of Congressional Relations and Office of Public Affairs appear on the last page of this report. Key contributors to the report are listed in appendix V. 
The scope of our work was limited to a review of the scientific methods employed to validate the genetic tests used to screen the FBI’s repository of Ames B. anthracis samples, the procedures used to identify and collect samples of Ames B. anthracis in the creation of the FBI’s repository, and the statistical analyses and interpretation of the results of the genetic tests. We did not address any other scientific methods or any of the traditional investigative techniques used to support the FBI’s conclusions in this case, and we take no position on the FBI’s conclusions when it closed its investigation in 2010. Our objective for this performance audit was to answer the following questions:

1. To what extent were the genetic assays used to screen the FBI repository of Ames samples scientifically verified and validated?

2. What are the characteristics of an adequate statistical approach for analyzing the repository samples, and to what extent was the statistical approach used adequate? If not adequate, how could this approach be improved for future efforts?

3. What remaining scientific concerns and uncertainties, if any, regarding the validation of genetic assays and statistical approaches will need to be addressed in future analyses? What additional research, if any, would be helpful in resolving such scientific uncertainties in any future investigation?

To determine the extent to which the genetic tests were verified and validated, we collected and reviewed data regarding (1) the FBI’s requirements for validation, (2) documentation from the FBI’s contractors on their verification and validation testing, and (3) documentation from the FBI on the contractors’ efforts to develop their genetic tests as well as results from the validation testing. We also reviewed related scientific literature and agency and industry standards and guidelines regarding the verification and validation of analytical methods, including real-time PCR-based tests for detecting B. anthracis, among others. We developed criteria for assessing the extent of the validation. We used references from agency standards, reports, and guidelines for validation and from scientific literature to identify the essential phases in an approach, or framework, for developing genetic tests. We compared what the FBI and its contractors had done to verify and validate the genetic tests against these phases and tasks. Specifically, we reviewed the FBI’s and its contractors’ laboratory documentation to determine for each genetic test (1) the steps each took to verify the genetic tests’ performance and conduct the FBI-administered validation, (2) whether the validation test results met the FBI’s acceptance criteria and minimum requirements, and (3) whether the FBI’s postvalidation testing of the genetic tests on the flask RMR-1029 provided further insights into the sensitivity and specificity of the genetic tests beyond those obtained by the validation. We also determined whether the processes the contractors’ laboratories followed for verifying and validating their genetic tests were consistent. Finally, we reviewed the NAS report’s observations on the performance of the genetic tests in screening the FBI’s repository samples. We interviewed officials and scientists at the FBI contractors, the FBI, and elsewhere on how the genetic tests had been verified and validated, how standards or guidelines had been applied, and the FBI’s rationale for its requirements and acceptance of the five genetic tests as validated. We also compared the validation test results with the results of the additional testing that was conducted after validation to determine if any additional information was provided on the performance characteristics of the genetic tests. We did not independently verify whether the contractors followed their quality assurance guidelines in developing, verifying, and validating their genetic tests, but we assumed from the documentation provided that they did so. 
To determine the extent to which the statistical approach used for analyzing the repository samples was adequate, we used three approaches. First, we collected and analyzed documentation from the FBI, the three domestic laboratories searched by the FBI, and the contractor who performed the statistical analyses. We reviewed contract records and conducted interviews with the FBI and laboratory officials. We conducted a literature review to collect relevant references from forensic science, statistics, epidemiology, and population genetics. Informed by the relevant literature, we identified and developed the set of characteristics of a statistical approach that would be adequate to achieve the stated purposes of the FBI’s statistical analyses. We submitted the set of desirable characteristics described in this report to our experts and a subcommittee of the American Statistical Association’s (ASA) Ad Hoc Advisory Committee on Forensic Science for their review and comment. To obtain information about how samples were selected from stocks and submitted to the repository, we reviewed the FBI subpoena protocols, conducted semi-structured interviews with officials, and collected relevant laboratory documentation from the three laboratories that the FBI searched. Second, to obtain information about samples collected through the three follow-up searches, we interviewed FBI officials and reviewed the agency’s documentation, conducted semi-structured interviews with officials from the three laboratories that the FBI searched, and reviewed relevant laboratory documentation. To identify duplicate samples in the repository, we compared the documentation of samples obtained through the searches to samples submitted through the subpoena process. 
Third, to demonstrate the impact of the sensitivity of the genetic tests and data trimming assumptions made in the statistical analyses, we analyzed the FBI repository data and estimated false negative rates for each genetic test under repository conditions, using the postvalidation results from replicate testing of RMR-1029 and evidentiary material. We conducted sensitivity analyses to examine the impact of data trimming assumptions made in the FBI's statistical analysis by varying the assumptions made to remove all inconclusive, no-growth, and variant results from the analysis. We computed conditional probabilities that a repository sample was selected from a stock containing all four morphs, given the observed combinations of genetic test results. We combined the probability analysis with the data trimming sensitivity analysis to compute a range of conditional probabilities for each repository sample. We identified the samples that had a maximum conditional probability of greater than 1 percent (nontrivial). To assess the reliability of the FBI repository data, we summarized the data and compared the results to the contractor's final report on the statistical analysis and to published reports by the FBI and the National Academies to ensure external validity of the data. From the results of this testing, we found the data to be sufficiently reliable for the purposes of our review. To determine any remaining scientific concerns and uncertainties regarding the validation of the genetic tests and statistical approaches that would need to be addressed in future analyses, we reviewed relevant federal agencies' and their contractors' documents, published literature, and industry documentation on the validation of polymerase chain reaction-based tests, such as those for detecting rare variants, and related scientific concerns and uncertainties that could affect a future investigation.
We reviewed the contractors’ final reports on the statistical analysis, reviewed contract documents, and interviewed FBI officials to identify where improvements to the approach could be made. In addition, we reviewed the Centers for Disease Control and Prevention’s (CDC), the Animal and Plant Health Inspection Service’s (APHIS), and the Department of Defense’s (DOD) select agent requirements for storing, handling, shipping, and maintaining inventory controls. We interviewed agency officials to determine if gaps exist in documenting important information about the provenance of B. anthracis stocks. Further, to identify scientific concerns arising during the FBI’s investigation of the validation of the genetic tests and statistical approaches, we reviewed pertinent documentation on scientific issues or problems the FBI and NAS had identified and their effect on the FBI’s ability to validate the genetic tests or develop appropriate statistical approaches. Assisted by experts, we determined which gaps were significant and their potential effect on a future investigation with a similar scenario. We also interviewed officials and scientists at the contractors, the FBI, DHS, the National Bioforensic Analysis Center (NBFAC), DOD (at Dugway and USAMRIID), the Department of Energy’s (DOE) Lawrence Livermore National Laboratory (LLNL), the Joint Genome Institute (JGI), EurekaGenomics, and the Executive Office of the President’s Office of Science and Technology Policy (OSTP), regarding scientific challenges to genetic test validation, statistical analyses of the repository data, scientific gaps related to the FBI’s investigation, and any federal research being conducted, or planned, to fill those gaps. 
To determine additional research that would be helpful in resolving such scientific uncertainties in any future investigation, we reviewed documentation on research DHS is conducting to address any scientific gaps we found related to the validation of the genetic assays and issues related to the statistical analyses of the results of the repository screening. We reviewed the identified gaps and DHS's research and determined the progress that had been made to close them. Further, drawing on interviews with scientists and agency officials and on input from our experts, we determined whether any additional research is needed. We asked scientists with expertise in public health and microbial forensic investigations to review and comment on a draft of our report. They included Jim Bristow, M.D., Deputy Director for Scientific Programs, DOE Joint Genome Institute; Karin S. Dorman, Associate Professor, Departments of Statistics and Genetics, Development, and Cell Biology, Iowa State University; George V. Ludwig, Ph.D., Deputy Principal Assistant for Research and Technology, U.S. Army Medical Research and Materiel Command; Jack Melling, Ph.D., Director (retired), U.K. Centre for Applied Microbiology and Research, Porton Down, U.K.; Jeff Mohr, Ph.D., Chief (retired), Life Sciences Division, U.S. Army Dugway Proving Ground; and Stephen Velsko, Ph.D., Senior Scientist and Associate Program Leader, Lawrence Livermore National Laboratory. Finally, we asked a subcommittee of the American Statistical Association's Ad Hoc Advisory Committee on Forensic Science for its review and comment on the statistical aspects of a draft of our report. The subcommittee provided us with detailed comments that expressed general agreement with the statistical aspects of the draft, suggested changes to terminology related to the frequency with which microbial properties are present in a population, and suggested appropriate caveats and limitations to analyses we conducted.
We incorporated these comments as appropriate throughout the report. We conducted this performance audit from January 2013 to November 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Performance Parameters Evaluated by Genetic Test

[Table not reproduced here. Legend: ● = evaluated; ο = not evaluated; n.a. = not applicable for qualitative tests.]

To illustrate the potential effect of the sensitivity of the genetic tests and data trimming assumptions made in the statistical analyses, we analyzed the FBI repository data and estimated false negative rates for each assay under repository conditions using the results from postvalidation replicate testing of RMR-1029 and evidentiary material. We conducted a sensitivity analysis to examine the effect of data trimming assumptions in the FBI's statistical analysis by varying the assumptions to remove all inconclusive, no-growth, and variant results. We computed the conditional probabilities that a repository sample was selected from a stock containing all four genetic markers, given the observed combinations of results. We combined the probability analysis with the data trimming sensitivity analysis to compute a range of conditional probabilities for each repository sample. We then identified the samples that had a maximum conditional probability of greater than 1 percent (nontrivial). To build a model to compute this probability, we defined the sample space of possible outcomes. There are 16 combinations for a binary measure of the presence (+) or absence (-) of each of the four genetic markers.
Therefore we defined the possible outcomes for the four genetic markers (A1, A3, D, and E) as S1 through S16, as shown in figure 7. The observed assay results for a repository sample take one of these 16 possible outcomes. Since the goal of this analysis was to compute the probability that a repository sample had been selected from a stock that contained all four genetic markers, given the observed test result, we are interested in the probability of S1, given the observed test result for a repository sample, P(S1|obs). Using Bayes' theorem, this can be written as a posterior probability:

P(S1|obs) = P(obs|S1)P(S1) / Σi P(obs|Si)P(Si),

where the sum runs over all 16 outcomes Si and, assuming the four tests perform independently,

P(obs|S1) = PA1(obsA1|+) × PA3(obsA3|+) × PD(obsD|+) × PE(obsE|+).

We used statistics derived from the results of postvalidation replicate testing conducted on RMR-1029 and letter material to estimate false negative rates. Figure 8 shows the breakdown of the results of the replicate testing. The sensitivity analysis examined the effect of two data trimming decisions made in the FBI's statistical analysis of the repository samples: the choice of D assay results and the treatment of inconclusive results. The D marker was typed by two assay procedures (D-1 and D-2), only one of which (D-2) the FBI used in its analysis. The Statistical Analysis Report was restricted to the analysis of 947 samples that contained no inconclusive or variant results and therefore excluded 112 samples. To explore the potential effect of this exclusion on the probabilities of observing all four morphs, we examined three possible treatments of inconclusive results: treating all inconclusive results first as positive, then as negative, and finally excluding them from the analysis. The sensitivity analysis thus examined six different combinations of outcomes: the two D assay possibilities crossed with the three potential treatments of the inconclusive data.
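The posterior probability model above can be sketched in a few lines of Python. This is an illustration only: the false negative rates below are placeholders (the report's actual estimates came from the postvalidation replicate testing of RMR-1029 and the letter material), the priors over S1 through S16 are assumed uniform because the report does not state them here, and the tests are assumed to err independently with no false positives.

```python
from itertools import product

# Placeholder per-marker false negative rates (illustrative only).
FN = {"A1": 0.05, "A3": 0.10, "D": 0.02, "E": 0.08}
MARKERS = ("A1", "A3", "D", "E")

# S1..S16: every true presence/absence combination of the four markers;
# STATES[0] is S1, the state in which all four markers are present.
STATES = list(product([True, False], repeat=4))

def likelihood(obs, state):
    """P(obs | state), assuming the four tests err independently and
    (for simplicity) never report a false positive."""
    p = 1.0
    for marker, present, result in zip(MARKERS, state, obs):
        if present:
            p *= (1 - FN[marker]) if result == "+" else FN[marker]
        else:
            p *= 0.0 if result == "+" else 1.0
    return p

def posterior_s1(obs, priors=None):
    """P(S1 | obs) via Bayes' theorem, with uniform priors over
    S1..S16 unless other priors are supplied."""
    priors = priors or [1.0 / 16] * 16
    terms = [likelihood(obs, s) * pr for s, pr in zip(STATES, priors)]
    return terms[0] / sum(terms)

# A sample negative only on the D test still has a small but nonzero
# chance of having come from a stock carrying all four markers:
print(round(posterior_s1(("+", "+", "-", "+")), 4))  # → 0.0196
```

Rerunning such a computation with different false negative rate estimates, and over each of the six data trimming scenarios, would yield the range of conditional probabilities per sample that the report describes.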
The computation included all 1,059 repository samples and varied the assumptions made around data trimming from most to least conservative. The results for each set of estimated false negative rates show that 7 of the 16 possible outcomes of the genetic testing had a range of probabilities that included values exceeding a 1 percent chance of being selected from a stock that contained all four genetic markers (table 8). Further, when we computed the probabilities for the repository samples, we found that only a small subset of the 1,059 repository samples had a range of probabilities that included values that exceeded a 1 percent chance of being selected from a stock that contained all four genetic markers. Specifically, we identified 24 repository samples, including the 8 that tested positive for all four genetic markers, that had a nontrivial chance of being selected from a stock that contained all four genetic markers. By using estimates of false negative rates from the results of the postvalidation replicate tests on RMR-1029 and the letter material, we assumed that the genetic variants in all samples in the FBI repository were at least as concentrated as in RMR-1029 or the letter material. Additionally, since these replicate samples were selected in a controlled environment, false negative rates may have been underestimated because they are not affected by variation in test results caused by the sampling procedures used to submit samples to the repository. These assumptions contribute to a conservative estimate of the probability of a source matching all four genetic markers.

Timothy M. Persons (Chief Scientist), (202) 512-6412 or [email protected]. In addition to the contact named above, Sushil Sharma, Assistant Director, Pille Anvelt, James Ashley, Hazel Bailey, Amy Bowser, Mae Liles, Jan Montgomery, Penny Pickett, and Elaine Vaurio also made key contributions to this report.
Anthrax: DHS Faces Challenges in Validating Methods for Sample Collection and Analysis, GAO-12-488 (Washington, D.C.: July 31, 2012).
Federal Agencies Have Taken Some Steps to Validate Sampling Methods and to Develop a Next-Generation Anthrax Vaccine, GAO-06-756T (Washington, D.C.: May 9, 2006).
Anthrax Detection: Agencies Need to Validate Sampling Activities in Order to Increase Confidence in Negative Results, GAO-05-251 (Washington, D.C.: March 31, 2005).
U.S. Postal Service: Better Guidance Is Needed to Ensure an Appropriate Response to Anthrax Contamination, GAO-04-239 (Washington, D.C.: September 9, 2004).
U.S. Postal Service: Issues Associated with Anthrax Testing at the Wallingford Facility, GAO-03-787T (Washington, D.C.: May 19, 2003).
U.S. Postal Service: Better Guidance Is Needed to Improve Communication Should Anthrax Contamination Occur in the Future, GAO-03-316 (Washington, D.C.: April 7, 2003).
In 2001, the FBI investigated an intentional release of B. anthracis, a bacterium that causes anthrax, which was identified as the Ames strain. Subsequently, FBI contractors developed and validated several genetic tests to analyze B. anthracis samples for the presence of certain genetic mutations. The FBI had previously collected and maintained these samples in a repository. GAO was asked to review the FBI's genetic test development process and statistical analyses. This report addresses (1) the extent to which these genetic tests were scientifically verified and validated; (2) the characteristics of an adequate statistical approach for analyzing samples, whether the approach used was adequate, and how it could be improved for future efforts; and (3) whether any remaining scientific concerns regarding the validation of genetic tests and statistical approaches need to be addressed for future analyses. GAO reviewed agency and contractor documentation, conducted literature reviews, and conducted statistical analyses of the repository data. GAO's review focused solely on two aspects of the FBI's scientific evidence: the validation of the genetic tests and the statistical approach for the analyses of the results. GAO did not review and is not taking a position on the conclusions the FBI reached when it closed its investigation in 2010. After the 2001 anthrax attacks, the genetic tests that were conducted by the Federal Bureau of Investigation's (FBI) four contractors were generally scientifically verified and validated, and met the FBI's criteria. However, GAO found that the FBI lacked a comprehensive approach—or framework—that could have ensured standardization of the testing process. As a result, each of the contractors developed its tests differently, and one contractor did not conduct verification testing, a key step in determining whether a test will meet a user's requirements, such as for sensitivity or accuracy.
Also, GAO found that the contractors did not develop measures of statistical confidence for interpreting the results of the validation tests they performed. Responses to future incidents could be improved by using a standardized framework for achieving minimum performance standards during verification and validation, and by incorporating statistical analyses when interpreting validation testing results. GAO identified six characteristics of a statistical framework that can be applied for analyzing scientific evidence. When GAO compared the approach the FBI used to this framework, it found that the FBI's approach could have been improved in three of the six areas. First, the FBI's research did not provide a full understanding of the methods and conditions that give rise to the genetic mutations used to differentiate between samples of B. anthracis. Second, the FBI did not institute rigorous controls over the sampling procedures it used to build the repository of B. anthracis samples. Third, the FBI did not include measures of uncertainty to strengthen the interpretation of the scientific evidence. GAO found that since 2001 the FBI has taken some steps to build formal forensic statistical expertise. The FBI's approach to future incidents could benefit from including such expertise early in an investigation. The lack of an understanding of how bacteria change (mutate) in their natural environment and in a laboratory is a key scientific gap that remains and could affect testing conducted in future incidents. Specifically, the significance of using such mutations as genetic markers for analyzing evidentiary samples to determine their origins is not clear. This gap affects both the development of genetic tests targeting such mutations and statistical analyses of the results of their use on evidentiary samples. The Department of Homeland Security is currently funding some research on genetic changes in bacteria and genome sequencing methods, among other topics.
Such research is a step in the right direction since the FBI is planning to use genome sequencing methods in future investigations. However, because this research may not be complete for several more years, the extent to which it will close this gap is not known. GAO recommends that the FBI develop a framework for validation and statistical approaches for future investigations. The FBI agreed with GAO's recommendations.
The Federal Employees' Retirement System Act of 1986 (FERSA) created the Thrift Savings Plan (TSP), a retirement savings plan similar to private-sector 401(k) plans, as a key component of the Federal Employees' Retirement System (FERS) for federal workers. Membership is open to federal and postal employees, members of Congress and their staff, members of the uniformed services, and members of the judicial branch. Participants are eligible for deferred federal (and in certain cases, state) income taxes on employee contributions and earnings. For employees covered by FERS, agencies make contributions to employees' TSP accounts. Agencies automatically contribute 1 percent of an employee's salary during each pay period to the TSP. Agency contributions for any employee who remains employed by the federal government until "vested" become part of that employee's retirement savings. Some employees are vested after 2 years of service; all other employees are vested at the end of 3 years of service. Agencies also match FERS employee contributions to the TSP up to a total of 5 percent of the employee's basic pay. As of November 2006, the Federal Retirement Thrift Investment Board (FRTIB) managed about $200 billion in assets for 3.5 million participants and beneficiaries. FRTIB—which administers the TSP—is an independent agency in the executive branch. As of the close of fiscal year 2006, FRTIB employed about 65 people. Some functions—such as enrollment and training of participants—are handled by other federal agencies rather than FRTIB. These other agencies' payroll and personnel offices act as the points of contact for TSP participants; these offices may assist with enrollment and alteration of contribution percentages. Additionally, the Office of Personnel Management has established a training program for retirement counselors of federal agencies.
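For readers who want the arithmetic, the agency contribution rules summarized above can be sketched as a short Python function. Note that the tiered match schedule encoded here (dollar-for-dollar on the first 3 percent of pay the employee contributes, 50 cents on the dollar on the next 2 percent, on top of the automatic 1 percent) comes from FERSA rather than from this report.

```python
def agency_contribution_pct(employee_pct):
    """Total agency contribution for a FERS employee, as a percentage of
    basic pay, given the percentage of pay the employee contributes.
    Tiers (from FERSA, not detailed in this report): 1% automatic,
    100% match on the first 3% contributed, 50% on the next 2%."""
    automatic = 1.0
    matched = min(employee_pct, 3.0) + 0.5 * max(0.0, min(employee_pct, 5.0) - 3.0)
    return automatic + matched

# An employee contributing 5% of pay draws the full 5% agency total;
# contributions beyond 5% earn no additional match.
for pct in (0, 3, 5, 10):
    print(f"employee {pct}% -> agency {agency_contribution_pct(pct)}%")
```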
FRTIB is overseen by five Presidentially appointed board members and is charged with establishing policies for the investment and management of TSP funds and with creating administrative policies for the TSP. FRTIB's administrative expenses are funded by (1) forfeited agency contributions and (2) assessments against net earnings of the Thrift Savings Fund. The 1 percent automatic agency contributions for FERS employees who leave before vesting are forfeited to the TSP. To finance the remainder of FRTIB's administrative expenses, FRTIB assesses fees against the net earnings of the Thrift Savings Fund. FRTIB is required by law to use the forfeited funds before assessing fees against the net earnings of the Thrift Savings Fund. In both 2004 and 2005 these forfeited agency contributions covered about 13 to 15 percent of administrative expenses; assessments against net earnings of the Thrift Savings Fund were set at a level to cover the remaining 85 to 87 percent. Outside contractors manage all investment funds other than the G fund. FRTIB has contracted with Barclays Global Investors (Barclays) to manage the F, C, S, and I funds. The L funds—which were introduced in August 2005—were designed by another private company, Mercer Investment Consulting. Consistent with generally accepted accounting principles, FRTIB's financial statements list these investment expenses as adjustments to investment income; they are not included in the line item for administrative expenses. The cost to participate in a retirement fund is measured as an expense ratio: the total administrative expenses charged to a fund during a specific time period divided by that fund's average balance for that same period. According to a 2005/2006 Deloitte Consulting 401(k) benchmarking survey, the average expense ratio for 401(k) plans was 75 basis points. In comparison, FRTIB charged participants only 4 basis points in fiscal year 2006.
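The expense ratio comparison above is straightforward to reproduce. A minimal sketch, using round figures consistent with the report (roughly $83 million of administrative expenses against roughly $200 billion under management; the exact inputs are assumptions for illustration):

```python
def expense_ratio_basis_points(expenses, avg_balance):
    """Expense ratio: expenses charged to a fund over a period, divided by
    the fund's average balance for that period, expressed in basis points
    (1 basis point = 0.01 percent)."""
    return expenses / avg_balance * 10_000

# Round figures consistent with the report: ~$83 million in administrative
# expenses against ~$200 billion in assets works out to roughly 4 basis
# points, versus the 75-basis-point 401(k) average in the Deloitte survey.
print(expense_ratio_basis_points(83e6, 200e9))
```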
According to FRTIB, it reviews its administrative expenses regularly through a variety of means. First, an independent entity audits administrative expenses as part of a larger financial audit of FRTIB. Second, the Department of Labor examines administrative expenses as part of its periodic FRTIB Administrative Staff review, which it conducts approximately once every 3 years. Third, as required by statute, board members prepare and submit to the President and Congress an annual budget. Fourth, since 2003, FRTIB has provided monthly reports to the Executive Director for review. Lastly, according to FRTIB, it contracts for all major TSP administrative services. Because the maximum contract length is 5 years, every major activity undergoes review and competition at least every 5 years. FRTIB's administrative expenses ranged from a peak of $101 million in fiscal year 2000 to $83 million estimated for fiscal year 2006. During this time period, only the fiscal year 2001 administrative expenses were lower than the 2006 expenses, and that reflected the termination of a contract with American Management Systems to develop a record-keeping system, which will be discussed in more detail below. However, in real terms, FRTIB's administrative expenses in 2006 were at a 7-year low. (See fig. 1 below.) There is no standard governmentwide definition of administrative expenses. For the purposes of this review, and consistent with how FRTIB presented its budget to the board for approval, we considered all expenses other than investment expenses to be administrative. FRTIB purchases most of its administrative services from outside entities. Until 2005 these were purchased primarily from government agencies—99 percent of which were record-keeping and call center services provided by the National Finance Center.
During fiscal years 2005 and 2006 the National Finance Center terminated its contracts with FRTIB, which began purchasing these services from a private contractor. The only remaining major services purchased from another government agency are personnel and payroll services provided by the U.S. Department of the Interior. For its administrative activities, FRTIB has established practices that are consistent with federal regulations. However, it has opportunities to achieve cost savings. To purchase services and goods, FRTIB is required to follow the Federal Acquisition Regulation (FAR), and employee compensation is governed by federal pay schedules. For travel, FRTIB is bound by federal travel regulations; even so, opportunities may exist to reduce travel expenses by holding monthly board member meetings by teleconference where appropriate. FRTIB leases its own space directly rather than going through the General Services Administration (GSA). Although the cost per square foot for its headquarters is comparable to average rates that GSA cites for nearby available properties, the amount of space FRTIB rents is greater per employee than GSA recommends. The amount that FRTIB spends purchasing services from other entities is at a 7-year low. (See fig. 4 below.) Since most administrative services are purchased from outside entities, it is not surprising that declines in spending on services purchased from outside entities—both private contractors and other federal agencies—parallel the declines in administrative expenses during the same period. Although most administrative services are purchased from outside entities, the allocation between private contractors and federal agencies has changed over time. As figure 3 shows, from fiscal year 2000 through fiscal year 2004 FRTIB spent more money purchasing services from other government agencies than from private contractors.
When FRTIB shifted the acquisition of call center services from the National Finance Center to a private contractor in 2005, that balance was reversed. In 2006, the National Finance Center terminated all remaining services that it was providing to FRTIB. With the termination of the National Finance Center contracts, the only remaining major services purchased from another government agency were payroll and personnel services provided by the U.S. Department of the Interior. Another change within FRTIB's expenses for services purchased from private contractors occurred in 2001, when FRTIB terminated a contract with American Management Systems, Inc., a private sector firm hired to design, develop, and implement a record-keeping system for the TSP that would provide daily investment updates. FRTIB then hired Materials, Communication & Computers, Inc. to complete the work that American Management Systems was unable to complete. Although we did not review the reasonableness of FRTIB's individual contracts, we reviewed its process for acquiring goods and services. Overall, it follows a process that seeks to assure reasonable expenses. First, FRTIB is subject to the FAR—which governs acquisition activities. Second, FRTIB has one contract specialist and one purchasing agent on staff to ensure that acquisition occurs according to statute and regulation. Additionally, other staff are responsible for management and oversight of the individual contracts. For example, the agency's Chief Information Officer is responsible for managing a record-keeping contract and the Director of the Office of Participant Services is responsible for managing the two call center contracts. Each of these individuals monitors contracts through site visits and remotely. FRTIB plans to send agency directors for additional monitoring site visits in 2007. The travel expenses for these monitoring visits are included under travel and discussed later in this report.
The Department of Labor provides additional oversight of FRTIB's contracts through its periodic audits. In 2005, the Department of Labor reported that FRTIB had reexamined service providers to find ways to increase services while decreasing costs. For example, FRTIB found that it could cut expenses in half by transferring information technology operations from the National Finance Center to SI International, a private contractor. The Department of Labor reviewed one call center in August 2006 and, according to FRTIB, the department is also planning to review another call center service contract in 2007. In the years since 2003, a variety of factors have led to increased miscellaneous expenses. (See fig. 5 below.) First, in fiscal year 2004, miscellaneous expenses jumped to about $14 million, in part because FRTIB updated aging software and hardware. Second, in fiscal year 2005 FRTIB's printing expenses increased from about $1 million to about $11 million. FRTIB told us that this was a one-time extra expense to print information about a series of new TSP funds, the Lifecycle (L) funds. Lastly, in fiscal year 2006, FRTIB projected a ninefold increase, from about $1 million to about $9 million, in expenses for communications. This was to pay for providing information to TSP plan participants about new passwords and account numbers to replace the use of Social Security numbers in the system. FRTIB also said that this spike would permit increased communications about the L funds. Although overall miscellaneous expenses began to decrease in 2006, it is unclear if expenses will continue to decrease or remain elevated. Although we did not review FRTIB's individual contracts, it is required to follow the same regulations as other federal agencies regarding acquisition. As discussed above, FRTIB is required to follow the FAR and is routinely audited by the Department of Labor. Also, FRTIB has the ability to purchase goods through GSA.
Although GSA does not guarantee the lowest price possible, use of GSA's Federal Acquisition Service ensures that the prices paid by FRTIB are reasonable and generally consistent with the prices paid by other agencies. FRTIB's compensation expenses have been relatively stable over the past 7 years. (See fig. 6 below.) FRTIB compensates employees according to federal pay schedules. Currently, FRTIB employs about 65 staff members, the majority of whom are paid according to the General Schedule. About 28 percent of these employees are compensated at or below the GS-11 level. Accordingly, to the extent that the composition of FRTIB's staff is appropriate, compensation costs appear reasonable. The Executive Director is compensated according to level III of the Executive Schedule. Additionally, 7 staff members are part of the Senior Executive Service. These Senior Executive Service positions are all approved by the Office of Personnel Management. Each of the 5 board members—who are not otherwise officers or employees of the federal government—is compensated at the daily rate of basic pay for level IV of the Executive Schedule for each day the board member is engaged in performing a function for FRTIB. FRTIB's total rent expenses remained relatively constant until fiscal year 2005. (See fig. 7 below.) Rent expenses dropped in 2005 and increased again in 2006, coinciding with the lease of an emergency facility. The majority of FRTIB's rental expenses are associated with renting a headquarters office in Washington, D.C. FRTIB rents its own office space, which means it does not need to go through GSA. However, GSA rents space on behalf of many federal agencies and thus has a rich database of local rent prices. Accordingly, we compared the amount that FRTIB pays per square foot in fiscal year 2007 with the average rates that GSA cited for nearby available properties.
The fiscal year 2007 rental rate of $28 per square foot—which includes operating expenses such as utilities, building security, maintenance, and cleaning—is in line with average market rates for nearby available properties with comparable square footage. Currently, FRTIB rents more than 47,000 square feet. At its present staff size, FRTIB rents more space per person than GSA would recommend. Based on FRTIB's mission, a GSA official told us that FRTIB's space needs are likely similar to a model that proposes 368 rentable square feet per person. At its current staffing level of 65 employees, FRTIB's headquarters provides more than 670 square feet per person, about 300 square feet more per person than GSA would recommend. This calculation, however, is somewhat misleading because of the recent personnel downsizing at FRTIB. In the 7 years covered by our review, employment at FRTIB peaked at 111 staff members. Yet, even at its peak staffing, FRTIB rented about 430 square feet per person, about 60 square feet more per person than GSA would recommend. In light of recent declines in staff numbers, and consistent with good management practices, at a September 2006 board meeting FRTIB staff indicated that they would look into consolidating space at the headquarters location. Over the past 7 fiscal years, travel expenses varied considerably, ranging in current year dollars from a low of about $80,000 in fiscal year 2001 to a high of about $255,000 in fiscal year 2003. In fiscal year 2006 travel expenses fell slightly above the middle of this range, at about $200,000. (See fig. 8.) FRTIB travel is governed by federal travel regulations. The fiscal year 2006 travel we examined was consistent with these regulations and appears reasonable given federal daily allowances for lodging, meals, and incidental expenses and current cost trends for common carriers.
The largest portion of fiscal year 2006 travel was for contract oversight, about 38 percent of which was associated with the one-time expense of transitioning the call center from the National Finance Center to a new contractor, SI International. Although about 20 percent of FRTIB's trips were to other federal agencies to conduct training and make presentations, since the host agency often paid for these trips, they accounted for only about 5 percent of travel expenses. In fiscal year 2006, FRTIB spent about $27,000 to bring board members to Washington, D.C. These expenses are associated with travel from locations as disparate as Alaska and New York, and require, in general, a 2-night stay in Washington, D.C. Although FRTIB reports that it has not issued any first or business class tickets to any employee, it has on occasion issued a first or business class ticket to a board member traveling from Alaska when coach fares were not available. Although the law requires monthly meetings, and the practice has generally been to meet in person, on occasion members of the board have participated in these meetings by telephone—some more than others. Meetings that occur through telecommunications rather than in person save money for FRTIB and sometimes may be appropriate.
Looking only at an aggregate measure—whether the investment fee or the total administrative costs—provides insufficient information to judge whether individual activities are being conducted either to achieve the best performance or in the most efficient manner. Disaggregating FRTIB’s activities and benchmarking those individual activities against similar ones elsewhere would provide the board a better picture of the performance and efficiency of these activities. Although no other federal agency performs the same mission as FRTIB, the individual activities it performs to fulfill that mission can be found in other agencies and outside government. For example, other agencies—such as the Centers for Disease Control and Prevention and GSA—purchase call center services from private contractors. Benchmarking by individual activity permits an organization to compare the performance of its individual processes/activities—and the way those processes/activities are conducted—with either standards or “best in class” in a specific activity. Benchmarking provides a feedback mechanism for continuous improvement.

FRTIB’s falling administrative expenses appear to reflect an overall commitment to manage the TSP in the interest of the participants and beneficiaries of TSP. Consistent with this commitment, FRTIB is looking into consolidating its office space. We also note that on occasion board members have elected to participate in the monthly meetings by telephone. The use of telecommunications has increased throughout both government and the private sector. Although in-person meetings are valuable and may be important for some discussions, increasing the use of telecommunications for monthly meetings could further reduce expenses. For example, if the board found it appropriate to meet monthly by teleconference and quarterly in person, travel costs would be reduced.
Since FRTIB follows practices that seek to constrain expenses within federally regulated parameters, its success in maintaining low expenses is not surprising. In fact, FRTIB’s fiscal year 2006 administrative expenses were near a 7-year low. Moving beyond comparing costs charged to TSP participants with costs charged by other 401(k) plans to benchmarking the cost and performance of individual activities would be consistent with a commitment to continuous improvement and being alert to opportunities to further improve performance and/or reduce costs. It would also assist the board as it seeks to assure that the TSP is managed in the interest of its participants and beneficiaries.

To ensure that FRTIB continues to operate as efficiently as possible, we recommend that the board direct the Executive Director to continue monitoring both the square footage and cost per square foot of office space in the future to ensure that it is appropriate for its needs. In addition, FRTIB should consider expanding its use of telecommunications for its monthly board members’ meetings, as appropriate. To provide the board with the most complete and relevant information with which to assess the expenses and performance of administrative functions of FRTIB, we recommend that the board direct the Executive Director to move beyond the comparison of participant costs against those for other 401(k) plans to benchmarking cost and performance of individual activities—either against similar activities elsewhere or against validated criteria or standards, such as federal regulations.

In comments on a draft of this report, FRTIB partially concurred with our findings and recommendations. While they agreed in concept with our recommendations, they stated that in-person board meetings are an extremely important aspect of fulfilling their fiduciary responsibilities and that the minimal travel costs associated with this decision were far outweighed by the benefits derived from in-person meetings.
However, consistent with FRTIB’s commitment to managing the TSP in the interest of participants and beneficiaries of TSP, we note that use of telecommunications offers opportunities to further reduce expenses. Our report acknowledges that in-person meetings are valuable and may be important for some discussions. Our recommendation merely states that FRTIB should consider expanding its use of telecommunications as appropriate. With respect to its square footage, FRTIB said it would continue to monitor and assess its office space needs and related costs in relation to projected staffing levels. FRTIB also concurred in concept with the use of benchmarking costs in appropriate situations. However, it noted that rather than separately benchmarking each component of its overall costs, it relies on the competitive procurement process to obtain outstanding performance for TSP participants at competitive rates. FRTIB’s comments reflect a misunderstanding of our point. The report does not suggest that benchmarking would lead to changes in contracts during the contract period. Nor does our report suggest that the competitive bidding process be abandoned—we concur that it helps assure good performance at competitive costs. Nor do we dispute that FRTIB’s processes have led to outstanding performance for TSP participants at competitive costs. To the contrary, companies and agencies that are viewed as leaders in their operations use benchmarking. Benchmarking is widely accepted in the private sector and in much of the public sector as a “best practice” in evaluating performance of specific activities. It offers a way to compare a specific function or activity to the same or similar one in other businesses or agencies. The essence of benchmarking is the process of identifying the highest standards of excellence for products, services, or processes, and then making the improvement necessary to reach those standards—commonly known as best practices. 
We have developed a body of work—best practice reviews—that provides guidance to help public sector organizations become world-class. Such benchmarking of best practices could be helpful to FRTIB. For example, as the term of a contract for call center operations nears completion and FRTIB considers the design of the successor contract, it could look at the scope of services, performance measures used and performance attained, and costs of other excellent call centers in developing the criteria for that next contract. In addition to these key comments, FRTIB provided technical comments. The full written comments from FRTIB are included and addressed in appendix II. We have incorporated changes as a result of these comments, as appropriate.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. At that time, we will send copies of this report to the Executive Director of FRTIB and interested congressional committees. This report will also be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-9142 if you or your staff have any questions about this report. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Other contacts and staff acknowledgments are listed in appendix III.

To identify the administrative expenses of the Federal Retirement Thrift Investment Board (FRTIB), we reviewed the President’s budget, FRTIB’s audited financial statements, FRTIB’s budget documents from meeting notes of the board members, and FRTIB’s written responses to our questions. To analyze the components of administrative expenses, we used the projections contained in the board members’ annual budget documents. The numbers we used from FRTIB’s budget documents were prepared late in the fiscal year, which ends September 30.
FRTIB officials confirmed that the projections were analogous to what appears in budget documents for other agencies as “actuals.” We reviewed FRTIB’s audited financial statements for information about FRTIB’s financial contractual commitments. The administrative expenses listed in the financial statements were not disaggregated sufficiently for our purposes. As a result, they did not provide the detail that we needed for our analysis. To be consistent with the budget documents, we used current year dollars throughout the report. We confirmed that the analysis and conclusions would not change if dollars were adjusted for inflation.

To judge whether the administrative expenses of FRTIB are the result of practices consistent with federal regulations, we identified the regulations that guide FRTIB’s expenses for activities such as compensating employees. To analyze the applicability of the Federal Acquisition Regulation (FAR) to FRTIB’s acquisition activities, we reviewed statutory requirements and court cases. We also reviewed GAO guidance for assessing the acquisition function at federal agencies.

To compare the rent that FRTIB pays for its headquarters office with the amount that other federal agencies would pay for downtown office space, we reviewed FRTIB’s current lease. We then compared the parameters of the lease with a database of properties from the General Services Administration (GSA). Because GSA rents office space for other agencies, it has access to a rich database of available properties and current rents.

To review the compensation of FRTIB staff members, we analyzed the Office of Personnel Management’s Central Personnel Data File, a file of all personnel actions in the federal government. This allowed us to identify the pay plans that FRTIB uses to compensate employees, the positions held by FRTIB staff, as well as actual staffing levels for the time period covered by our analysis.
To analyze travel expenses, we compared the travel records that FRTIB gave to us for fiscal year 2006 to expected travel expenses for locations given standard per diem rates and negotiated air fares. We also reviewed FRTIB’s responses to questions we submitted. To analyze FRTIB’s current benchmarking practices, we reviewed a benchmarking study cited by FRTIB, notes from the board members’ meetings, relevant GAO work, and FRTIB responses to questions we submitted. We conducted our work in Washington, D.C., between October 2006 and May 2007 in accordance with generally accepted government auditing standards.

1. Our report states that FRTIB leases its own space directly rather than going through GSA—we did not indicate nor imply that FRTIB is required to use GSA for leasing services.

2. Our report acknowledges that in-person meetings are valuable and may be important for some discussions. However, consistent with FRTIB’s commitment to managing the TSP in the interest of participants and beneficiaries of TSP, we note that use of telecommunications offers opportunities to further reduce expenses. Our recommendation states merely that FRTIB should consider expanding its use of telecommunications as appropriate.

3. The FRTIB’s comments on our benchmarking recommendation reflect a misunderstanding of our point. Our report notes that benchmarking should go beyond a comparison of TSP’s investment fees (cost to participants) with those of other 401(k) plans. It does not suggest that benchmarking would lead to changes in contracts during the contract period. Nor does our report suggest that the competitive bidding process be abandoned—we concur that it helps assure good performance at competitive costs. Further, we do not dispute that the FRTIB’s processes have led to outstanding performance for TSP participants at competitive costs.
Nor should the mention of call centers as one example of an activity in which other agencies also engage be read as a criticism of FRTIB’s call center operations. To the contrary, companies and agencies that are viewed as leaders in their operations use benchmarking. Benchmarking is widely accepted in the private sector and in much of the public sector as a “best practice” in evaluating performance of specific activities. It offers a way to compare a specific function or activity to the same or similar one in other businesses or agencies. The essence of benchmarking is the process of identifying the highest standards of excellence for products, services, or processes, and then making the improvement necessary to reach those standards—commonly known as best practices. We have developed a body of work—best practice reviews—that provides guidance to help public sector organizations become world-class. Such benchmarking of best practices could be helpful to FRTIB. For example, as the term of a contract for call center operations nears completion and the FRTIB considers the design of the successor contract, it could look at the scope of services, performance measures used and performance attained, and costs of other excellent call centers in developing the criteria for that next contract.

4. Our report states that looking only at investment fees offers an incomplete picture of administrative expenses and that looking at total administrative expenses in the aggregate provides incomplete information for judging whether individual activities are being conducted in the most efficient manner. We did not discuss or opine on FRTIB’s disclosure of administrative expenses.

5. Our report notes that benchmarking should go beyond a comparison of TSP’s investment fees (cost to participants) with those of other 401(k) plans. It does not suggest that it is FRTIB’s responsibility to ensure that private sector 401(k) plans adopt similar levels of public disclosure.

6. See comment 3.

7.
We agree that telephone services for the call centers are mission-related services. However, because budget data provided by the FRTIB aggregates such expenses with charges such as postage and other miscellaneous expenses, we were unable to separate these expenses from others in this category. Nonetheless, we have clarified the report to indicate that expenses associated with telephone service for call centers, although mission-related, were included under miscellaneous expenses.

8. We revised the report text as suggested.

9. We revised the report text as suggested.

10. We revised the report text as suggested.

11. We disagree with FRTIB’s comment that it is not subject to the federal procurement rules. First, FRTIB is an executive agency under 41 U.S.C. § 405(a) and thus subject to the FAR. Moreover, we are unaware of any, and FRTIB has not identified any, express exclusion for FRTIB. Second, FRTIB cites as support a 1987 internal memo that states “One of the most important criteria applied by the courts and agencies, in determining the applicability of acquisition regulations is the source of funds being expended.” The memo concludes that FRTIB does not pay its administrative expenses with appropriated funds. In a 2002 contract dispute with one of its contractors, the United States Court of Federal Claims rejected each one of FRTIB’s arguments that its administrative expenses are not payable out of appropriated funds. Since the FAR applies to acquisitions by contract with appropriated funds, and the FRTIB has not addressed the court’s ruling, we stand by the position that FRTIB is subject to the FAR.

12. We revised the report text as suggested.

13. We revised the report text as suggested.

14. See comment 12.

15. We revised the report text as suggested.

16. See comment 7.

17. Clarified text to indicate it is a call center service contract.

In addition to the contact named above, Carol Henn, Mallory Barg Bulman, John P. Stradling, and Farahnaaz H.
Khakoo made significant contributions to this report. Barbara D. Bovbjerg, Tamara E. Cross, Lara Lee Laufer, Ramona L. Burton, Matthew J. Saradjian, Patrick G. Bernard, Michael R. Volpe, Adam Vodraska, Andrew J. Stephens, Richard S. Krashevski, Gregory H. Wilmoth, Donald R. Neff, William T. Woods, and Ruth DeVan also provided key assistance.
The Federal Retirement Thrift Investment Board (FRTIB) is charged with managing the Thrift Savings Plan (TSP)--a key component of retirement savings for many federal employees--in the interest of its participants and beneficiaries. As part of a broader request on oversight of FRTIB, GAO reviewed (1) the administrative expenses of FRTIB and key components of these expenses, (2) whether the administrative expenses of FRTIB are the result of practices consistent with federal regulations and similar functions at other agencies, and (3) FRTIB's current method of benchmarking administrative expenses. GAO reviewed FRTIB's budgets, audited financial statements, a benchmarking study, and written responses to questions that GAO submitted. GAO also reviewed the regulations that guide FRTIB's spending. FRTIB's administrative expenses ranged from a peak of $101 million in fiscal year 2000 to $83 million estimated for fiscal year 2006. Only the 2001 administrative expenses were lower, reflecting the termination of a key contract. In real terms, FRTIB's administrative expenses in 2006 were at a 7-year low. Throughout this period more than half of FRTIB's administrative expenses went towards purchasing services from outside entities--private contractors and other government agencies. The next largest share of FRTIB's budget was for miscellaneous expenses, such as printing and information technology. Additional administrative expenses were spent on compensation of FRTIB's 65 employees, rent, and travel. FRTIB has established practices that are consistent with federal regulations--for acquisition, compensation, and travel. There are some areas, however, that suggest opportunities for future savings. The amount FRTIB pays per square foot is consistent with average rental rates that the General Services Administration (GSA) cites for nearby available properties. However, FRTIB rents more space per employee than GSA would recommend. 
Given recent downsizing, FRTIB has begun looking into consolidating its office space. Additionally, opportunities exist to reduce the travel expenses of TSP board members traveling to Washington, D.C. FRTIB's current method of benchmarking TSP participants' investment fees against those charged by 401(k) plans is a very important measure for participants--and the TSP compares favorably on this measure. However, looking only at an aggregate measure provides insufficient information to judge whether individual activities are being conducted either to achieve the best performance or in the most efficient manner. Benchmarking by individual activity permits an organization to compare its performance with standards or "best in class" in a specific activity.
In accordance with Recovery Act requirements, Education established the RTT grant fund to encourage states to reform their K-12 education systems and to reward states for improving student outcomes, such as making substantial gains in student achievement and improving high school graduation rates. States competed for RTT grant funds based on reforms across four areas: adopting standards and assessments that prepare students to succeed in college and the workplace and to compete in the global market; building data systems that measure student academic growth and success and inform teachers and principals about how they can improve instruction; recruiting, developing, rewarding, and retaining effective teachers and principals, especially where they are needed most; and turning around the lowest-achieving schools. Education awarded RTT grants in three phases. Twelve states received grants in 2010 in Phases 1 and 2 to support the design and implementation of their teacher and principal evaluation systems and other RTT reforms. Award amounts ranged from $75 million to $700 million (see table 1). States were required to subgrant at least 50 percent of their total grant award to districts that chose to participate in RTT. The 4-year grant period began on the date funds were awarded to the state. States must obligate all funds within that period, and they have 90 days following the end of their grant period to liquidate all obligated funds unless they receive a no-cost extension. Education may grant extensions for states beyond the 90 days on a case-by-case basis. Phase 1 and Phase 2 funds not obligated and liquidated by September 30, 2015, will revert to the U.S. Treasury.

Education identified 19 primary criteria to guide peer reviewers in the selection of states for RTT grants (see table 2). The criterion—improving teacher and principal effectiveness based on performance—established the RTT guidelines for teacher and principal evaluation systems.
Reviewers evaluated the state’s plan to ensure its participating RTT districts (1) measure student growth for each individual student; (2) design and implement evaluation systems, developed with teacher and principal involvement, that include multiple rating categories that take into account data on student growth as a significant factor; (3) evaluate teachers and principals annually and provide feedback, including student growth data; and (4) use these evaluations to inform decisions regarding professional development, compensation, promotion, retention, tenure, and certification. Education defines student growth as the change in student achievement for an individual student between two or more points in time. For students in grades and subjects that are tested by state standardized tests, Education defines student achievement as the score received on the state’s assessments required under the ESEA. For students in grades and subjects that are not tested by state standardized tests, Education defines student achievement based on alternative measures of student learning and performance. These measures include student scores on pre-tests and end-of-course tests, student performance on English language proficiency assessments, and other measures of student achievement that are rigorous and comparable across classrooms. Student achievement for students in tested grades and subjects can also be assessed using other measures as appropriate, including the same measures as students in nontested grades and subjects. Education provided background information in its notice of proposed priorities, requirements, definitions, and selection criteria for RTT on why it included student growth as a factor in its criteria for teacher and principal evaluation. Education noted the difficulty in predicting teacher quality based solely on the qualifications that teachers bring to the job. 
The department cited research on the limited predictive power of measures such as certification, education, and years of experience, and research on the value of measuring student growth to assess teacher quality. In response to public comments that expressed concern about the use of student growth data as the sole means to evaluate teachers and principals, Education revised its definitions of an effective teacher and effective principal to require that multiple measures be used to assess effectiveness, with student growth as a significant factor. Education also provided examples of these supplemental measures, such as multiple observation-based assessments of teacher performance and high school graduation rates as a measure for evaluating principals. Education also established criteria for peer reviewers to consider a state’s capacity to sustain its reforms. The criterion—building strong statewide capacity to implement, scale up, and sustain proposed plans—required reviewers to assess the extent to which the state had a plan to ensure sufficient capacity and use stakeholder support to implement its plans. States were evaluated on, among other things, the extent to which they demonstrated that they would provide strong leadership and dedicated teams to implement the reforms and use their fiscal, political, and human capital resources to continue successful grant-funded reforms after RTT funds are no longer available. Education is responsible for fiscal and programmatic oversight of all aspects of RTT, reviewing and responding to states’ requests to amend their RTT applications, and providing technical assistance. To monitor states’ progress, Education established a program review process that includes ongoing conversations with grantees, on-site program reviews, grantee self-evaluations, and meetings with Education officials. 
As we reported previously, Education uses a common set of questions to oversee state progress and to address specific needs and challenges of each grantee. Education also publishes annual reports to the public summarizing the progress of each state. To provide technical assistance, Education established the Reform Support Network (RSN), a 4-year, $43 million technical assistance contract with ICF International, which works with Education to support RTT states. Education’s process for reviewing and approving changes to a state’s RTT plans includes reviewing the state’s approved application, budget, and scope of work. According to Education’s guidance, an RTT grantee must submit an amendment request for (1) a proposed revision that constitutes a change in activities from the approved grant project, regardless of budgetary impact; (2) budgetary changes, including transfers among categories or programs, that exceed $500,000 of the current approved budget; or (3) changes to the list of districts participating with the grantee’s RTT plan. Education will not approve amendment requests that would change the overall scope and objectives of the approved proposal, fail to comply with the terms of the award or the statutory and regulatory provisions of the program, or violate the general principles of the program. Education’s Institute of Education Sciences’ National Center for Education Evaluation and Regional Assistance is conducting two studies that relate to RTT teacher and principal evaluation systems. One study will assess the RTT and School Improvement Grant programs and whether these programs are related to improvement in student outcomes. The results of this study, which will not specifically assess the impact of teacher and principal evaluations on student outcomes, are expected in 2014. 
In the second study, experimental teacher and principal evaluation systems will be implemented in schools in eight districts in order to study their effects on factors such as student achievement and teacher and principal mobility. A report on this study is expected in 2015.

The RTT states provide districts with varying amounts of flexibility to develop their evaluation systems. For example, some RTT states developed evaluation systems for use by all districts, unless a district develops an alternate evaluation system that meets state requirements. In other states, districts can develop their own evaluation systems within guidelines provided by the state, and the state must approve each district’s system. Whether the evaluation system is developed by the state or the district, districts evaluate teachers and principals using multiple measures that assess professional practice and student academic growth (see fig. 1).

According to state officials, 6 of the 12 RTT states fully implemented both their teacher and principal evaluation systems by school year (SY) 2012-13 (see fig. 2), though their success in meeting their original target date for implementation varied. The states that fully implemented their systems evaluated all teachers and principals in RTT districts, according to state officials. The six states that fully implemented both teacher and principal evaluation systems targeted SY 2011-12 for full implementation in their RTT applications. Three of the six states met that target, and SY 2012-13 was their second school year of full implementation. The other three states did not meet the targets set in their applications, but did fully implement their systems in SY 2012-13. According to Education’s amendment approval letters, states shifted implementation time frames, in part, because they needed additional time to develop student academic growth measures.
For example, Delaware required an additional year to develop measures for its student academic growth component, which state officials said resulted in a better evaluation system.

The six states that did not fully implement both their teacher and principal evaluation systems in SY 2012-13 either piloted or partially implemented evaluation systems, according to state officials (see fig. 2). Based on the targets set in their RTT applications, four of the six states originally planned to fully implement by SY 2012-13 but are instead piloting or partially implementing their systems. The proportion of teachers and principals participating in pilots varied. According to state officials, Hawaii’s teacher evaluation pilot covered about 30 percent of its teachers, and Maryland’s evaluation systems pilot covered about 14 percent of its teachers and principals in RTT districts. Among the four districts we visited in Maryland, district officials said the percentage of teachers who participated in the districts’ pilots ranged from about 4 percent to 100 percent. State or district officials in four of the six states expressed some concerns about their readiness for full implementation. For example, officials in one Maryland district that piloted with about 4 percent of teachers said they will move from learning about the system to full implementation without sufficient time to address issues that arose during the pilot. Similarly, officials in another Maryland district that piloted with about 5 percent of teachers and 19 percent of principals said the district did not have sufficient time to work with teachers and principals on the new evaluation systems and would have benefited from another pilot year. The Maryland district officials said that two individuals were responsible for all of the evaluation systems work.
These officials added that they anticipate budget and staff reductions as they move from their pilot, in which about 100 teachers and 10 principals were evaluated, to full implementation that will cover more than 3,000 people.

Due in part to the difficulty of managing many changes simultaneously, including new curriculum and assessments in many states, in June 2013 Education offered states that have received ESEA waivers or RTT grants the option to request permission from Education to delay the use of their new evaluation systems to inform personnel determinations and consequences for up to 1 year. Education officials noted that many states are already successfully implementing these changes or have requirements in state law about implementation timeframes and thus may not need to request the waiver.

States are also phasing in the consequences they attach to evaluation results for teachers and principals, such as retention rewards and dismissal. According to Hawaii officials, the state plans to fully implement its teacher evaluation system in SY 2013-14, but all consequences related to evaluations will be added the following school year. In several states, RTT districts decide how to use evaluation results to determine consequences. Ohio officials said that RTT districts were required to use evaluation results to inform some personnel decisions—including professional development, retention, and pay for performance—and the state surveyed RTT districts to confirm that they did so. Tennessee officials said that RTT districts were required to use the results of evaluations to inform certain personnel decisions, such as employment, compensation, and dismissal, but that the state did not prescribe the consequences attached to different ratings.

State or district officials in most RTT states (8 of the 12) said they had difficulty developing and using student learning objectives (SLOs) to assess student academic growth for teachers.
SLOs measure student academic growth for teachers in nontested grades and subjects, which represent 65 to 75 percent of teachers nationwide, according to an RSN report. SLOs are learning objectives for groups of students, such as students in a social studies class, that use a specific measure, such as a course exam, to track academic progress throughout a school year. However, some RTT state and district officials said it can be difficult to ensure that these learning objectives are rigorous and accurately measure student learning. Tennessee officials said that while SLOs are popular and promising in theory, they are difficult to reliably implement because some teachers set non-rigorous goals in order to get high scores. Tennessee officials further explained that some teachers selected a schoolwide social studies score for their SLO measure—despite having no connection to the subject—because students did well on that exam, rather than selecting learning objectives relevant to their own subject matter. Officials in three Maryland districts said determining how to measure student academic growth using learning objectives was a challenge because, for example, they may have difficulty assessing students’ abilities when they enter a class, not just when they leave. Officials in a New York district described the difficulty of implementing learning objectives in their small, rural district (see sidebar). Despite these challenges, RTT state and district officials said that SLOs improved their evaluation systems, in part by engaging teachers in the evaluation process and by leading to more in-depth discussions about teacher performance. To address some of their challenges, RTT states developed guidance, templates, or model learning objectives to help districts develop and use SLOs. In addition, states participated in an RSN-sponsored working group on developing SLOs.
They could also access RSN guidance from Education’s website that outlined the benefits of learning objectives and provided information about the elements that comprise rigorous, high-quality learning objectives. Some RTT state and district officials said it was difficult to ensure that principals assess teacher professional practice consistently. For example, officials said it was challenging to ensure consistency in how principals use classroom observations and other evidence, such as lesson plans, to assess a teacher’s instructional methods. State or district officials in 6 of the 12 RTT states expressed concerns that, for example, some principals may not be appropriately identifying teachers who were ineffective and rating them accordingly. Officials in a few of these states attributed this to principals lacking the skill to differentiate between effective and ineffective teachers, or the will to rate teachers in lower categories or to rate them lower than under the prior evaluation system. Officials in Tennessee and in two North Carolina districts said evaluation data have shown that some teachers with low scores on their student academic growth component received high professional practice ratings. They said this may indicate that some principals are inflating scores or not identifying lower-performing teachers and providing critical feedback. Officials in another North Carolina district described a different concern about the mismatch in professional practice ratings and student academic growth. They noted that student academic growth data are not available until the following year and might influence how some principals assess teachers in the year in which the data become available. For example, after receiving data that shows a teacher demonstrated good student academic growth the prior year, a principal might overlook poor classroom management when observing the teacher.
Organizations representing teachers and principals also raised concerns about evaluation consistency (see sidebar). State, district, or union officials in six RTT states described efforts to improve consistency in principals’ evaluations of teachers, generally through training. In New York, officials from a state teachers’ union said they provided training to more than 750 principals on ensuring consistency when conducting teacher evaluations. Tennessee officials said that during the first year of implementation, principals participated in 4 days of training and had to pass a test in order to perform classroom observations. During the second year, Tennessee identified principals who did not evaluate teachers appropriately and provided them with additional support and coaches. Officials also said that Tennessee plans to make its certification test more rigorous. Officials in one North Carolina district said that, in addition to providing state training and workshops on evaluation consistency, district administrators conduct informal classroom walk-throughs to observe teachers and then discuss rating consistency while comparing their notes with the principal’s observation ratings. State or district officials in 11 of the 12 RTT states discussed the difficulty of addressing teacher concerns about the scale of evaluation reform. According to these officials, teachers were concerned about some of the significant changes in the new systems, such as the use of student academic growth data in evaluations and using evaluation results to make personnel decisions (e.g., retention or compensation). For example, state, district, and union officials in Maryland said that teachers did not trust the validity of the state test scores used in some of the student academic growth measures. Officials in one New York district were concerned generally about the rise in annual testing and its use in evaluations to inform personnel decisions. 
District and union officials in New York said the release of teacher evaluation ratings to parents added to concerns about evaluation systems. Officials in one small district said their teachers were particularly concerned because protecting their anonymity might be difficult even if data are aggregated and not linked to individual teachers. Officials in three states and one district said they had difficulty convincing teachers that evaluation systems were focused on professional development, rather than consequences. Some RTT state and district officials said the simultaneous transition to new state assessments and the Common Core curriculum—a single set of educational standards in language arts and math—increased teacher concerns about consequences. For example, North Carolina officials said teachers were concerned about the fairness of measuring student academic growth while schools are implementing a new curriculum. In some RTT states, according to state and union officials, lengthy collective bargaining processes or lawsuits slowed implementation efforts. State and district officials said they took steps to address teacher concerns, in part by involving teachers in the design and implementation of the evaluation systems and through ongoing communication with teachers. State or district officials in 10 of the 12 RTT states highlighted efforts such as teacher participation on committees that designed the systems, teacher involvement in national training workshops, and regular communication and feedback from teachers on implementation. In addition, officials from all three state organizations representing teachers said they helped develop the legal framework or overarching standards for their states’ evaluation systems and participated on committees or provided training to teachers and principals to support their state’s efforts. 
To reduce teacher concerns once reform efforts were under way, officials in Maryland regularly distributed a document to key stakeholders that, among other things, provided updates related to evaluation components. Officials in Georgia said they made presentations in the community, held focus groups in districts, and provided training to help manage the culture shift to the new evaluation system. State or district officials in most RTT states (9 of the 12) said they faced fewer concerns related to principal evaluations due to greater principal support, the smaller scale of implementation, or because principals were used to being evaluated based on student performance. According to North Carolina officials, superintendents used student academic growth in principal evaluations prior to RTT, so principals did not have the same level of concern as teachers. In addition, North Carolina officials noted that the state had 2,600 administrators compared to 95,000 teachers, which made principal evaluation easier to implement. Hawaii officials said implementing principal evaluations was a generally collaborative and productive process for several reasons. For instance, they said principal associations were relatively easy to work with, administrator assessments already existed, and principals understood the need for a new evaluation system and contributed significantly to its design. In another state, officials from a principals’ association echoed the view that principals were accustomed to being evaluated on student academic growth and added that principals in their state were more concerned about teacher evaluations than their own evaluations. Insufficient state and district capacity challenged RTT states’ efforts to design and implement their evaluation systems (see fig. 3). State or district officials in most of these states said they lacked either sufficient staff or needed expertise when they began to reform their evaluation systems. 
Some state officials also said they faced capacity challenges related to supporting district efforts, such as reviewing and approving district evaluation systems and providing technical assistance. For example, Florida officials said that, because their RTT districts had the flexibility to design their own systems, it was difficult to develop solutions to challenges that would be applicable to all RTT districts. State and district officials said that at the local level, districts had difficulty managing principal workloads or prioritizing evaluation reform amid multiple educational initiatives. For example, officials in a New York district said that the time commitment required for observing and evaluating teachers prevented some principals from thoroughly reviewing evidence submitted for evaluations or providing meaningful feedback to teachers. District officials in New York and Maryland told us that their evaluation reform efforts took precedence over other initiatives, such as implementation of the Common Core curriculum. Building capacity to enact education reforms has been a recurring challenge for states and districts, as we have discussed in previous reports. While RTT was designed to encourage education innovation and reform—rather than covering all costs of reform efforts—several state and district officials cited the high cost of designing and implementing evaluation systems as a challenge. For example, officials in Hawaii and Delaware (see sidebar) noted that they underestimated how much it would cost to develop these systems. Similarly, officials in 7 of the 12 districts we spoke with said their RTT funds did not cover the costs of reforming their evaluation systems. For example, one small, rural New York district spent about $62,400 on its teacher and principal evaluation systems in addition to the $22,856 it received in RTT funds. Other New York districts faced similar challenges. 
A 2011 survey conducted by the New York State Council of School Superintendents shows that 81 percent of responding superintendents were concerned that cost considerations might prevent their districts from soundly implementing new evaluation requirements. Cost may have been more of a challenge for some districts because they were responsible for a significant part of the design and implementation work. Six of the 7 districts in which officials raised cost as a capacity challenge were in Maryland and New York, both of which provide RTT districts with significant flexibility to design their own systems. Officials in Tennessee explained that some RTT districts in their state did not have funding concerns because they used the evaluation system and data system provided by the state. States and districts responded to capacity challenges through different efforts to supplement their staff and resources (see fig. 4). Several RTT states also submitted amendment requests and received approval from Education to shift funds among RTT projects to provide additional funding for particular aspects of their evaluation systems. For example, Tennessee shifted approximately $1.1 million to support, among other things, additional training on evaluation systems because the state did not originally estimate sufficient funds for this purpose. Similarly, New York increased its budget for its evaluation systems by $11.9 million by shifting funds to develop its student academic growth model, pilot evaluation system software, and provide additional resources to districts. State or district officials in most RTT states (10 of the 12) said that fewer staff or other resources after RTT grant funds are no longer available could affect their ability to sustain their evaluation systems. For example, Rhode Island officials said they will likely lose staff that they hired using RTT funds because the state may not be able to use other education funds to make these positions permanent. 
Officials in New York said that with the loss of RTT funding, the state will have fewer staff to review district evaluation plans every year and to provide technical assistance to districts, as well as to manage the analysis of statewide evaluation data. District of Columbia officials were concerned that without RTT funds, they would be unable to pay the contractor that operates the student academic growth model used by its charter school districts. Officials from all 12 RTT states said they are considering how to sustain their evaluation systems after RTT grant funds are no longer available. Officials in a few of these states discussed some of the difficulties they have faced in preparing for sustainability, such as turnover in state leadership and uncertainty over future funding levels, and a few officials provided examples of how they might address sustainability. For example, Hawaii officials said they are considering how to reallocate funds to sustain the systems but are concerned about the availability of other federal and state funds to do so. Georgia officials said they are collaborating with stakeholders to develop a sustainability plan—to be completed in summer 2013. In addition, Florida officials said they were working to ensure that they have in-house expertise on all aspects of the evaluation systems. For example, contractors who assisted with the state’s student academic growth component will train state staff on how to run the models. Officials in 5 of the 12 RTT states told us more information from Education could help address their concerns about sustaining their evaluation systems and other reforms after RTT grant funds are no longer available. Specifically, state and district officials from some of these states told us they were concerned about or would like guidance on how to use other federal funds to support their evaluation systems. 
For example, officials in one state said Education issued some guidance on acceptable uses of ESEA funding, but could provide more concrete information on how best to leverage those funds for RTT initiatives. In addition, Education officials told us a few states have requested technical assistance to support their sustainability planning. Officials from four states told us it was too soon for them to know whether they would need Education’s assistance with sustainability. Education developed a new process to monitor RTT states’ progress toward meeting their RTT goals, including those related to teacher and principal evaluation systems. Education officials said that the RTT monitoring process differs from the department’s other monitoring efforts in that Education has more frequent contact with the states in order to identify and address implementation challenges. In addition, the new process emphasizes states’ continuous improvement and quality of RTT reforms, rather than focusing solely on compliance with laws and regulations and the ability of states to meet their time frames. Education officials said the intensity of communication with RTT states and the quality standards are greater for RTT than for Education’s previous monitoring efforts. Education developed the new process to provide assistance to RTT states as they implement comprehensive reforms and to differentiate support based on individual state needs. Education officials said they work to identify and address obstacles to the goals states established in their RTT plans through ongoing communication, including monthly monitoring calls, the amendment consideration process, and other contacts with RTT state officials. To assess the quality of implementation efforts, officials said they consider each state’s progress toward its goals and timelines, risk factors and strategies for addressing them, and the state’s own assessment of its quality of implementation, among other factors.
For example, in addition to verifying that a state implemented an evaluation tool, such as a test or performance measure, Education officials work with the state to ensure that the tool is meeting the state’s needs. Instead of focusing solely on RTT compliance, program officers also help identify areas in which Education can assist states in meeting their goals, according to Education officials. Officials from 8 of the 12 RTT states expressed generally positive views about Education’s RTT monitoring activities. Some said, for example, that Education officials were collaborative and well-informed and that they generally provided useful feedback. For example, officials from one state said that Education staff were very detailed and thorough in monthly monitoring calls and that they usually provided actionable feedback. Officials from another state said they spoke almost daily with Education officials and received strong support. They noted that, as a result, monitoring reviews were not stressful, and they were not surprised by the results. Officials from another state said they appreciated the discussion with Education officials about the state’s amendment requests and how Education worked with them to ensure that the state maintained its original RTT goals. While RTT state officials expressed generally positive views about the monitoring process overall, officials in nine states expressed concerns about specific aspects of the process, including delays in the amendment process, time-consuming monthly calls and related requirements, and slow feedback from Education after site visits. Officials from one state said monitoring requirements seemed more burdensome than those for other federal education programs. Education officials stated that they have revised some aspects of their monitoring process in response to state feedback.
For example, they modified the monthly monitoring call and onsite review protocols, revised the amendment process and dollar threshold amounts that require approval, and worked to explain the rationale and use of the information Education requests. To ensure that states are held accountable for meeting their RTT goals for teacher and principal evaluation systems, Education may take the following corrective actions for states that have not demonstrated adequate progress in implementing their systems:

Conditional amendment approval. If Education has concerns about a state’s requested amendments to its RTT plans, it may grant conditional approval, requiring the state to provide additional information over a period of time. For example, Rhode Island submitted a proposed amendment requesting a change related to its use of SLOs. Education approved the request on the condition that the state provide additional information, such as quarterly progress updates during SY 2012-13 and additional reports. In addition, Maryland received approval to decrease the percentage of the evaluation component that is based on student academic growth models on the condition that the state provide Education a plan for a statewide field test of its evaluation systems. Maryland was also required to commit to measuring student academic growth using common assessments of high school teachers and principals when those assessments are available, among other requirements. Education may also elect not to approve an amendment request.

High-risk status. Education placed 2 of the 12 RTT states—Georgia and Hawaii—on high-risk status because officials determined that the states required intensive attention and support in order to meet their RTT goals. In July 2012, Education placed the teacher and principal evaluation portion of Georgia’s RTT grant on high-risk status because officials were concerned about the overall strategic planning, evaluation, and project management of the evaluation system.
Education officials also expressed concern that Georgia had requested two major amendments that seemed to constitute significant changes to the evaluation system in the state’s approved plan. As a result of the high-risk designation, Georgia was required to provide Education a revised work plan for its system, monthly updates in accordance with the work plan, and related information. As of July 2013, Education officials said the evaluation system portion of Georgia’s RTT grant remains on high-risk status because of Education’s continued concerns about the quality of implementation. In December 2011, Education designated Hawaii’s entire RTT grant as high-risk because the state experienced major delays and made inadequate progress on implementing its systems and because the scope and breadth of amendment requests indicated a potentially significant shift in the state’s approved plans. Education temporarily placed Hawaii on a cost-reimbursement basis, which required the state to submit receipts for expenditures to the department prior to drawing down grant funds. The state was also required to submit documentation prior to obligating funds and to submit a revised scope of work and budget. As of July 2013, Education had removed Hawaii’s high-risk designation based on the state’s demonstrated progress in implementing its RTT reforms, including its evaluation systems.

Additional information. Education has required certain reporting or follow-up information, other than that included in conditional approval of amendments or high-risk status, and has used other measures deemed appropriate. For example, according to Education officials, one state that experienced procurement problems is required to provide monthly procurement information to Education.

Withholding of funds. Although Education may withhold grant funds from states if they do not comply with the terms of the award, Education officials said they have not withheld funds from any RTT state.
Officials added that states have always demonstrated progress toward addressing Education’s concerns. Education helps states meet their RTT goals and implement high-quality reforms by providing technical assistance, including access to experts and information on options for evaluation systems. Education officials said technical assistance helps states resolve implementation issues, including those identified through the monitoring process. Most RTT federal assistance is provided by the contractor-supported RSN, and Education also provides some technical assistance to RTT states. RSN officials said they work closely with Education staff to learn about the types of technical assistance that might be useful to states on teacher and leader effectiveness, including teacher and principal evaluations, as well as other RTT topics. From 2010 through March 2013, RSN provided technical assistance on teacher and leader effectiveness in group settings—such as webinars and in-person meetings—to RTT states, as well as individualized technical assistance (see fig. 5). RSN officials said they provide individualized assistance to states when requested, particularly for states in more advanced stages of implementation with needs that could not be met through larger group technical assistance activities. RSN also developed publications related to teacher and leader effectiveness, including case studies, tool kits, and lessons learned, and has provided them through the RTT grantee web portal. These publications included longer reports on school reform and shorter briefs, such as a paper that described rules governing classroom observations used in teacher evaluations in selected RTT states. RSN has worked to strengthen the quality of its technical assistance and adapt to states’ changing needs, according to Education and RSN officials. They said that early in the contract, RSN revised its approach to better meet the needs of states.
For example, in response to state feedback, RSN provided states with access to education practitioners who had worked in schools rather than experts without hands-on experience, as they had done in the initial stages of the contract. RSN officials also said issues for which states requested technical assistance changed as implementation progressed, and RSN adapted its technical assistance accordingly. For example, early in implementation, states often requested assistance with designing evaluation systems, communicating with stakeholders, working with unions, and measuring growth in nontested grades and subjects, according to RSN officials. As implementation progressed, states requested assistance with issues such as the consistency of observations and the sustainability of evaluation systems. Officials from 10 of the 12 RTT states told us that Education’s technical assistance related to teacher and principal evaluation systems was generally helpful, and officials in several states said assistance had improved since the start of the contract. Officials from Hawaii said RSN had helped states by sharing existing knowledge and developing new information. Officials in Massachusetts told us that RSN’s in-person meetings had been especially helpful because they provided a platform for states to share best practices. Although most states were complimentary of RSN assistance, officials in some states said RSN and Education could improve technical assistance by providing additional information on specific topics, such as information about states that have successfully implemented evaluation systems and more opportunities to share lessons learned. A recent survey by RSN also indicates that states are generally satisfied with the contractor’s technical assistance on teacher and principal evaluation systems. 
In March 2013, RSN surveyed and obtained responses from officials in all RTT states regarding their perception of technical assistance, including assistance provided through the teacher and leader effectiveness community of practice. Sixteen of the 18 states that participated in teacher and leader effectiveness assistance reported that they were satisfied or very satisfied with the assistance, and the remaining states were neutral. On multiple dimensions, the state officials rated assistance with teacher and leader effectiveness higher than other areas and higher than RSN activities overall. States also identified opportunities for strengthening technical assistance by ranking potential topics on the basis of impact and urgency. For teacher and leader effectiveness, states ranked continuous improvement of teacher evaluation as one of the top areas of interest, according to RSN. Officials in a few states mentioned that they would like more opportunities to collaborate and learn from one another. Education plans to provide information to RTT states on sustaining teacher and principal evaluation systems and other reforms, but Education officials said they have not yet done so. Education planned to launch a new workgroup in the summer of 2013 to help states consider how to sustain their evaluation systems after the RTT grant ends. Draft plans for the work group included providing expert and peer-to-peer support and developing a sustainability rubric. The plans also included providing workshops on sustainability efforts, including ones on state capacity, performance management, and communication, and eventually developing and sharing case studies. In July 2013, Education officials said they had postponed work in this area until fall 2013. Although Education has not asked states to provide specific plans for addressing sustainability, department officials said they have learned about state plans through ongoing communication with states. 
For example, a few states discussed their sustainability planning during monthly monitoring calls. In addition, Education obtained information on state sustainability strategies through RTT applications. In general, states provided this information on RTT reforms as a whole. We did not identify any states as having provided a sustainability strategy specific to teacher and principal evaluation systems. To allow states more time to accomplish the goals and deliverables they committed to in their RTT plans, Education officials will consider requests for no-cost extensions, but they have not determined how to provide technical assistance during the extension period. States may request extensions on a case-by-case basis for more time to spend awarded funds for those aspects of their RTT reforms that require additional work. If approved, a state could have until September 30, 2015, to obligate and liquidate its remaining RTT grant funds. According to Education officials, states that request no-cost extensions will be required to provide to Education their plans to address sustainability, among other information. As of July 2013, Education officials had approved two no-cost extensions related to teacher and principal evaluation systems, and officials in an additional state told us they had submitted an extension request related to their evaluation systems. Officials in 6 more of the 12 RTT states told us they are considering requesting extensions related to their evaluation systems. However, it is not clear what technical assistance would be available to states approved for no-cost extensions. The current contract for technical assistance ends in September 2014, and RSN officials said they do not have plans to sustain technical assistance beyond the duration of the current contract. Education officials said they were working to identify options for providing continued technical assistance.
In addition to the materials currently available, Education officials told us that they were developing additional resources and materials on topics such as SLOs, observation rubrics, rating inflation, teacher engagement, data analytics, and leadership development. Education identified some products for targeted dissemination and presented them in conferences, cross-program meetings, and to organizations such as the National Governors Association, in order to promote awareness of the resources available. Education is also working to develop a more robust dissemination plan that includes ways to reach people other than state-level leaders, according to officials. Education created the RTT grant program to encourage sweeping changes in K-12 education. RTT spurred changes to the way states and districts evaluate their teachers and principals, particularly with the addition of student academic growth data as a factor in assessing effectiveness. Education has been proactive in monitoring states’ progress in implementing their evaluation systems, and the department’s continued monitoring and assistance will be important to help RTT states overcome challenges and implement the reforms to which they committed. In addition, Education’s new monitoring process has resulted in a wealth of information on states’ efforts. As a result, Education is uniquely positioned to use the lessons learned from RTT states to inform other states’ efforts to improve teacher effectiveness and ultimately raise student academic achievement. We provided a draft of this report to the Department of Education for review and comment. Their comments are reproduced in appendix I. Education also provided technical comments, which we incorporated into the report. We are sending copies of this report to the appropriate congressional committees and the Secretary of Education. In addition, the report is available at no charge on GAO’s website at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the contact named above, Elizabeth Morrison, Assistant Director, Nisha R. Hazra, Marissa Jones, and Michael Kniss made significant contributions to this report. Also contributing to this report were Deborah Bland, Sarah Cornetto, Jamila Jones Kennedy, Amy Moran-Lowe, Jean McSween, Mimi Nguyen, Jason Palmer, Kathleen van Gelder, and Rebecca Woiwode.

School Improvement Grants: Education Should Take Additional Steps to Enhance Accountability for Schools and Contractors. GAO-12-373. Washington, D.C.: April 11, 2012.

Race to the Top: Characteristics of Grantees’ Amended Programs and Education’s Review Process. GAO-12-228R. Washington, D.C.: December 8, 2011.

Race to the Top: Reform Efforts are Under Way and Information Sharing Could Be Improved. GAO-11-658. Washington, D.C.: June 30, 2011.

Department of Education: Improved Oversight and Controls Could Help Education Better Respond to Evolving Priorities. GAO-11-194. Washington, D.C.: February 10, 2011.

Grant Monitoring: Department of Education Could Improve Its Processes with Greater Focus on Assessing Risks, Acquiring Financial Skills, and Sharing Information. GAO-10-57. Washington, D.C.: November 19, 2009.

Student Achievement: Schools Use Multiple Strategies to Help Students Meet Academic Standards, Especially Schools with Higher Proportions of Low-Income and Minority Students. GAO-10-18. Washington, D.C.: November 16, 2009.

No Child Left Behind Act: Enhancements in the Department of Education’s Review Process Could Improve State Academic Assessments. GAO-09-911. Washington, D.C.: September 24, 2009.
Teacher Quality: Sustained Coordination among Key Federal Education Programs Could Enhance State Efforts to Improve Teacher Quality. GAO-09-593. Washington, D.C.: July 6, 2009.

No Child Left Behind Act: Improvements Needed in Education’s Process for Tracking States’ Implementation of Key Provisions. GAO-04-734. Washington, D.C.: September 30, 2004.

Recovery Act: Opportunities to Improve Management and Strengthen Accountability over States’ and Localities’ Uses of Funds. GAO-10-999. Washington, D.C.: September 20, 2010.

Recovery Act: One Year Later, States’ and Localities’ Uses of Funds and Opportunities to Strengthen Accountability. GAO-10-437. Washington, D.C.: March 3, 2010.

Recovery Act: Status of States’ and Localities’ Use of Funds and Efforts to Ensure Accountability. GAO-10-231. Washington, D.C.: December 10, 2009.
Education created RTT under the American Recovery and Reinvestment Act of 2009 to provide incentives for states to reform K-12 education in areas such as improving the lowest performing schools and developing effective teachers and leaders. In 2010, Education awarded 12 states nearly $4 billion in RTT grant funds to spend over 4 years. A state's RTT application and scope of work included the state's plans for development and implementation of teacher and principal evaluation systems by participating school districts. These systems assess teacher and principal effectiveness based on student academic growth and other measures, such as observation of professional practice. Currently, additional states are designing and implementing similar evaluation systems.

GAO was asked to review RTT teacher and principal evaluation systems. This report examines (1) the extent to which the 2010 grantee states have implemented their teacher and principal evaluation systems, (2) the challenges the grantee states have faced in designing and implementing these systems, and (3) how Education has helped grantee states meet their RTT objectives for teacher and principal evaluation systems. GAO reviewed relevant federal laws, regulations, and guidance; analyzed RTT applications and documentation on each state's guidelines for their evaluation systems; and interviewed officials from all 12 states, selected districts, and Education.

By school year 2012-13, 6 of the 12 Race to the Top (RTT) states had fully implemented their evaluation systems (i.e., for all teachers and principals in all RTT districts). However, their success in fully implementing by the date targeted in their RTT applications varied. Three of these states met their target date while three did not, for various reasons, such as needing more time to develop student academic growth measures. The six states that did not fully implement their systems either piloted them or partially implemented them. The scope of the pilots varied.
One state piloted its system with about 14 percent of teachers and principals, while another piloted with about 30 percent of teachers. State or district officials in four of the six states expressed some concerns about their readiness for full implementation. Officials in most RTT states cited challenges related to developing and using evaluation measures, addressing teacher concerns, and building capacity and sustainability. State officials said it was difficult to design and implement rigorous student learning objectives--an alternate measure of student academic growth. In six states, officials said they had difficulty ensuring that principals conducted evaluations consistently. Officials in 11 states said teacher concerns about the scale of change, such as the use of student academic growth data and consequences attached to evaluations, challenged state efforts. State and district officials also discussed capacity challenges, such as too few staff or limited staff expertise and the difficulty of prioritizing evaluation reform amid multiple educational initiatives. Officials in 10 states had concerns about sustaining their evaluation systems.

Education helps RTT states meet their goals for teacher and principal evaluation systems through a new monitoring process and through technical assistance. Education officials said the RTT monitoring process differs from other monitoring efforts in the frequency of contact with the states and the emphasis on continuous improvement and quality of RTT reforms. Officials from 8 of the 12 RTT states expressed generally positive views about Education's monitoring. When states have not demonstrated adequate progress, Education has taken corrective action. For example, Education designated two states as high-risk, which resulted in additional monitoring. Education provides technical assistance through a contractor; officials from 10 RTT states told us that assistance related to evaluation systems was generally helpful.
Education officials said they plan to provide RTT and nongrantee states with more information to support their efforts. GAO is not making recommendations in this report.
Nursing homes play an essential role in our health care system. They care for persons who are temporarily or permanently unable to care for themselves but who do not require the level of care provided in an acute care hospital. Titles XVIII and XIX of the Social Security Act establish minimum standards that all nursing homes must meet to participate in the Medicare and Medicaid programs. The states and the federal government share responsibility for oversight of the quality of care in the nation’s 17,000 nursing homes. Oversight includes routine and follow-up surveys to assess compliance with standards and enforcement activities to ensure that identified deficiencies are corrected and remain corrected. At the direction of the Congress, HCFA sets standards for nursing homes’ participation in Medicare and Medicaid. HCFA also contracts with state agencies to check compliance with these standards through surveys at least every 15 months. States also enforce their own licensing requirements in all state-licensed nursing homes, including those with Medicare certification, and check for compliance with these licensure requirements during standard surveys. States also conduct surveys in response to complaints. Enforcement of Medicare and Medicaid standards is likewise a shared responsibility. HCFA is responsible for enforcing standards in homes with Medicare certification—about 86 percent of all homes. When homes are found to have deficiencies at the most severe level, or when homes fail to correct deficiencies in a timely manner, HCFA policies call for states to refer these cases to HCFA, together with any recommendations for sanctions. HCFA normally accepts these recommendations but can modify them. States are responsible for enforcing standards in homes with only Medicaid certification—about 14 percent of all homes. 
As part of the Omnibus Budget Reconciliation Act of 1987 (OBRA 87), the Congress changed the focus of standards that homes needed to meet to participate in Medicare and Medicaid. Prior to OBRA 87, the Medicare and Medicaid participation standards focused on a home’s capability to provide care, not on the quality of care actually provided. Largely in response to a 1986 Institute of Medicine study, which recommended more resident-oriented nursing home standards, OBRA 87 refocused standards on the actual delivery of care and the results of that care. For example, the focus of the standards moved to such matters as a home’s performance in providing appropriate care for incontinence or for preventing pressure sores, and the performance would be evaluated by reviewing medical records and examining residents. To ensure that facilities would achieve and maintain compliance with the new standards, OBRA 87 also greatly expanded the range of enforcement sanctions. Studies of nursing home regulation had shown that many homes tended to cycle in and out of compliance with standards that were important to protecting residents’ health and safety, thereby placing nursing home residents in jeopardy. For example, in 1987 we reported that more than one-third of nursing homes reviewed failed to consistently meet one or more of the standards that were most likely to adversely affect residents’ well-being. These facilities were nevertheless able to remain in Medicare and Medicaid without incurring any penalties if the deficiencies were corrected in a timely manner. As such, there was no effective federal penalty to deter noncompliance. At that time, the only sanctions available were termination from the program or, under certain circumstances, denial of payments for new Medicare or Medicaid residents. OBRA 87 added several new alternatives, such as civil monetary penalties, and expanded the deficiencies that could result in denial of payments. (See table 1.) 
the Committee amendment would expressly allow a State to impose civil money penalties for each day in which a facility was found out of compliance with one or more of the requirements of participation, even if the facility subsequently corrected its deficiencies and brought itself into full compliance. This, in the Committee’s view, is essential to creating a financial incentive for facilities to maintain compliance with the requirements for participation (emphasis added). The Department of Health and Human Services (HHS) issued regulations implementing OBRA 87 in two stages. Regulations implementing standards were effective by October 1990, but enforcement regulations covering sanctions did not become effective until July 1995. According to a HCFA official, publication of enforcement regulations was delayed because of the controversial nature of the regulation and the workload associated with responding to the large volume of comments received in the rulemaking process. OBRA 87 gave the HHS Secretary authority to specify criteria as to when and how each sanction should be applied. In developing the regulations implementing these sanctions, HCFA proceeded on the assumption that, while all standards must be met and enforced, failure to meet a standard takes on greater or lesser significance depending on the circumstances and the actual or potential effect on residents. Thus, the regulations established an approach for determining the relative seriousness of each instance of noncompliance with standards. For each deficiency identified in the survey process, the approach places the deficiency in one of 12 categories, labeled “A” through “L” depending on the extent of patient harm (severity) and the number of patients adversely affected (scope). 
The most dangerous category (L) is for a widespread deficiency that causes death or serious injury to residents or has the potential to do so; the least dangerous category (A) is for an isolated deficiency that poses no actual harm and has potential for only minimal harm. Homes with deficiencies that do not exceed the C level are considered in “substantial compliance” and, as such, are considered to be providing an acceptable level of care. The effect of HCFA’s categorizing is that for a home to be out of compliance, it must have one or more deficiencies that subject a resident to at least the potential for more than minimal harm. Identifying the scope and severity of a deficiency also provides the basis for three groups of enforcement sanctions, which may be required or optional. (See table 2.) Homes in substantial compliance are not subject to sanctions. For noncompliant homes referred to HCFA for sanction, the severity of the sanction that must or can be imposed generally increases with the severity of the deficiency. For each category in the scope and severity grid, a sanction from a particular group must be imposed and sanctions from certain other groups can be added. For example, a home with one or more deficiencies rated J or higher must receive a sanction from group 3, and HCFA has the option of levying additional sanctions from groups 1 or 2. HCFA regulations provide that the choice of sanctions is to take into account not only the severity and scope of the deficiency but also the home’s prior performance, the desired corrective and long-term compliance, and the number and severity of all of the home’s deficiencies together. Under their shared responsibility for Medicare-certified nursing homes, state agencies identify and categorize deficiencies and make referrals with proposed sanctions to HCFA. HCFA is responsible for imposing sanctions and collecting monetary penalties.
Under HCFA’s policies, most homes are given a grace period, usually 30 to 60 days, to correct deficiencies identified in the standard or complaint surveys. States do not refer these homes to HCFA for sanction unless they fail to correct their deficiencies within the grace period. Exceptions are provided for homes with deficiencies rated J, K, or L and for homes that meet HCFA’s definition of a “poorly performing facility”—a special category of homes with repeat severe deficiencies. HCFA policies call for states to refer these homes immediately for sanction. HCFA also requires a notice period before the sanction takes effect. When a HCFA regional office receives a referral from a state, it reviews the case and the state’s recommendation, decides whether to impose a sanction, and notifies the home if a sanction is to be imposed. Under HCFA regulations, homes have 15 to 20 days to come into compliance, and if a home does so by the deadline, the sanction does not take effect. There are two major exceptions. One is a civil monetary penalty, which can be assessed retroactively even if a home corrects the problem. The other is when a nursing home is found to have a deficiency rated J, K, or L. In this circumstance, HCFA may put a sanction into effect after a 2-day notice period. National data on nursing home surveys for July 1995 to October 1998 showed that the proportion of homes with the most severe deficiencies remained at uncomfortably high levels throughout this period. The total number of homes not in compliance, even for the most serious deficiency categories, remained relatively steady. Furthermore, about 40 percent of the homes found to have serious deficiencies in a survey early in the period were found to have deficiencies of equal or greater severity in a subsequent survey late in the period. Compliance with nursing home standards of care continued to be a problem during the entire 3-year period we examined. 
Comparing the number of cited deficiencies per noncompliant nursing home during this period showed little overall change from the first, or base, survey (3.79) to the most recent survey (3.74). In the earlier set of surveys, 28 percent of homes had at least one deficiency in the two highest severity categories (actual or potential for death or serious injury and other actual harm); in the most recent set of surveys, the figure was 27 percent (see table 3). In the two highest severity categories, common deficiencies included inadequate attention to prevent pressure sores, failure to provide supervision or assistance devices to prevent accidents, and failure to assess residents’ needs or provide necessary care. Table 4 shows the most frequently cited violations in these severity groups for surveys conducted in the most recent survey period. Although most noncompliant homes eventually returned to compliance, many did not maintain this status. Among those homes cited for deficiencies at the two highest levels of severity during the base survey, about 40 percent were cited for deficiencies at the same or higher level of severity during the most recent survey. In other words, during the 3-year period, 4 of 10 homes that were found by the base survey to have caused actual or potential death or serious injury or other actual harm to residents had deficiencies (possibly different deficiencies) that were just as severe—or worse—in the most recent inspection. Although we focused our analysis on deficiencies in the most severe categories, we noted that among those homes with deficiencies considered to hold potential for more than minimal harm in the first survey, about 77 percent were cited for deficiencies (again, possibly different ones) at the same or higher level of severity during the most recent survey. To determine the role sanctions play in bringing about a greater degree of compliance, we focused on a sample of 74 homes that had been referred for sanctions. 
The case histories of these homes showed that sanctions helped bring the homes back into temporary compliance but provided little incentive to keep them from slipping back out of compliance. While several aspects of the sanction program, such as civil monetary penalties, have potential to provide the necessary incentive to better ensure continued compliance, certain HCFA policies or practices limited their effectiveness with these homes. The 74 homes we reviewed had been referred by the states to HCFA for possible sanctions a total of 241 times—on average, about 3 times each. All 74 homes also had at least one deficiency that caused actual harm to residents or placed residents at risk of serious injury or death. Some referrals were accompanied by a recommendation for one sanction, while others were accompanied by recommendations for two or more. The most common sanction initiated by HCFA was denial of payments for new admissions—176 times. HCFA also initiated 115 civil monetary penalties and 44 terminations. Many homes corrected their deficiencies after being notified that a sanction would be imposed. In these cases, HCFA rescinded the sanction. (See table 5.) For example, denial of payment never took effect in 97 of the 176 instances in which HCFA gave notice that a sanction would be imposed. Rescission usually occurred because the facility corrected the deficiency before the effective date of the sanction. The ability of sanctions to help bring about corrective action is reflected in the fact that, at the time of our study, only 7 of the homes in our sample that were sanctioned with termination remained terminated from the Medicare and Medicaid programs. However, sanctions—or the penalties they carry—only temporarily induced homes to take action to correct identified deficiencies, as many were again out of compliance by the time the next survey or follow-up inspection was conducted.
Of the 74 homes we reviewed, 69 were again referred for sanctions after being found out of compliance once more—some went through this process as many as six or seven times. Table 6 shows some of the cases in our sample where homes had been cited for serious deficiencies, referred to HCFA for sanctions, and subsequently cited for serious deficiencies again. This yo-yo pattern of compliance and noncompliance could be found even among homes that were terminated from Medicare, Medicaid, or both. Termination is usually thought of as the most severe sanction and is generally done only as a last resort. Once a home is terminated, however, it can generally apply for reinstatement if it corrects its deficiencies. For three of the six reinstated homes in our group, the pattern of noncompliance returned. For example, a Texas nursing home was terminated from Medicare for a string of violations that included widespread deficiencies at the severity level of actual harm to residents. About 6 months after the home was terminated, it was readmitted under the same ownership. Within 5 months, state surveyors identified a series of deficiencies involving harm to residents, including failure to prevent avoidable pressure sores or ensure that residents received adequate nutrition. Other sanctions authorized by OBRA 87—increased state monitoring, appointment of a temporary manager to oversee the home while it corrects its deficiencies, and state-directed plans of correction (see table 1)—have so far been applied infrequently. All three are receiving limited use, state officials said, because of various cost and administrative concerns. For example, officials in three of the four states said they lacked a pool of qualified administrators to act as temporary managers. Michigan was an exception to this pattern. In the first quarter of 1998, Michigan entered into a contract with the Michigan Public Health Institute to provide oversight of facilities with significant compliance problems. 
Oversight activities focus on directed plans of correction and state monitoring. Sanctions have been unable to ensure continued compliance because several procedures for implementing sanctions can minimize their effectiveness or invalidate them altogether. Civil monetary penalties, a sanction with strong potential deterrent effect, were hampered by a growing backlog of appeals. The option of imposing sanctions without a grace period was seldom used because of restrictive HCFA guidance. And termination, the ultimate sanction because it removes homes from the program, had little effect because many homes were able to reenter the program with little consequence for their past actions and were given a clean slate for the future. Civil monetary penalties have an advantage in encouraging homes to remain in compliance—they can be applied retroactively to the date of initial noncompliance. In other words, they cannot be avoided simply by taking corrective action, and the longer the deficiency remains, the larger the penalty can be. HCFA initially planned to make wide use of the new sanctions when they were put in place but has since modified its policy by reserving civil monetary penalties for more serious deficiencies (G or higher in the scope and severity grid). However, the use of civil monetary penalties for even this narrow range of deficiencies has resulted in a growing backlog of appeals. Nursing homes can appeal civil monetary penalties before HHS’ Departmental Appeals Board. Appealed penalties are not collected until the case is closed, usually through the ruling of an administrative law judge or a negotiated settlement between HCFA and the nursing home. Nationwide, a lack of hearing examiners had created a backlog of about 620 cases awaiting decision as of August 1998, with some cases dating back to 1996. By February 1999, the backlog had grown to over 700 cases and was predicted to grow further.
HHS budget documents estimated that each year at least twice as many appeals would be received as would be settled. This backlog creates a bottleneck for timely collections. For example, HCFA accounting records showed that, as of September 1998, only 37 of the 115 monetary penalties imposed on the 74 homes we reviewed had been collected. Unless penalties are actually collected, they have minimal deterrent effect. Large backlogs undermine the effectiveness of civil monetary penalties in two ways. First, they increase the pressure on HCFA to resolve appeals by negotiating settlements—a strategy that helps somewhat in controlling the growth of the backlog but can also lower the size of the fine, potentially reducing the effect of the penalty. Second, even if the appeal goes to a hearing and a penalty is upheld, considerable time may have elapsed without the home having to pay. As a result, it is not surprising that some nursing home owners routinely appeal imposed penalties. For example, regional enforcement logs showed that one large Texas nursing home chain appealed 62 of the 76 civil monetary penalties imposed on its nursing homes (including chain-owned homes that were not in our sample) between July 1995 and April 1998. These 62 penalties totaled $4.1 million. Under HCFA policy, HCFA can apply sanctions on an immediate basis (that is, without a grace period to correct deficiencies) to homes designated as poor performers and to homes that place residents in immediate jeopardy (actual death or serious injury or potential for such an outcome). Doing so can help encourage sustained compliance because eliminating the grace period means that homes are more likely to be affected by penalties. However, HCFA’s guidance for when to apply poor performer and immediate jeopardy designations has allowed severe and repeat violators to avoid immediate sanctions.
Until September 1998, HCFA’s definition of a poorly performing home was so narrow that it excluded many nursing homes that had repeated deficiencies causing actual harm to residents. In our earlier report on California nursing homes, we found that 73 percent of homes cited repeatedly for harming residents did not meet HCFA’s definition of a poorly performing facility. In the other states we visited, we also found homes with severe and repeated deficiencies that were not designated as poor performers and thus avoided immediate sanctions. HCFA has since revised its definition to broaden the circumstances under which a nursing home could be designated as a poorly performing facility. The new definition includes homes with any deficiencies rated H or higher in the scope and severity grid on their current survey and in their previous standard survey or any intervening survey (including complaint investigations). HCFA said it would expand the definition in 1999 to include deficiencies rated G. The revision, however, narrowed the definition in certain other respects, such as shortening the period during which deficiencies could be considered from the previous two surveys to the most recent one. The revised definition also excluded F-rated deficiencies (widespread potential for more than minimal harm) from consideration of poorly performing facility status. Because the changes are so recent, it is too early to tell what their effect will be on the number of homes designated as poor performers. A second area—which HCFA has not addressed—involves referral of homes cited for deficiencies that contributed to the death of a resident. We found several examples where state surveyors cited the deficiency during a complaint investigation that took place some time after the incident and found that the deficient practice contributing to the death had ceased at the time of the investigation.
Under HCFA policy, such deficiencies corrected at the time of the investigation are considered “past noncompliance” and are to be cited as isolated actual harm, level G in HCFA’s scope and severity grid. HCFA does not require homes with level-G deficiencies to be referred for sanctions. As a result, homes cited for deficiencies so severe that they contributed to resident deaths may not be referred to HCFA for sanctions at all. By allowing these homes to escape immediate sanction, HCFA loses much of its ability to deter future noncompliance. Table 7 shows examples of homes that were not referred for immediate sanction. Another group of homes that can largely avoid the threat of immediate sanction, even though they have exhibited a pattern of recurring and serious noncompliance, consists of those that have been terminated from Medicare and subsequently readmitted. After a terminated home has been readmitted to Medicare, HCFA policy prevents state agencies from considering the home’s prior record in determining if the home should be designated as a poorly performing facility, effectively giving the home a “clean slate.” This policy produces the disturbing outcome that termination could actually be advantageous to a home with a poor history of compliance because this history would no longer be considered in making enforcement decisions after it was readmitted to Medicare. Given the continuing spotty performance we found among those homes in our sample that had been terminated and subsequently reinstated, this policy merits reexamination. Two other aspects of HCFA’s use of termination also limit its effectiveness. First, HCFA typically paid terminated homes in our sample for 30 days after termination regardless of whether transfers of patients were under way. This policy in effect gives terminated homes 30 extra days of payment while they seek reinstatement.
Second, HCFA generally used a short “reasonable assurance period” to determine if homes seeking reinstatement to Medicare had corrected their problems and were otherwise complying with the standards. While HCFA can make this period last up to 180 days, the homes we examined were given reasonable assurance periods of 15 to 60 days—a shorter period that provides less assurance that homes can sustain long-term compliance. Recent actions taken or proposed by HCFA to improve nursing home oversight can help make sanctions more of a deterrent against continued noncompliance, but on their own they are not enough to fully address the problems we identified. HCFA began a series of actions in response to our earlier report on California nursing homes and its own July 1998 report to the Congress summarizing a 2-year study of nursing home regulation. These actions address a number of problems we identified in our earlier report but do not resolve all of them, nor do they resolve additional problems we have identified through our ongoing work. Further, weaknesses in HCFA’s management information systems will continue to limit HCFA’s ability to implement its initiatives and further strengthen its enforcement processes. In July 1998, HHS announced several actions that HCFA would take to toughen enforcement of nursing home regulations, particularly focusing on homes with serious and repeat deficiencies. The actions include plans to expand the definition of “poorly performing facility” to include more homes with repeat deficiencies that harmed residents. HCFA also directed that the results of intervening surveys, such as complaint investigations, be considered in determining whether a home should be designated as “poorly performing.” The actions also called for increased survey frequency for homes with the most chronic compliance problems and for focusing enforcement efforts on nursing homes in chains that have a record of noncompliance with federal rules.
With regard to the problems we have identified in this report, however, HCFA’s actions leave several issues unresolved. HCFA may be able to resolve one of the issues (the backlog of civil monetary penalty appeals) if HHS’ budget request for additional staff positions is adopted. However, there are no actions under way with regard to two other issues—referring homes for sanction in all cases where deficiencies contributed to the death of a resident and better using the deterrent effect of termination from the Medicare and Medicaid programs (see table 8). HCFA initiatives also include a proposal to allow civil monetary penalties to be assessed for each instance of noncompliance as an alternative to assessing them for each day out of compliance. Since the proposed regulation had not been issued at the time we completed our review, we were not able to evaluate the effect, if any, it could have on increasing the use of civil monetary penalties. HCFA’s initiatives to focus more oversight on homes with serious and repeat noncompliance are likely to encounter obstacles due to three weaknesses in HCFA management information: the inability to centrally track enforcement actions, the lack of needed data on the results of complaint investigations, and the inability to identify nursing homes under common ownership. HCFA lacks a system that integrates federal and state enforcement information to help ensure that homes receive appropriate regulatory attention. Such a system would track key information about steps taken by HCFA offices and the states, such as verification that deficiencies were corrected or sanctions imposed. Although HCFA’s Online Survey, Certification, and Reporting (OSCAR) system was developed for this purpose, we learned that the system’s information was incomplete and inaccurate because states and HCFA have not consistently entered data into OSCAR.
We found that the HCFA regions and states that we visited maintain and use their own systems, not OSCAR, to monitor enforcement actions. At the time of our initial inquiry, HCFA’s regional systems ranged from manual paper-based systems to complex computerized programs, and none of the four states’ tracking systems was compatible with OSCAR or the regional systems. This lack of management information makes it difficult for HCFA’s central office to coordinate and oversee the actions of its 10 regional offices, which are responsible for working with the states to administer the enforcement system. For example, officials in HCFA’s central office were not aware that regions were frequently late in imposing the sanction of denial of payment for new admissions on nursing homes out of compliance for 3 months—a sanction mandated under HCFA regulation. The four HCFA regional offices we visited often missed the time frame and sometimes did not impose the sanction at all. Of the 241 enforcement actions we reviewed, 85 involved situations where payment for new admissions was not stopped, even though homes had been out of compliance for more than 3 months. In 61 of the 85 cases, the regional office imposed denial of payment an average of 24 days after the deadline. In the remaining 24 cases, the region never denied payments at all, despite these homes being out of compliance for an average of 156 days. When we discussed this problem with responsible HCFA headquarters staff, they were unaware of the extent of this problem. If HCFA’s central office lacks adequate management information on the activities of its regional offices, it will be unable to monitor whether they are properly carrying out HCFA’s initiatives. A second area in which HCFA lacks adequate information is the results of complaint surveys. 
HCFA does not require states to cite violations of federal standards if the deficiencies were found during complaint surveys or to ensure that if such deficiencies are cited, they are reported to HCFA. One of the four states we reviewed based its decisions to refer homes to HCFA for sanctions solely on the results of recertification surveys. California did not report the results of complaint investigations to HCFA; instead it chose to deal with the homes under the state’s licensing authority. These practices leave HCFA without full information about nursing homes’ compliance status with Medicare and Medicaid standards. In September 1998, HCFA modified its guidance to states to stipulate that any federal deficiencies cited during complaint investigations must be used in determining if a nursing home is a “poorly performing facility.” The situation in California exemplifies how this lack of information limited HCFA’s ability to get a full picture of a home’s compliance with Medicare and Medicaid standards. California surveyors usually do not cite federal deficiencies when they find violations in complaint investigations. As a result, California does not recommend, and HCFA has no basis to impose, federal sanctions on deficient nursing homes resulting from complaint investigations. In many instances, substantiated complaint investigations disclosed severe deficiencies that were not part of the record referred to HCFA. For example, one home had 61 complaints between September 1995 and July 1998. State investigators substantiated violations in 30 of these complaints, some of which resulted in actual harm and placed residents in immediate danger, such as abuse of a resident by a staff member and failure to prevent or treat pressure sores. The state agency levied fines totaling $80,000 under its licensing authority but did not cite any federal deficiencies, although many of its findings clearly violated Medicare and Medicaid standards. 
The home’s surveys did not document major problems. As a result, HCFA remained unaware of this home’s compliance problems. The third weakness with HCFA’s management information is the lack of data about homes with common ownership that are having severe compliance problems. Chain-owned nursing homes, a significant and growing segment of the nursing home industry, often cross state and regional boundaries. Effective oversight requires an information system that will be able to identify which chains have experienced severe compliance problems. However, HCFA tracks enforcement actions by individual facility provider number only. Consequently, regulators considering enforcement actions against a chain provider in one part of the state or country cannot easily determine the extent to which the problems they have identified are reflective of a broader pattern within the chain. To illustrate the impact of this lack of ownership information, we identified a chain provider and linked the records on the provider from three available sources: HCFA, states, and fiscal intermediaries. The linking showed that the chain provider had a disproportionate number of enforcement actions relative to other homes in the same states. In Texas, the provider owned about 11 percent of the nursing homes but accounted for over 18 percent of the state’s enforcement actions, including 25 percent of the state’s immediate jeopardy cases and 25 percent of the poorly performing facilities. In Michigan, where the chain owned eight facilities, six of the eight had a total of 27 separate enforcement actions. Despite multiple enforcement actions against these homes, Michigan and HCFA regional officials were unaware that the Michigan homes had a common owner or of the problem history of the owner’s facilities in Texas. When we discussed this finding with HCFA officials, they noted that this example clearly demonstrated the need for information on common ownership. 
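The record-linking exercise described above amounts to joining facility-level enforcement data to an ownership table so that actions can be rolled up by chain. The following is a minimal sketch under assumed data: the provider numbers, chain name, and counts are illustrative, not the actual records we linked.

```python
import pandas as pd

# Illustrative facility records keyed by provider number (HCFA tracks
# enforcement by this number only, with no ownership field).
facilities = pd.DataFrame({
    "provider_number": ["TX001", "TX002", "TX003", "MI001"],
    "state": ["TX", "TX", "TX", "MI"],
    "enforcement_actions": [3, 0, 1, 4],
})

# Ownership information assembled from other sources (e.g., state and
# fiscal-intermediary records); homes not in any chain are simply absent.
ownership = pd.DataFrame({
    "provider_number": ["TX001", "TX003", "MI001"],
    "chain": ["Chain A", "Chain A", "Chain A"],
})

# Left join keeps every facility; chain is NaN for independently owned homes.
linked = facilities.merge(ownership, on="provider_number", how="left")

# Per state: the chain's share of homes vs. its share of enforcement actions.
shares = {}
for state, grp in linked.groupby("state"):
    chain_grp = grp[grp["chain"] == "Chain A"]
    shares[state] = (
        len(chain_grp) / len(grp),
        chain_grp["enforcement_actions"].sum() / grp["enforcement_actions"].sum(),
    )
```

A disproportion like the Texas figures in the report shows up here as a chain whose share of enforcement actions exceeds its share of homes in the state.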
The inability to identify and track homes by chain could pose an immediate limitation on HCFA’s recent initiative to direct more enforcement efforts toward nursing home chains. To be successful in this initiative, HCFA needs to ensure that it can identify and track homes with common ownership. Despite reforms to ensure that nursing homes maintain compliance with federal quality standards, one-fourth of all homes nationwide continue to be cited for deficiencies that either caused actual harm to residents or carried the potential for death or serious injury. This pattern has not changed since the July 1995 reforms were implemented. Although the reforms equipped federal and state regulators with many alternatives and tools to help promote sustained compliance with Medicare and Medicaid standards, the way in which states and HCFA have applied them appears to have resulted in little headway against the pattern of serious and repeated noncompliance. Such performance may do little to dispel concerns over the health and safety of frail and dependent nursing home residents. The enforcement system we observed still sends signals to noncompliant nursing homes that a pattern of repeated noncompliance carries few consequences. HCFA’s recent actions, such as broadening the definition of a “poorly performing facility,” are a step in the right direction. However, four key problems we identified remain in need of attention. First, if the backlog of civil monetary penalties is not reduced, much of the deterrent effect of this sanction will continue to be lost. Second, weaknesses remain in the deterrent effect of termination, including the lack of a tie to “poorly performing facility” status for reinstated homes and the limited “reasonable assurance period” for monitoring terminated homes before reinstating them. Third, under HCFA guidance, states are not required to refer for sanction all homes with deficiencies that contribute to resident deaths. 
And finally, the changes do not address the need for HCFA to improve its management information system. HCFA’s ability to improve its oversight of nursing homes will depend heavily on whether it has the information to identify and monitor those homes that pose the greatest risk of harm. To strengthen its ability to ensure that nursing homes maintain compliance with Medicare and Medicaid quality-of-care standards, we recommend that the Administrator of HCFA take the following actions:

Improve the effectiveness of civil monetary penalties. The Administrator should continue to take those steps necessary to shorten the delay in adjudicating appeals, including monitoring progress made in reducing the backlog of appeals.

Strengthen the use and effect of termination. The Administrator should (1) continue Medicare and Medicaid payments beyond the termination date only if the home and state Medicaid agency are making reasonable efforts to transfer residents to other homes or alternate modes of care, (2) ensure that reasonable assurance periods associated with reinstating terminated homes are of sufficient duration to effectively demonstrate that the reason for termination has been resolved and will not recur, and (3) revise existing policies so that the pre-termination history of a home is considered in taking a subsequent enforcement action.

Improve the referral process. The Administrator should revise HCFA guidance so that states refer homes to HCFA for possible sanction (such as civil monetary penalties) if they have been cited for a deficiency that contributed to a resident’s death.

Develop better management information systems. The Administrator should enhance OSCAR or develop some other information system that can be used both by the states and by HCFA to integrate the results of complaint investigations, track the status and history of deficiencies, and monitor enforcement actions. 
We obtained comments on our draft report from HCFA and the four states that we visited. HCFA, California, Michigan, and Pennsylvania commented in writing (see app. II through app. V); Texas provided oral comments. In general, HCFA and the states concurred with our findings and recommendations and cited steps being taken to strengthen enforcement of Medicare and Medicaid requirements. They also suggested technical changes, which we included in the report where appropriate. HCFA commented that our findings underscore the need for the agency’s recent initiatives and will help sharpen the focus on areas that still need to be addressed. In its response (see app. II), HCFA generally agreed with our four recommendations and cited specific steps that it was planning to take to address them. HCFA concurred with our recommendation to shorten the delay in adjudicating appeals but also noted that it does not oversee the department’s appeals board. HCFA pointed out that the President’s fiscal year 2000 budget includes funds to double the number of administrative law judges that hear appeals for the board. We recognize that HCFA does not have administrative oversight of appeals board activities, but it does have the key role in monitoring and evaluating the effectiveness of civil monetary penalties as a sanction. Our recommendation was made with this latter role in mind. Regarding our recommendation for a better management information system, HCFA stated that a major system redesign is being undertaken. HCFA stated that the redesign was a long-term project but that it had plans for interim steps to make the existing system more useful to both state and HCFA offices. Also, concerning our recommendation to improve its referral process, HCFA indicated that it would reiterate to the states the need to use civil monetary penalties in serious cases of past noncompliance. 
HCFA also concurred with two specific steps that we recommended to strengthen termination as a sanction but did not concur with the third—using a longer reasonable assurance period before reinstating the home. HCFA pointed out that a long reasonable assurance period would not be appropriate if the home were terminated because it ran out of time correcting a minor deficiency that was corrected shortly after termination. This recommendation was based on evidence that a short reasonable assurance period appears to be given without attention to a home’s past performance. For example, four of the six reinstated homes in our sample were given reasonable assurance periods of 30 days or less. Most had repeated and serious deficiencies—those causing actual harm to patients. Our earlier work in California also showed that reinstated homes were often cited soon after reinstatement with new deficiencies that harmed residents. The intent of this recommendation is to help accomplish the stated purpose of the reasonable assurance provision—that there be some assurance that the cause for termination has been removed and will not recur. In response to HCFA’s comment, we revised the recommendation to clarify this intent. While in agreement with our recommendations, California’s comments recommended additional steps, such as enhanced funding to the states, that would help strengthen nursing home oversight (see app. III). Michigan’s comments largely focused on the implementation of initiatives taken in 1998 to correct problems that we discuss in the report. Michigan particularly highlighted its resident protection initiative, designed to monitor facility corrective action and performance both before and after the state determines the facility has achieved substantial compliance. It emphasizes such sanctions as directed plans of correction and state monitoring—steps the homes must pay for themselves. 
We were aware of this initiative, which had become operational shortly before our visit in June 1998, and have revised the report where appropriate to reflect this initiative. However, data on its effectiveness in creating incentives for homes to maintain compliance with the standards were not available at the time we conducted our work. The results of future surveys will be needed to assess the initiative’s success. We also provided a copy of the report for review by the American Health Care Association (AHCA) and the American Association of Homes and Services for the Aging (AAHSA). AHCA officials expressed agreement with the report’s recommendations. They did express concern, however, about our sample size and methodology for selecting homes for detailed review. In selecting 74 homes that states had referred to HCFA for enforcement action, we focused on homes with serious and often repeat deficiencies. Our rationale in selecting these homes was that if we found that such homes had been effectively dealt with, there might be some assurance that the system was at least addressing the worst problems. However, we did not find that the enforcement process was working as effectively as it should, even for these homes. Both AHCA and AAHSA also pointed out that deficiencies cited as actual harm (level G) on HCFA’s scope and severity grid may represent broad variation in seriousness and, by definition, refer to isolated situations that affect one or a very limited number of residents, with some citations appearing to be less serious than others. We acknowledge that there may be variation in the seriousness of actual harm violations but also found in the course of our work that a G-level citation most often involved serious resident care issues and at times did affect more than one resident. Copies of this report are also being sent to the Administrator of HCFA and other interested parties. 
If you or your staff have any questions about this report, please contact me or Kathryn Allen, Associate Director, at (202) 512-7114. This report was prepared by Margaret Buddeke, Peter Schmidt, Terry Saiki, Stan Stenersen, and Evan Stoll under the direction of Frank Pasquier. To determine the extent to which nursing homes maintain compliance with federal standards, we analyzed HCFA’s nationwide database of nursing home inspections—the Online Survey, Certification, and Reporting (OSCAR) system. This data system records the results of states’ recertification surveys in standard format. The format changed to recognize the deficiency scope and severity classifications made effective by the July 1995 final enforcement regulations. As a result, analysis of the scope and severity of nursing home deficiencies is inherently limited to periods after July 1995. Accordingly, the period of our analysis included surveys done from July 1995 through October 1998. We restricted our analysis to the 187 nursing home requirements for participation in Medicare and Medicaid categorized as related to patient care. Therefore, our analysis did not include data on compliance with safety code standards, such as fire protection and physical plant requirements. In addition to using these data to analyze the extent to which homes comply with the standards, we used the data to determine the most frequently occurring deficiencies and their relative severity. In order to compare nursing homes’ performance in achieving and maintaining compliance over time, we used OSCAR data to identify the earliest recertification survey performed after the regulations became effective and the homes’ most current surveys. To do this, we used data from a facility’s first survey during the period July 1, 1995, to December 31, 1996, which became part of the “base” period. Data from the latest survey since January 1, 1997, became part of the “current” period. 
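To make the base/current comparison concrete, the selection logic described above can be sketched in a few lines of pandas. This is a minimal illustration under assumed data, not GAO's actual analysis code: the column names (facility_id, survey_date, worst_severity) and the sample records are hypothetical stand-ins for OSCAR fields.

```python
import pandas as pd

# Hypothetical survey records; worst_severity uses HCFA's scope/severity
# letters, where G or higher denotes actual harm or worse.
surveys = pd.DataFrame({
    "facility_id": [1, 1, 1, 2, 2],
    "survey_date": pd.to_datetime(
        ["1995-09-01", "1996-11-15", "1998-03-01", "1996-02-01", "1997-06-01"]),
    "worst_severity": ["G", "D", "G", "H", "B"],
})

# Base period: a facility's FIRST survey between July 1, 1995, and
# December 31, 1996.
base_window = surveys[
    (surveys["survey_date"] >= "1995-07-01")
    & (surveys["survey_date"] <= "1996-12-31")
]
base = base_window.sort_values("survey_date").groupby("facility_id").first()

# Current period: the LATEST survey since January 1, 1997; any intervening
# surveys are dropped, as described above.
current_window = surveys[surveys["survey_date"] >= "1997-01-01"]
current = current_window.sort_values("survey_date").groupby("facility_id").last()

# Homes cited at level G or higher in both the base and current surveys,
# mirroring the report's repeat-noncompliance comparison.
serious = set("GHIJKL")
repeat = base.index[
    base["worst_severity"].isin(serious)
    & current.reindex(base.index)["worst_severity"].isin(serious)
]
```

Under this sketch, facility 1 (level G in both periods) would count toward the roughly 40 percent of homes that had serious problems in both their base and current surveys.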
For some nursing homes, there was an intervening survey, but we did not use data from these surveys. Although we did not thoroughly assess the reliability of the OSCAR database, HCFA officials as well as private researchers who work with the database generally recognize the data as reliable for purposes of analyzing findings of nursing home recertification surveys. Even though the data are considered reliable for recertification deficiencies reported by the states, the extent to which they provide a consistent measure of the quality of care across states is unknown. In addition, OSCAR data contain omissions that likely understate the extent of deficiencies found during other surveys by state inspectors. For example, in California, serious violations found during complaint investigations conducted by state inspectors were not routinely shown in OSCAR and appear to be understated in national data as well. To determine the extent to which the new sanctions contribute to nursing homes’ sustained compliance, we could not perform a similar nationwide analysis using OSCAR. We found that OSCAR does not contain complete or reliable data on enforcement actions, such as the extent to which sanctions are imposed, and no other system exists that provides such nationwide data. For this reason, we relied on enforcement monitoring databases from the four HCFA regional offices we visited. Thus, to obtain information about the effectiveness of sanctions in deterring future noncompliance, we had to gather available data on enforcement actions from states and HCFA’s regional offices. In general, we used a two-step process. First, we looked at the extent to which states were referring cases of noncompliance to HCFA for enforcement sanctions. Second, we reviewed a sample of cases where states had recommended to HCFA that sanctions be imposed. 
We selected 4 of HCFA’s 10 regional offices—Philadelphia (region III), Chicago (region V), Dallas (region VI), and San Francisco (region IX)—for further review. We selected these four regions because they are geographically dispersed and contain about 55 percent of the nation’s nursing homes. Within each region, we selected one state—Pennsylvania, Michigan, Texas, and California, respectively—in which to gather additional information on specific providers and chains. We selected these four states because they had substantial numbers of nursing homes that accounted for about 23 percent of the nation’s nursing homes. At the states, we reviewed procedures for referring cases to HCFA; discussed these procedures with each state’s ombudsman; and where appropriate, reviewed selected case files to obtain a better understanding of procedures in place. At each of the four HCFA regional offices, we used HCFA regional enforcement records to identify nursing homes that had scope and severity designations of G or higher for which the state survey agencies had forwarded to HCFA survey files with recommendations for sanctions. From these records, we selected a sample of enforcement cases to review. The sample was not designed to be representative of the universe of enforcement actions. Rather, it was designed to give us a sufficient number of cases where different types of sanctions, including termination, were possible. We then reviewed these case files with an eye toward determining the implemented sanction’s strength or weakness as a deterrent to future noncompliance. Accordingly, we focused the sample on nursing homes, including known chain providers that had multiple referrals by state agencies to HCFA for enforcement or had been terminated. In all, we selected 74 separate nursing home providers. These providers accounted for 241 enforcement actions between July 1995 and October 1998 (see table I.1). 
These enforcement actions consisted of both recertification surveys and other abbreviated surveys (follow-up or complaint) where the state had referred cases to the HCFA regional office for sanctions. To determine the extent to which HHS’ actions were sufficient to ensure sanctions were applied in a timely and effective manner, we reviewed the actions announced by HCFA from July through November 1998 that concerned enforcement of nursing home standards. As such, proposed changes to the nursing home survey and certification process were outside the scope of our review. We also reviewed the extent to which adequate management information systems existed to support and oversee HCFA’s revised initiatives to strengthen its enforcement process. This included an examination of record formats in OSCAR, HCFA’s regional office tracking system, and state nursing home compliance systems. We also reviewed HCFA regulations, policies, and guidance; interviewed officials in HCFA’s headquarters and regional offices; and interviewed state survey agency officials. We also interviewed representatives from industry groups and advocacy groups and academic researchers. Our Office of the General Counsel, in consultation with HCFA attorneys, provided legal guidance on our interpretation of relevant OBRA 87 provisions. 
Pursuant to a congressional request, GAO provided information on the enforcement of federal nursing home standards, focusing on: (1) national data on the existence of serious deficiencies in nursing home compliance with Medicare and Medicaid standards; and (2) the use of sanction authority for homes that failed to maintain compliance with the standards. GAO noted that: (1) GAO's work showed that while the Health Care Financing Administration (HCFA) has taken steps to improve oversight of nursing home care, it has not yet realized a main goal of its enforcement process--to help ensure that homes maintain compliance with federal health care standards; (2) surveys conducted in the nation's 17,000-plus nursing homes in recent years showed that each year, more than one-fourth of the homes had deficiencies that caused actual harm to residents or placed them at risk of death or serious injury; (3) the most frequent violations causing actual harm included inadequate prevention of pressure sores, failure to prevent accidents, and failure to assess residents' needs and provide appropriate care; (4) although most homes were found to have corrected the identified deficiencies, subsequent surveys showed that problems often returned; (5) about 40 percent of the homes that had such problems in their first survey during the period GAO examined (July 1995 to October 1998) had them again in their last survey during the period; (6) sanctions initiated by HCFA against noncompliant nursing homes were never implemented in a majority of cases and generally did not ensure that the homes maintained compliance with standards; (7) GAO's review of HCFA's survey data combined with GAO's analysis of 74 homes that had a history of problems showed a common pattern; (8) HCFA would give notice to impose a sanction, the home would correct its deficiencies, HCFA would rescind the sanction, and a subsequent survey would find that problems had returned; (9) the threat of sanctions appeared to have 
little effect on deterring homes from falling out of compliance again because homes could continue to avoid sanctions' effect as long as they kept correcting their deficiencies; (10) HCFA has some tools to address this cycle of repeated noncompliance but has not used them effectively; (11) fines are potentially a strong deterrent because they can be applied even if a home comes back into compliance; (12) however, the usefulness of civil monetary penalties is being hampered by a backlog of administrative appeals coupled with a legal provision that prohibits collection of the penalty until the appeal is resolved; (13) the sanction is often delayed for several years; and (14) GAO also found problems with several aspects of HCFA's policies for ensuring that sufficient attention is placed on homes that have serious deficiencies or a history of recurring noncompliance as well as policies for reinstating homes that have been terminated from the Medicare and Medicaid programs.
In the 1990s, the Coast Guard began an initial effort to modernize its aging assets so that it could continue to meet mission demands. After the September 11, 2001 terrorist attacks, the Coast Guard became a component of the newly established Department of Homeland Security (DHS), which resulted in an increase in mission demands. In order to meet this increase, the Coast Guard completed a Mission Needs Statement—the document that describes the mission(s) and needed capabilities to justify a given program—in 2005. The 2005 Mission Needs Statement compared the new assets the Coast Guard had originally planned—in 1996, prior to the creation of DHS—to procure as replacements for its legacy assets against the demands of the new missions laid out by the recently formed DHS. Based on the 2005 Mission Needs Statement, DHS approved a program of record in 2007—known as the Deepwater program—that provided the additional capability required. This effort was expected to last 25 years at a cost of $24.2 billion, resulting in the rebuilding or replacement of vessels and aircraft that were reaching the end of their expected service lives and were in deteriorating condition. Figure 1 shows some of the Coast Guard’s newer assets that are part of this broader modernization effort. In 2016, the Coast Guard revised its Mission Needs Statement in response to statutory requirements and committee report language, but this revision states it was not intended to provide details on the specific assets the Coast Guard needs to meet its mission requirements. Further, according to the Coast Guard, the 2016 update to the Mission Needs Statement is intended to provide a foundation for long-term investment planning, culminating in detailed modeling scenarios to evaluate the effectiveness of various fleet mixes and inform the Coast Guard’s Capital Investment Plan. 
The 2016 revision, however, does not identify specific assets or fiscal resources necessary to meet the Coast Guard’s long-term mission requirements, as we had recommended in June 2014. Unlike the 2005 Mission Needs Statement, the 2016 version did not result in a new program of record for the Coast Guard’s recapitalization effort. However, since the original program of record in 2007, the Coast Guard’s recapitalization program has undergone changes as major acquisition programs have been completed and/or modified in response to affordability concerns. Figure 2 depicts the Coast Guard’s 2007 recapitalization program of record and the current 2017 program of record. The Coast Guard is currently procuring three new cutter classes that will have more capability than the legacy assets they are intended to replace. The FRC will replace the legacy Island Class Patrol Boat, the OPC will replace both classes of the legacy Medium Endurance Cutter (210-foot class and 270-foot class), and the NSC will replace the legacy High Endurance Cutter. As we reported in June 2014, several of the Coast Guard’s newest asset classes are generally demonstrating improved mission performance compared to the assets they are replacing, according to Coast Guard officials who operate these assets. Specifically, the FRC and NSC have greater fuel capacity and efficiency, engine room and boat launch automation, handling/sea-keeping, and food capacity, all of which increase endurance and effectiveness. In addition, the FRC and NSC both have a stern ramp that allows them to launch and recover the cutters’ small boats more safely and in a fraction of the time that the Island Class Patrol Boats and High Endurance Cutters require, which allows the cutters to more efficiently and effectively conduct missions. The OPC is also expected to provide increased capabilities compared to the Medium Endurance Cutter it is replacing. 
Table 1 provides comparison information on selected Coast Guard legacy and new surface assets. The Coast Guard commissioned its first FRC in 2012 and, as of April 2017, has received 23 of these vessels. The Coast Guard exercised a contract option for detail design for the OPC in September 2016, and there are separate options for the production of each cutter currently under contract. The Coast Guard anticipates receiving the first vessel in fiscal year 2021, with deliveries each year through 2035 when the program is scheduled to achieve full operating capability. Additionally, since 2008, the Coast Guard has received a total of 6 NSCs, with 3 in various stages of construction. Due to its improved capabilities, the NSC has been able to complete longer deployments, which has in part resulted in more successful drug interdictions than the legacy asset it replaces. The Coast Guard is also updating and acquiring new aviation assets that have increased capabilities compared to the legacy assets they are replacing. For example, the fleets of H-65 helicopters are being upgraded to allow for greater reliability, maneuverability, and interoperability between the H-65 and other government assets. In addition, the Coast Guard restructured its HC-144A acquisition program in 2014 to accommodate 14 C-27J aircraft it received from the U.S. Air Force. The Coast Guard plans to use these twin-engine propeller-driven aircraft to conduct all types of Coast Guard missions, including search and rescue and disaster response. As we reported in June 2014, officials at Air Station Miami stated that since they began regularly operating the HC-144A in fiscal year 2011, the aircraft has had a significant role in improving the effectiveness of the Coast Guard’s counterdrug and alien migrant interdiction operations. 
However, the HC-144A fully met only three of its seven key performance parameters during initial operational testing; the Coast Guard plans to conduct additional tests in fiscal year 2017 to demonstrate additional key performance parameters. As we reported in March 2015, the Coast Guard faces several challenges to making the C-27Js operational, including purchasing spare parts and a lack of access to the manufacturer’s technical data that are required to make modifications to the aircraft’s structure to incorporate, among other things, the radar. The Coast Guard is currently in the process of transitioning to a new mission system on all of its fixed-wing aircraft, which is a system currently used by the U.S. Navy and DHS’s Customs and Border Protection. The new mission system is intended to enhance operator interface and sensor management, as well as replace obsolete equipment, which is to enable more commonality within the fixed-wing fleet. The Coast Guard has not been able to take full advantage of increased capabilities of the FRC and NSC due to maintenance issues that have limited their time available for operations. As we reported in March 2017, while over the past few years both the FRC and NSC met their minimum mission capable targets on average, which are 48 percent for the FRC and 49 percent for the NSC, our analysis of a more recent period—from October 2015 to September 2016—found that both cutters fell below their minimum targets due to an increased need for depot-level maintenance. See table 2. According to Coast Guard officials, the FRC’s decrease in monthly mission capable rates below its minimum target is primarily because of a phased warranty repair drydock period that was not initially anticipated. The average warranty repair drydock period will last approximately 15 weeks, with at least one FRC not mission capable due to depot-level maintenance at all times from January 2016 to November 2019. 
These drydocks were triggered by continuing structural concerns and problems with equipment installed during production, including recurring failures of the main diesel engine. Given that only a few FRCs have completed the warranty drydock to date, it is difficult to determine whether the overall fleet’s mission capable rate will meet its target range once the drydocks are completed. As we noted in our March 2017 report, while the FRC’s decrease is attributable to the unanticipated drydocks, the NSC’s mission capable rates are influenced by a roughly 2-year anticipated post-delivery maintenance period, called the post shakedown availability, which is scheduled for each newly delivered NSC. During this shakedown period, the NSC is not mission capable due to depot-level maintenance for a majority of its time. For example, from January 2015 until September 2016, the NSC Hamilton spent 70.9 percent of its time in depot-level maintenance, and the NSC James spent 82.6 percent of its time in depot-level maintenance from September 2015 to September 2016. With only five NSCs in operation as of September 2016, having two cutters spend the majority of their time not mission capable due to depot-level maintenance is negatively affecting the overall fleet’s mission capable rates. This will continue as the Coast Guard introduces new NSCs into the fleet until the last cutter completes its 2-year post shakedown period, expected in 2022 given that the ninth cutter is scheduled for delivery in 2020. While the first three NSCs achieved their mission capable rate targets on average from January 2014 to September 2016, it is uncertain whether the overall fleet mission capable rate will increase once all NSCs complete their post shakedown availabilities. 
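The mission capable arithmetic above is simple but worth making explicit: a fleet-wide rate aggregates over total cutter-days, so a couple of ships in long depot periods can drag the fleet below target even when every other ship exceeds it. The following sketch is purely illustrative; only the Hamilton and James percentages and approximate period lengths come from the report, while the other three cutters and their day counts are invented for illustration.

```python
# Illustrative only: how a fleet mission capable rate aggregates from
# per-cutter days. Hamilton/James percentages are from the report; the
# "cutter_N" entries and all other figures are hypothetical.

def mission_capable_rate(total_days, days_not_capable):
    """Share of the period a cutter was mission capable."""
    return (total_days - days_not_capable) / total_days

# Hamilton: 70.9% of Jan 2015 - Sep 2016 (~639 days) in depot maintenance.
# James:    82.6% of Sep 2015 - Sep 2016 (~366 days) in depot maintenance.
fleet = {
    "cutter_1": (365, 150),               # hypothetical
    "cutter_2": (365, 160),               # hypothetical
    "cutter_3": (365, 170),               # hypothetical
    "Hamilton": (639, round(0.709 * 639)),
    "James":    (366, round(0.826 * 366)),
}

for name, (total_days, down_days) in fleet.items():
    print(f"{name}: {mission_capable_rate(total_days, down_days):.1%}")

# Fleet rate: mission capable cutter-days over total cutter-days.
capable = sum(t - d for t, d in fleet.values())
total = sum(t for t, _ in fleet.values())
print(f"fleet: {capable / total:.1%}")
```

In this hypothetical, each of the three invented cutters clears the 49 percent minimum on its own, yet the aggregate fleet rate falls to roughly 41 percent, mirroring the dynamic the report describes for the NSC fleet.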
In addition to the negative effect that depot-level maintenance is having on both the FRC’s and NSC’s mission capable rates, our March 2017 report found that both cutter classes have been plagued by equipment failures resulting in lost operational days or a partially mission capable status, meaning the cutters either cannot conduct operations or can do so only in a limited capacity. The main diesel engines on both cutters, which were manufactured by the same vendor, were among the equipment systems that resulted in the most lost operational days from 2014 through 2016 and have been problematic since the cutters became operational. Problems with the FRC’s engine resulted in roughly 355 days spent not mission capable due to maintenance. However, the FRC’s warranty clause has covered several engine problems and, according to the FRC’s contracting officer, had saved the Coast Guard about $77 million in potential maintenance costs it otherwise would have had to pay as of August 2016. Furthermore, the FRC’s contracting officer stated that as of October 2016, all 18 operational FRCs have undergone various corrective repairs on their main diesel engines, including engine replacements on 6 of the cutters. Similar to the FRC, the NSC’s engines have experienced problems and, as we found in January 2016, the engines overheat in waters above 74 degrees Fahrenheit, conditions found in a portion of the NSC’s operating area given that the cutters are intended to be deployed worldwide. This overheating can force the cutters to operate 2 to 4 knots below their top speed of 28 knots. As a result, the Coast Guard has had to operate the NSCs at reduced speeds during some missions, such as counterdrug missions, where reaching maximum speed would be operationally useful. 
The NSC’s inability to achieve top speed in warm waters has also inhibited the cutters’ ability to complete their regularly scheduled full power trials, which are periodic tests of the propulsion plant operated at maximum rated power. The results of these tests advise operators and maintenance personnel of the cutter’s full power performance characteristics and can provide the basis for maintenance activity. Without these tests, the Coast Guard lacks information that could be useful for assessing propulsion systems and planning maintenance. Further, as we reported in March 2017, the Coast Guard is making design changes to some critical NSC systems post-delivery in order to minimize the cost increase of the extra work and to adhere to the cutters’ production schedule. One such design change involves the NSC’s gantry crane, which was not designed for a maritime environment and is inadequately sealed to prevent water intrusion. This has led to accelerated corrosion and a need for excessive repairs that are not considered sustainable over the NSC’s life cycle. The design change to replace the gantry crane was initiated in January 2010 and the new crane was approved for fleet-wide replacement. However, all of the remaining NSCs will be built with the original gantry crane installed, which will then be replaced during their post shakedown periods. During the work for our March 2017 report, Coast Guard officials stated that no formal analysis was developed or documented to determine whether a design change should be installed during production or post-delivery. Instead, they used the professional judgment of Coast Guard and shipyard officials to determine the most cost-efficient timing for installing design changes. Keeping the NSC delivery dates on schedule was one of the primary reasons officials gave for not installing some design changes during production. 
Given that the program has been aware of these design changes for many years, the Coast Guard had an opportunity to install the design changes during production instead of during the post-delivery period. We concluded that by not installing the design changes during production, the Coast Guard will need to maintain the original equipment installed during production for all NSCs, including the ninth NSC (the separate production contract for which was awarded in December 2016), and then later conduct retrofits after accepting delivery of the cutters. This will necessitate the installation of systems with known defects or deficiencies during production only to replace such systems later, requiring maintenance on some of these systems until the retrofits are complete. In our March 2017 report, we therefore recommended that the Coast Guard update the Joint Surface Engineering Change Process Guide to require a documented cost analysis to provide decision makers with adequate data to make informed decisions regarding the expected costs and when it is most cost effective to install design changes. The Coast Guard concurred with our recommendation and plans to incorporate a documented cost analysis requirement into an update to its guidance by December 31, 2017. As we found in June 2014, there are gaps between what the Coast Guard estimates it needs to carry out its program of record for its major acquisitions and what it has traditionally requested and received. This issue has continued since we issued our report. For example, senior Coast Guard officials have stated a need for over $2 billion per year, but the President’s budget requested $1.2 billion for fiscal year 2018, after asking for $1.1 billion in fiscal year 2017. 
In an effort to address the funding constraints it has faced annually, the Coast Guard has been in a reactive mode, delaying new acquisitions and reducing capability through the annual budget process, and it does not have a plan that realistically sets forth affordable priorities. For instance, the Coast Guard has experienced delays in many of its programs but, in particular, is facing a gap in the capability provided by its Medium Endurance Cutter fleet, whose cutters will likely begin reaching the end of their service lives before the OPCs are operational. In 2014, Coast Guard, DHS, and Office of Management and Budget officials acknowledged that the Coast Guard could not afford to recapitalize and modernize its assets in accordance with its current plan at current funding levels. While efforts to address this issue have been underway for several years, the Coast Guard has made little progress in improving the affordability of its acquisition portfolio. As a result, the Coast Guard faces significant capability gaps if funding increases do not materialize. Since 2011, we have recommended that DHS and the Coast Guard take several actions to gain an understanding of what the Coast Guard needs to meet its missions within its likely acquisition funding levels. These key actions included (1) conducting a comprehensive portfolio review across all of the Coast Guard’s acquisitions to develop revised baselines that meet mission needs and reflect realistic funding scenarios, and (2) developing a 20-year plan that identifies all necessary recapitalization efforts and the fiscal resources likely necessary to complete them. Following our September 2012 report, Congress asked the Coast Guard to examine its mission needs across its portfolio of assets. 
In 2016, the Coast Guard revised its 2005 Mission Needs Statement, which provides a basic foundation for long-term investment planning, serves as the basis for evaluating the effectiveness of various fleet mixes, and informs the Coast Guard’s Capital Investment Plan—its key portfolio planning tool. However, the 2016 Mission Needs Statement did not identify the specific assets the Coast Guard needs to achieve its missions, nor did it update the annual hours it needs from each asset class to satisfactorily complete its missions. In line with our September 2012 recommendation, the Coast Guard is currently updating its fleet mix analysis to detail the assets it needs to meet requirements, but this analysis is not expected to be finalized until the 2019 President’s budget is submitted. Once completed, this analysis could serve as a foundation for understanding potential trade-offs that could be made across the Coast Guard’s portfolio of acquisitions to better meet mission needs within realistic funding levels. In June 2014, we also recommended that the Coast Guard develop a 20-year fleet modernization plan that identifies all acquisitions necessary for maintaining at least its current level of service and the fiscal resources necessary to build these assets. Such an analysis would facilitate a full understanding of the affordability challenges facing the Coast Guard while it builds the OPC. DHS concurred with the recommendation, but it is unclear when the Coast Guard plans to complete this effort. As we reported in April 2017, the full operational capability date has been delayed for several Coast Guard acquisition programs. For example, the FRC program experienced a delay of more than 4 years because affordability constraints necessitated reducing the quantity of cutters procured annually from a proposed 6 cutters to 4 cutters per year. 
In addition, the Coast Guard delayed the OPC procurement by 14 years from the 2007 program of record in order to develop the requirements for this cutter and conduct a competition, while prioritizing acquisition of the NSC. Figure 3 shows the proposed full operational capability date as of the original 2007 program of record, the first DHS-approved baseline for each program, and the current baseline. As we reported in July 2012, the Coast Guard’s delay in the OPC acquisition has resulted in potential mission capability shortfalls as the condition of the legacy Medium Endurance Cutters further declines. The 210-foot Medium Endurance Cutters—originally built in the 1960s—will be nearly 60 years old by the time they are replaced and have already exceeded their expected service lives. In September 2014, the Coast Guard conducted refurbishment work on the Medium Endurance Cutters (both the 210-foot and 270-foot) that could provide an additional 5, 10, or 15 years of service. However, senior Coast Guard officials responsible for these efforts at the time indicated that the estimate of up to 15 years was optimistic and that the refurbishment provided needed upgrades to the Medium Endurance Cutters but was not designed to further extend the cutters’ useful lives. As depicted in figure 4, even with the most optimistic projection for the current extended useful life of the Medium Endurance Cutters, we found as of May 2017 that there would be a gap before the planned OPCs are operational, which the Coast Guard does not expect to occur until at least 2022. As we reported in June 2014 and, more recently, in our April 2017 assessment of DHS major acquisition programs, the Coast Guard faces affordability challenges that could result in additional capability gaps. The upcoming OPC procurement, with planned acquisition costs of $12.1 billion—making it the largest Coast Guard acquisition program to date—will create additional strain on the Coast Guard’s acquisition budget. 
According to the Coast Guard, the OPC is its top priority and, as such, will be funded before other assets, such as the river buoy tenders and helicopters. However, if the Coast Guard’s acquisition budget remains at its current levels, the funding remaining for other assets will be very limited. Beginning in September 2018, the OPC will absorb about two-thirds of the Coast Guard’s annual acquisition funding until 2032, based on recent funding history. The Coast Guard initially plans to fund one OPC per year and eventually two OPCs per year until all 25 planned cutters are delivered. If the OPC experiences cost growth during development, the acquisition funding available for other programs could be reduced if the program attempts to meet its current delivery schedule, or the funding constraints could be prolonged if the delivery schedule for the OPC is extended. Any remaining Coast Guard acquisition programs will have to compete for acquisition funds not used for the OPC. For instance, the Coast Guard must also recapitalize other assets, such as the polar icebreakers—to alleviate a current capability gap—and refurbish other legacy vessels, such as its fleet of river buoy tenders, as these assets continue to age beyond their expected service lives and, in some cases, have been removed from service without a replacement. The following are some examples that we identified in our June 2014 report of Coast Guard assets that will likely require some level of funding while the OPC is in development: Icebreakers—The Coast Guard currently has a gap in its heavy icebreaking capability and was previously without any heavy polar icebreakers when the legacy vessels were in disrepair from 2010 to 2013. In 2014, the Coast Guard returned one of these heavy icebreakers to service, but it still has one fewer heavy icebreaker than it has historically operated and two fewer than it needs, according to the Coast Guard’s June 2013 heavy icebreaker mission need statement. 
The 2017 President’s budget requested $147.6 million to begin funding the first heavy icebreaker, which has a preliminary cost estimate of about $1 billion. The Coast Guard’s preliminary estimates indicate that the first new heavy icebreaker could be available for operations in fiscal year 2023. River Buoy Tenders—The Coast Guard fleet of river buoy tenders was mostly constructed between the 1950s and the 1970s and is in need of replacement. The Coast Guard plans to initiate a program to begin development and construction of new vessels to replace the legacy assets; however, no date has been set for when this effort will begin. Service Life Extension for the 270-foot Medium Endurance Cutters—The Coast Guard plans to conduct a service life extension on the 270-foot Medium Endurance Cutters to help keep the cutters operational until the OPCs are delivered. Coast Guard officials said they have no plans to conduct service life extension work on the 210-foot Medium Endurance Cutters. H-60 and H-65 Helicopter Fleets—The Coast Guard is planning to conduct a service life extension of both the H-60 and H-65 fleets. Extending these aircraft into the mid-2030s will enable the Coast Guard to potentially complete the OPC acquisition before starting a recapitalization effort for its rotary-wing fleet. Regardless of the future path, significant acquisition dollars will be required to maintain annual flight hours for the next 20 years, according to Coast Guard program officials. While the Coast Guard faces affordability challenges with these programs, it has also taken steps to mitigate affordability challenges in other programs. For example, the 2007 program of record planned to acquire 45 unmanned aircraft systems at a total cost of $503 million. However, the Coast Guard truncated this program and now plans to outfit the NSC fleet with six unmanned aircraft systems for $104 million. 
The Coast Guard is currently demonstrating a small unmanned aircraft system on the NSC and, according to officials, plans to issue a request for proposals from industry later this year to outfit the rest of the NSC fleet. In conclusion, as the Coast Guard continues to field new cutters and aircraft and refurbish existing ones with improved capabilities, it is important that the Coast Guard plan for the affordability of its future portfolio so that it can minimize the capability gaps that can occur as legacy assets reach the end of their service lives before new assets become operational. We have made several recommendations in recent years intended to help the Coast Guard plan for these future acquisitions and the difficult trade-off decisions that it will likely face. If the Coast Guard fully implements these recommendations, it will be better positioned to provide decision makers with the critical knowledge needed to prioritize its constrained acquisition funding. Without these efforts, the Coast Guard will continue, as it has in recent years, to plan its future acquisitions through the annual budgeting process, which has led to delayed and reduced capabilities. A thorough plan regarding the affordability of its future acquisitions would provide timely information to decision makers on how to spend scarce taxpayer dollars in support of a modern, capable Coast Guard fleet. Chairman Hunter, Ranking Member Garamendi, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions. If you or your staff have any questions about this statement, please contact Marie A. Mak, (202) 512-4841 or [email protected]. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony include Richard A. Cederholm, Assistant Director; Peter W. 
Anderson; Erin Butkowski; John Crawford; Laurier Fish; and Roxanna T. Sun. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In order to meet its missions of maritime safety, security, and environmental stewardship, the Coast Guard, a component within the Department of Homeland Security (DHS), employs a variety of surface and air assets, several of which are approaching the end of their intended service lives. As part of its efforts to modernize its surface and air assets (an effort known as recapitalization), the Coast Guard has begun acquiring new vessels, such as the National Security Cutter and Fast Response Cutter, as well as a number of air assets, and is developing the Offshore Patrol Cutter. Despite the addition of new assets, concerns about capability and affordability gaps remain. This statement addresses (1) the capabilities provided by the newer Coast Guard assets, (2) maintainability and equipment challenges for the new cutters, and (3) the overall affordability of the Coast Guard's acquisition portfolio. This statement is based on GAO's extensive body of work examining the Coast Guard's acquisition efforts spanning several years, including the March 2017 report on the NSC's and FRC's maintainability. The Coast Guard is currently procuring three new cutter classes that are intended to have more capability than the legacy assets they are replacing. In particular, the National Security Cutter (NSC) and the Fast Response Cutter (FRC) are generally demonstrating improved mission performance (see figure). Both cutters have greater fuel capacity, better fuel efficiency, and improved handling and sea-keeping compared with the legacy assets they replace, all of which increase endurance and effectiveness. Another new asset—the Offshore Patrol Cutter (OPC)—is also expected to provide increased capabilities compared to the Medium Endurance Cutter it is replacing, such as the ability to conduct longer patrols. The Coast Guard, however, has not been able to take full advantage of the FRC's and NSC's capabilities because of maintenance and equipment issues limiting their time available for operations. 
GAO found in March 2017 that while both cutters met their minimum mission capable targets on average over the long term, more recently—from October 2015 to September 2016—they fell below their minimum targets due to increased depot-level maintenance needs. Both cutters have also been plagued by problems with critical equipment, such as the diesel engines, which have contributed to lost operational days. In June 2014, GAO found gaps between the funding amounts the Coast Guard estimates its major acquisitions need and what it has requested. This situation has continued. For example, senior Coast Guard officials estimate acquisition needs at over $2 billion per year, but the President's budget requested $1.2 billion for fiscal year 2018. In an effort to address funding constraints, the Coast Guard has delayed new acquisitions through the annual budget process, but it lacks a long-term plan that sets forth affordable priorities. As a result, it is facing a gap in the capability provided by its Medium Endurance Cutters, which are slated to reach the end of their service lives before all the OPCs are operational. GAO recommended in 2014 that the Coast Guard develop a 20-year fleet modernization plan that identifies all acquisitions needed to maintain the current level of service—aviation and surface—and the fiscal resources needed to buy the identified assets. DHS concurred with the recommendation, but it is unclear when the Coast Guard will complete this effort. GAO is not making new recommendations in this statement but has made recommendations to the Coast Guard and DHS in the past regarding recapitalization and the specific assets involved, including the 20-year fleet modernization plan recommendation described above, with which DHS agreed.
GPRAMA requires OMB to coordinate with agencies to develop long-term, outcome-oriented federal government priority goals for a limited number of crosscutting policy areas and management improvement areas every 4 years. Furthermore, with the submission of the fiscal year 2013 budget, GPRAMA required OMB to identify a set of interim priority goals. The President's 2013 budget submission included a list of 14 interim CAP goals, 9 of which were related to crosscutting policy areas and 5 of which were management improvement goals. The CAP Goal Leader. As required by GPRAMA, each of the interim CAP goals had a goal leader responsible for coordinating efforts to achieve the goal. CAP goal leaders were given flexibility in how to manage these efforts and were encouraged by OMB to engage officials from contributing agencies by leveraging existing interagency working groups, policy committees, and councils. For information on the position of the goal leader and the interagency groups used to engage officials from agencies contributing to each interim CAP goal, see figure 1. For more information on the interagency groups used to engage agency officials in efforts related to each goal, see appendix III. According to OMB and PIC staff, because CAP goal leaders were responsible for managing efforts related to the achievement of the goals as part of a larger portfolio of responsibilities, staff from the PIC, OMB, and—in some cases—from agencies with project management responsibilities provided additional capacity for coordinating interagency efforts and overseeing the process of collecting, analyzing, and reporting data. Specifically, PIC staff provided logistical support, assisting with the regular collection of data, updates to Performance.gov, and the development of CAP goal governance structures and working groups. They also provided support in the area of performance measurement and analysis. 
For example, PIC staff supported the Exports goal leader by informing discussions of how to measure the success and impact of export promotion efforts, providing expertise in the development and selection of appropriate performance measures, and assisting in the collection and analysis of relevant data. Progress Reviews. GPRAMA also requires that the Director of OMB, with the support of the PIC, review progress toward each CAP goal with the appropriate lead government official at least quarterly. Specifically, the law requires that these reviews include progress during the most recent quarter, overall trends, and the likelihood of meeting the planned level of performance. As part of these reviews, OMB is to assess whether relevant agencies, organizations, program activities, regulations, tax expenditures, policies, and other activities are contributing as planned to the goal. The law also requires that OMB categorize the goals by risk of not achieving the planned level of performance and, for those at greatest risk, identify strategies for performance improvement. In an earlier evaluation of the implementation of quarterly performance reviews at the agency level, we found that regular, in-person review meetings provide a critical opportunity for leaders to use current data and information to analyze performance, provide feedback to managers and staff, follow up on previous decisions or commitments, learn from efforts to improve performance, and identify and solve performance problems. As part of this work, we also identified nine leading practices that can be used to promote successful performance reviews at the federal level. To identify these practices, we conducted a review of relevant academic and policy literature, including our previous reports. 
We refined these practices with additional information obtained from practitioners at the local, state, and federal levels who shared their experiences and lessons learned.

Nine Leading Practices That Can Be Used to Promote Successful Performance Reviews
1. Leaders use data-driven reviews as a leadership strategy to drive performance improvement.
2. Key players attend reviews to facilitate problem solving.
3. Reviews ensure alignment between goals, program activities, and resources.
4. Leaders hold managers accountable for diagnosing performance problems and identifying strategies for improvement.
5. There is capacity to collect accurate, useful, and timely performance data.
6. Staff have skills to analyze and clearly communicate complex data for decision making.
7. Rigorous preparations enable meaningful performance discussions.
8. Reviews are conducted on a frequent and regularly scheduled basis.
9. Participants engage in rigorous and sustained follow-up on issues identified during reviews.

Reporting Requirements. In addition to requiring quarterly reviews, GPRAMA requires that OMB make information available on “a single website” (now known as Performance.gov) for each CAP goal on the results achieved during the most recent quarter, along with overall trend data compared to the planned level of performance. In addition, information on Performance.gov is to include an assessment of whether relevant federal organizations, programs, and activities are contributing as planned, and, for those CAP goals at risk of not achieving the planned level of performance, information on strategies for performance improvement. New CAP Goals. As required by GPRAMA, in March 2014, OMB announced the creation of a new set of CAP goals in the fiscal year 2015 budget. It then identified 15 CAP goals with 4-year time frames on Performance.gov—7 mission-oriented goals and 8 management-focused goals. 
Five goal areas—Cybersecurity; Open Data; Science, Technology, Engineering, and Mathematics (STEM) Education; Strategic Sourcing; and Sustainability (renamed Climate Change (Federal Actions))—were carried over from the set of interim CAP goals, while the other 10 are new goal areas. OMB stated on Performance.gov that more detailed action plans for each of the goals, including the specific metrics and milestones that will be used to gauge progress, would subsequently be released. The new CAP goals will also have co-leaders: one from an office within the Executive Office of the President (EOP) and one or more from federal agencies. According to OMB staff, this change was made to ensure that CAP goal leaders can leverage the convening authority of officials from the EOP while also drawing upon expertise and resources from the agency level.

GPRAMA Requirements for Establishing Planned Performance for CAP Goals. GPRAMA requires the Director of OMB to establish, in the annual federal government performance plan, a planned level of performance for each CAP goal for the year in which the plan is submitted and the next fiscal year, as well as quarterly performance targets for the goals.

GPRAMA Requirements for Reporting CAP Goal Performance Information. GPRAMA requires the Director of OMB to publish on Performance.gov information about the results achieved during the most recent quarter and trend data compared to the planned level of performance for each CAP goal.

OMB released the federal government performance plan on Performance.gov concurrently with the fiscal year 2013 budget submission that identified the 14 interim CAP goals. The information on Performance.gov included a goal statement for each of the interim goals that established an overall planned level of performance. 
During the two-year interim goal period, OMB addressed the requirement to report on results achieved during the most recent quarter for each of the CAP goals by publishing five sets of quarterly updates to the interim CAP goals on Performance.gov. The first set of updates, for the fourth quarter of fiscal year 2012, was published in December 2012, and the final set, for the fourth quarter of fiscal year 2013, was published in February 2014. These documents described general accomplishments made to date, specific actions completed, or both. The updates to the Broadband CAP goal, for instance, included short descriptions of general progress made toward each of the five strategies identified for achieving the goal, as well as specific milestones accomplished. The quarterly updates did not, however, consistently identify the required interim planned levels of performance and the data necessary to indicate progress being made toward the CAP goals. Updates to eight of the goals included quarterly, biannual, or annual data that indicated performance achieved to date toward the target identified in the goal statement. Three of the eight goals (Cybersecurity, Energy Efficiency, and Strategic Sourcing) also contained the required annual or quarterly targets that defined planned levels of performance, which allowed for an assessment of interim progress. For example, the Cybersecurity goal's updates stated that the goal would not be met within its established time frame and provided quarterly performance data compared to quarterly targets for the entirety of the goal period to support the statement. In contrast, the updates for the other five goals did not contain annual or quarterly targets, which made it difficult to determine whether interim progress toward the goals' overall planned levels of performance was being made. For example, updates to the Exports goal included data on the total amount of U.S. 
exports by quarter for calendar years 2012 and 2013 but did not include a target level of performance for those years or quarters. Therefore, it was unclear whether the goal’s overall planned level of performance of doubling U.S. exports by the end of 2014 was on track to be met. Furthermore, updates to six interim CAP goals did not include trend data to indicate progress being made towards the goals’ overall planned levels of performance. Figure 2 below identifies the frequency with which data on CAP goal performance were reported, as well as the overall performance CAP goal leaders reported making compared to the goal’s planned level of performance through the fourth quarter of fiscal year 2013. Through our review of information on Performance.gov and interviews with managers of the six interim CAP goals that did not report any data on progress towards the stated goal, we identified reasons that included the following:

Lack of quantitative planned level of performance (targets). The Entrepreneurship and Small Business CAP goal lacked a quantitative performance target. The quarterly updates to the goal explained that efforts were focused on the goal’s 10 sub-goals. Most of these sub-goals, however, also lacked quantitative performance targets. The deputy goal leader told us that some of the sub-goals did not have quantitative targets by design, as goal managers thought it more appropriate to use qualitative milestones to track progress towards them. The quarterly updates to the “Streamline immigration pathways for immigrant entrepreneurs” sub-goal lacked a quantitative target but had a range of qualitative milestones. For example, the Department of Homeland Security and the Department of State established a milestone to identify reforms needed to ease the application and adjudication processes for visas available to certain immigrant entrepreneurs.

Unavailable data.
Some CAP goal managers told us that the data needed to assess and report progress toward their goals’ performance targets were unavailable or not yet being collected. For example, a manager of the Job Training CAP goal told us that staff had not established a baseline number of participants served by federal job training programs against which progress towards the goal could be tracked. In addition, managers of the Real Property CAP goal told us that they did not have data available for tracking progress toward the goal of holding the federal real property footprint at its fiscal year 2012 baseline level. Where key data were not reported, some goal managers took actions to obtain previously unavailable data or developed an alternative approach for assessing progress.

Job Training CAP Goal. The first quarterly update for the Job Training CAP goal, published on Performance.gov in December 2012, stated that federal agencies were surveyed to compile a list of all job training programs in the federal government, including the number of participants served by those programs, and that a working group was developing a baseline for measuring progress towards the goal of preparing 2 million workers with skills training by 2015. A goal manager told us that the deputy goal leader and staff from the PIC gathered baseline information for most of the programs within the scope of the CAP goal, but that they were unable to complete the efforts by the end of the goal period.

Real Property CAP Goal. Managers of the Real Property CAP goal told us that they worked to establish a baseline and metrics for measuring future performance and would be able to report on progress after the goal period ended.

Closing Skills Gaps CAP Goal. A manager of the Closing Skills Gaps goal told us that the goal’s managers decided early on that it did not make sense for each of the goal’s identified mission-critical occupations to have the same skills gaps reduction target.
Instead, managers of the goal’s sub-goals identified efforts to reduce skills gaps in their specific occupations. They identified an individual targeted level of performance for that effort and collected and reported data on progress made towards the target. For instance, managers of the Acquisitions sub-goal established a target for increasing the certification rate of GS-1102 contract specialists to 80 percent. The final quarterly status update to the Closing Skills Gaps CAP goal reported that the target was met and the certification rate increased to 81 percent.

Veteran Career Readiness CAP Goal. The leader of the Veteran Career Readiness CAP goal told us that efforts were made to collect data to assess the veteran employment situation. For instance, she said that an interagency data-gathering working group reviewed sources of available data, integrated those data – such as the unemployment rate for various sub-populations of veterans – into dashboards for senior leadership review, and made proposals to improve data availability. In addition, the Army led a working group to develop a more complete picture of veterans receiving unemployment compensation. She said that these and other efforts led to a concerted effort to improve the availability of data, and to develop and implement metrics measuring career readiness and attendance in a veteran career transition assistance program. However, no data to track progress towards the overall goal were reported during the interim goal period.

As we have previously reported, no picture of what the federal government is accomplishing can be complete without adequate performance information. However, OMB and CAP goal leaders did not identify interim planned levels of performance or targets for most of the interim CAP goals. Furthermore, they established a number of CAP goals for which data necessary to indicate progress towards the goal could not be reported.
In so doing, they limited their ability to demonstrate progress being made towards most of the CAP goals and ensure accountability for results from those who helped to manage the goals.

GPRAMA Requirement for Establishing Milestones

GPRAMA requires the Director of OMB to establish, in the federal government performance plan, clearly defined quarterly milestones for the CAP goals.

GPRAMA Requirement for Reporting on Contributions towards Cross-Agency Priority Goals

GPRAMA requires that OMB identify the agencies, organizations, program activities, regulations, tax expenditures, policies, and other activities that contribute to each CAP goal on Performance.gov. It also requires OMB to make available on the website an assessment of whether relevant agencies, organizations, program activities, regulations, tax expenditures, policies, and other activities are contributing as planned.

In the status updates that were published on Performance.gov, managers of each of the CAP goals reported the general approaches, strategies, or specific initiatives being employed to make progress towards the achievement of the goal, as well as the departments, agencies, and programs that were expected to contribute to goal achievement. For example, the leader of the Science, Technology, Engineering, and Mathematics (STEM) Education CAP goal identified a number of general strategies for making progress towards the achievement of its goal of increasing the number of graduates in STEM subjects by 1 million over the next 10 years, such as “Address the mathematics preparation gap that students face when they arrive at college” and “Identifying and supporting the role of technology and innovation in higher education.” In addition, the goal leader identified a number of programs and goals within four departments and agencies that were likely to contribute in part or in whole to the goal.
Figure 3 below illustrates how this information was presented in the update to the STEM Education CAP goal for the fourth quarter of fiscal year 2013. In a May 2012 report on our work related to the CAP goals, we noted that information on Performance.gov indicated that additional programs with the potential to contribute to each of the CAP goals might be identified over time. We then recommended that OMB review and consider adding to the list of CAP goal contributors the additional departments, agencies, and programs that we identified, as appropriate. OMB agreed with the recommendation, and in the quarterly updates to the CAP goals published in December 2012 and March 2013, OMB added some of the departments, agencies, and programs we identified in our work to some CAP goals’ lists of contributors. For example, we had noted that 12 member agencies of the Trade Promotion Coordinating Committee had not been identified as contributors to the Exports CAP goal. OMB added additional information about contributors to the Exports goal in the update published in December 2012.

During our review, in some cases CAP goal managers told us about additional organizations and program types that contributed to their goals, but which were not identified on Performance.gov or in our previous report. For example, the leader of the STEM Education CAP goal told us that representatives from the Smithsonian Institution led an interagency working group that contributed to key efforts towards achieving the goal. Although the CAP goal updates indicate that the Smithsonian Institution is involved in federal STEM education efforts, it was not identified in a dedicated list of contributors to the goal. We have previously found that federal STEM education programs are fragmented across a number of agencies. We continue to believe that the federal government’s efforts to ensure STEM education programs are effectively coordinated must include all relevant efforts.
Furthermore, the leader of the Broadband CAP goal told us that he is aware that tax deductions available to businesses making capital investments contributed to the goal by incentivizing investments in broadband. We have long referred to such deductions, along with other reductions in a taxpayer’s liability that result from special exemptions and exclusions from taxation, credits, deferrals of tax liability, or preferential tax rates, as tax expenditures. As we have previously reported, as with spending programs, tax expenditures represent a substantial federal commitment to a wide range of mission areas. We have recommended greater scrutiny of tax expenditures. Periodic reviews could help determine how well specific tax expenditures work to achieve their goals and how their benefits and costs compare to those of programs with similar goals. As previously mentioned, GPRAMA also requires OMB to identify tax expenditures that contribute to CAP goals. However, tax expenditures were not reported as contributors to the Broadband CAP goal in the quarterly status updates published on Performance.gov.

Leading practices state that a clear connection between goals and day-to-day activities can help organizations better articulate how they plan to accomplish their goals. In addition, a clear connection between goals and the programs that contribute to them helps to reinforce accountability and ensure that managers keep in mind the results their organizations are striving to achieve. Milestones—scheduled events signifying the completion of a major deliverable or a set of related deliverables or a phase of work—can help organizations demonstrate the connection between their goals and day-to-day activities and that they are tracking progress to accomplish their goals.
Organizations, by describing the strategies to be used to achieve results, including clearly defined milestones, can provide information that would help key stakeholders better understand the relationship between resources and results. See GAO-13-174; GAO-13-228; and GAO, Managing for Results: Critical Issues for Improving Federal Agencies’ Strategic Plans, GAO/GGD-97-180 (Washington, D.C.: Sept. 16, 1997). Many of the planned actions identified in the CAP goal updates, however, lacked clear time frames for completion. Figure 4 below illustrates the “next steps” identified for the Strategic Sourcing CAP goal in the update for the third quarter of fiscal year 2013.

Completion status: The Real Property CAP goal update for the second quarter of fiscal year 2013 identified two planned actions as “next steps”: “After agencies submit their Revised Cost Savings and Innovation Plans to OMB, OMB will evaluate agency plans to maintain their square footage baselines, while balancing mission requirements,” and “Updates on agency square footage baselines and projects are forthcoming and will be posted on Performance.gov.” These two actions were again identified as “next steps” in the update for the third quarter of fiscal year 2013, but no update was provided on the status of the actions.

By establishing planned activities that, in many of the CAP goal updates, did not have information about their alignment with the strategies they supported, their time frames for completion, or their completion status, CAP goal leaders did not fully demonstrate that they had effectively planned to support goal achievement or were tracking progress toward the goal or identified milestones. OMB did not issue formal guidance to CAP goal leaders on the types of information that were to be included in the CAP goal updates, including information about contributors and milestones.
Standards for internal control in the federal government emphasize the importance of documenting policies and procedures to provide a reasonable assurance that activities comply with applicable laws and regulations, and that managers review performance and compare actual performance to planned or expected results and analyze significant differences. OMB staff told us they provided an implementation plan template to goal leaders, which outlined the data elements to be reported in the quarterly status updates. The template was also used to collect information for internal and public reporting. Some CAP goal managers told us that OMB or PIC staff, in their role supporting the collection, analysis, and presentation of data on CAP goal performance, occasionally provided feedback on the information that the individuals submitted in draft updates that OMB reviewed before they were published on Performance.gov. For example, one CAP goal manager told us that during a review of an update submission, PIC staff told him that he should develop additional milestones to be completed during a specific future fiscal year quarter. This is in contrast to the detailed guidance that OMB issued on the types of information that agencies must provide for the updates for agency priority goals (APG), which are also published quarterly on Performance.gov. The APG guidance includes explicit instructions for agencies to identify, as appropriate, the organizations, regulations, tax expenditures, policies, and other activities within and external to the agency that contribute to each APG, as well as key milestones with planned completion dates for the remainder of the goal period.
Because guidance for the types of information that should have been included in the CAP goal updates was never formally established, CAP goal leaders were at a heightened risk of failing to take into account important contributors to the goal and providing incomplete information about milestones that could help demonstrate progress being made.

GPRAMA Requirement for OMB Progress Reviews

GPRAMA requires that, not less than quarterly, the Director of OMB, with the support of the PIC, shall review progress on the CAP goals, including progress during the most recent quarter, overall trends, and the likelihood of meeting the planned level of performance. GPRAMA also requires that, as part of these reviews, OMB categorize goals by their risk of not achieving the planned level of performance and, for those goals most at risk of not meeting the planned level of performance, identify strategies for performance improvement.

As required by GPRAMA, OMB reviewed progress on CAP goals each quarter, beginning with the quarter ending June 30, 2012. This review process consisted of the collection of updated information for each CAP goal by OMB or PIC staff, and the development of a memorandum for the Director of OMB with information on the status of the CAP goals. To develop these memorandums, OMB staff told us that approximately 6 weeks after the end of each quarter, OMB and PIC staff worked with CAP goal leaders to collect updated data and information on goal metrics and milestones, and to update the narratives supporting the data. CAP goal leaders, or staff assisting leaders with the management of efforts related to the goal, would provide this information to OMB using a template for the status updates ultimately published on Performance.gov. In addition to the memorandums developed for the Director of OMB, OMB published more detailed information through the quarterly status updates available on Performance.gov.
OMB and PIC staff told us that to support OMB’s quarterly review efforts, PIC staff were to conduct assessments rating the overall health of implementation efforts and goal leader engagement. They were also to assess the execution status of each goal, including the quality and trend of performance indicators. One purpose of these assessments was to identify areas where risks, such as goal leader turnover, could affect the ability to achieve the planned level of performance. Consistent with this intent, several of the quarterly OMB review memorandums we examined highlighted turnover in goal leader or deputy goal leader positions as risks, and suggested the need to find or approve replacements. Although PIC staff have been tasked with assessing these elements of CAP goal implementation, and said that there was a shared understanding between involved staff as to how these assessments would be carried out, the PIC has not documented its procedures or criteria for conducting these assessments. Standards for internal control in the federal government emphasize the importance of documenting procedures, including those for assessing performance. Without clearly established criteria and procedures, PIC staff lack a means to: consistently assess implementation efforts and execution across all goals; bring any deficiencies, risks, and recommended improvements identified to the attention of leadership; and ensure consistent application of criteria over time. While these quarterly review memorandums identified one goal as being at risk of not achieving the planned level of performance, and identified other instances where progress on goals had been slower than planned, the memorandums did not consistently outline the strategies that were being used to improve performance or address identified risks. 
For example, the Cybersecurity CAP goal was the one goal specifically described as being at risk of not achieving the planned level of performance, both in these memorandums and in the status updates on Performance.gov. Specifically, the memorandum for the third quarter of fiscal year 2012 identified the risk of not achieving the planned level of performance, and outlined seven specific risks facing the goal and the steps being taken to mitigate them. Similarly, the memorandum for the second quarter of fiscal year 2013 also acknowledged that some agencies were at risk of not meeting their Cybersecurity CAP goal targets. However, in contrast to the earlier memorandums, no information was included about the specific steps that were being taken to mitigate these risks, although information on planned and ongoing actions to improve government-wide implementation was included in the milestones section of the status update for that quarter on Performance.gov. The memorandum for the fourth quarter of fiscal year 2012 also acknowledged that the pace of progress on the STEM Education and Closing Skills Gaps goals had been slower than expected. While the memorandum stated that additional OMB attention was needed to support implementation and assure sufficient progress, no information on the specific strategies being employed to improve performance was mentioned. According to OMB staff, however, these memorandums were used to inform subsequent conversations with OMB leadership, which would build on the information presented in the memorandums. Furthermore, because the data necessary to track progress for some goals were unavailable, the Director of OMB would not have been able to consistently review progress for all CAP goals, or make a determination about whether some CAP goals were at risk of meeting their planned levels of performance. 
This fact was acknowledged in the quarterly review memorandums for quarters one and two of fiscal year 2013, which noted that progress on three goals (Entrepreneurship and Small Business, Job Training, and STEM Education) was difficult to track, and that additional work was needed on data collection. However, no information on the specific steps that were being taken to address these shortcomings was included. A lack of specific information about the steps being taken to mitigate identified risk areas and improve performance could hinder the ability of OMB leadership—and others—to adequately track the status of efforts to address identified deficiencies or risks and to hold officials accountable for taking necessary actions.

GPRAMA Requirement for Goal Leader and Agency Involvement in Progress Reviews

As part of the quarterly review process, GPRAMA requires that the Director of OMB review each priority goal with the appropriate lead government official, and include in these reviews officials from the agencies, organizations, and program activities that contribute to the achievement of the goal.

According to OMB staff, to encourage goal leaders and contributing agencies to take ownership of efforts to achieve the goals, OMB gave goal leaders flexibility to use different approaches to engage agency officials and review progress at the CAP-goal level. While guidance released by OMB in August 2012 encouraged goal leaders to leverage existing interagency working groups, committees, and councils in the management of the goals as much as practicable, it did not include information on the purpose of reviews, expectations for how reviews should be conducted to maximize their effectiveness as a tool for performance management and accountability, or the roles that CAP goal leaders and agency officials should play in the review process.
Again, standards for internal control in the federal government emphasize the importance of documenting procedures for reviewing performance against established goals and objectives. This is in contrast to the detailed guidance that OMB released for agency priority goal and agency strategic objective reviews, which outlined the specific purposes of the reviews, how frequently they should be conducted, the roles and responsibilities of agency leaders involved in the review process, and how the reviews should be conducted. We also found that this guidance for reviews at the agency level was broadly consistent with the leading practices for performance reviews that we previously identified. While no official guidance was published to guide how reviews involving goal leaders and staff from contributing agencies could be conducted for the CAP goals, OMB staff said the principles of the guidance released for agency reviews, which reflected many of the leading practices, were referenced in conversations with CAP goal leaders and teams. OMB has emphasized that flexibility is needed to ensure that goal leaders can use review processes that are appropriate given the scope of interagency efforts, the number of people involved, and the maturity of existing reporting and review processes. The guidance for agency reviews gave agencies flexibility to design their performance review processes in a way that would fit the agency’s mission, leadership preferences, organizational structure, culture, and existing decision-making processes. In our previous work, we detailed how several federal agencies had implemented quarterly performance reviews in a manner consistent with leading practices, but which were also tailored to the structures, processes, and needs of each agency. In this way, flexible implementation of review processes is possible within a framework that encourages the application of leading practices.
A lack of clear expectations for how progress should be reviewed at the CAP-goal level resulted in a number of different approaches being used by goal leaders to engage officials from contributing agencies to review progress on identified goals and milestones, ranging from regular in-person review meetings led by the CAP goal leader to the review of written updates provided to the goal leader by officials from contributing agencies. See appendix IV for more detailed information on the various processes used by goal leaders to collect data on, and review progress towards, identified goals. Instituting review processes consistent with the leading practices we previously identified can help ensure that reviews include meaningful performance discussions, provide opportunities for oversight and accountability, and drive performance improvement. Taken together, these leading practices emphasize the importance of leadership involvement in the review process, data collection and review meeting preparation, participation by key officials, and rigorous follow-up. Through our evaluation of how goal leaders and contributing agency officials reviewed progress towards the interim goals, we identified two CAP goals—Cybersecurity and Closing Skills Gaps—and one sub-goal—the Entrepreneurship and Small Business sub-goal on improving access to government services and information (BusinessUSA sub-goal)—where goal managers instituted in-person review processes with officials from contributing agencies that were broadly consistent with the full range of leading practices for reviews, which we have summarized in four categories below. The processes used by other CAP goal leaders to engage agency officials in the review of progress did not reflect the full range of leading practices.

Leadership Involvement.
Leading practices indicate that leaders should use frequent and regular progress reviews as a leadership strategy to drive performance improvement and as an opportunity to hold people accountable for diagnosing performance problems and identifying strategies for improvement. The direct and visible engagement of leadership is vital to the success of such reviews. Leadership involvement helps ensure that participants take the review process seriously and that decisions and commitments can be made. The goal leaders managing the Cybersecurity and Closing Skills Gaps goals, as well as the BusinessUSA sub-goal, were directly involved in leading in-person reviews for these goals, and in using them as opportunities to review progress, identify and address performance problems, and hold agency officials accountable for progress on identified goals and milestones, as detailed in table 1.

Data Collection and Review Meeting Preparation. Leading practices also indicate that those managing review processes should have the capacity to collect, analyze, and communicate accurate, useful, and timely performance data, and should rigorously prepare for reviews to enable meaningful performance discussions. The collection of current, reliable data on the status of activities and progress towards goals and milestones is critical so that those involved can determine whether performance is improving, identify performance problems, ensure accountability for fulfilling commitments, and learn from efforts to improve performance. The ability to assess data to identify key trends and areas of strong or weak performance, and to communicate this to managers and staff effectively through materials prepared for reviews, is also critical.
As detailed in table 2, those supporting the Cybersecurity and Closing Skills Gaps goals, and the BusinessUSA sub-goal, instituted processes to regularly collect and analyze data on progress towards identified goals and milestones, and to ensure these data would be communicated through materials prepared for review meetings.

Participation by Key Officials. Leading practices indicate that key players involved in efforts to achieve a goal should attend reviews to facilitate problem solving. This is critical as their participation enables those involved to break down information silos, and to use the forum provided by the review to communicate with each other, identify improvement strategies, and agree on specific next steps. Reviews for both the Cybersecurity and Closing Skills Gaps CAP goals, and the BusinessUSA sub-goal, were structured so that relevant agency officials playing a key role in efforts to carry out the goal were included, as detailed in table 3.

Review Follow-Up. Leading practices indicate that participants should engage in sustained follow-up on issues identified during reviews, which is critical to ensure the success of the reviews as a performance improvement tool. Important follow-up activities include identifying and documenting specific follow-up actions stemming from reviews, those responsible for each action item, as well as who will be responsible for monitoring and follow-up. Follow-up actions should also be included as agenda items for subsequent reviews to hold responsible officials accountable for addressing issues raised and communicating what was done. Goal managers for the Cybersecurity and Closing Skills Gaps CAP goals, as well as the BusinessUSA sub-goal, took steps to follow up on action items identified in these meetings, and to ensure that steps were taken towards their completion, as detailed in table 4.

Review Effects.
Goal leaders and managers we interviewed said that these review processes were valuable in driving improved performance, establishing a greater sense of accountability for progress on the part of contributors, and in providing a forum for interagency communication and collaboration. For example, according to DHS staff involved in the management of the Cybersecurity CAP goal, implementation of Personal Identity Verification (PIV) requirements across the federal government had been stagnant for several years prior to the introduction of cybersecurity as a CAP goal. The review process was used to hold agencies accountable for improved PIV implementation, which helped bring an increased focus on the issue and drive recent progress. Since the reviews were instituted in 2012, DHS has reported improved PIV adoption in civilian agencies, which has increased from 1.24 percent in fiscal year 2010, to 7.45 percent in fiscal year 2012, to 19.67 percent in the fourth quarter of fiscal year 2013. According to data from DHS, while still falling short of the target, this has contributed to the overall increase in PIV adoption across the federal government—including both civilian agencies and the Department of Defense—from 57.26 percent in fiscal year 2012 to 66.61 percent in the fourth quarter of fiscal year 2013. DHS staff also added that agencies generally had not previously collaborated on cybersecurity issues or worked to identify best practices. According to DHS staff, the reviews have created an important point of collaboration between DHS, OMB, National Security Staff, and agencies, and provided an opportunity to inform agencies of best practices and connect them with other agencies that are meeting their targets to learn from them.

Similarly, OPM officials and sub-goal leaders involved in the management of the Closing Skills Gaps CAP goal said that the quarterly review meetings were a critical means to ensure sub-goal leaders and staff were demonstrating progress.
Having sub-goal leaders report out on progress, and hear about the progress made in other sub-goal areas, provided additional pressure for continuous improvement and the need to remain focused on driving progress towards their goals. Having the goal leader lead the review was also a way to demonstrate leadership commitment to the achievement of each sub-goal. According to OPM officials and sub-goal leaders, the review meetings also served as an important forum for discussing innovative approaches being taken to address skills gaps in different areas, opportunities for collaboration to address challenges shared by different sub-goals, and how leaders could leverage the efforts of other sub-goals to drive progress on their own.

The BusinessUSA sub-goal leader said that having the initiative serve as the basis for a CAP sub-goal elevated its cross-cutting nature. In addition to reviewing performance information and the status of deliverables, discussions at interagency Steering Committee meetings were used to discuss how contributors could work together to meet the initiative’s performance goals. This communication and coordination led to connections between agencies and to discussions about how programs could be working in a more integrated way. For example, these discussions were used to identify ways that programs could more effectively integrate program information on the BusinessUSA website to increase customer satisfaction.

We found that the processes used by other CAP goal leaders to engage agency officials in the review of progress, which are summarized in appendix IV, did not reflect the full range of leading practices. For example, the process for reviewing progress on the Job Training CAP goal involved staff from the PIC collecting updates on recent milestones from agencies, which were then compiled in the quarterly status update and reviewed by the goal leader.
This approach was used by the goal leaders for the Broadband and STEM Education CAP goals to review progress as well. While goal leaders and managers for these goals indicated that they used the collection and review of information as an opportunity to communicate with officials from contributing agencies, this approach contrasts with OMB guidance for reviews of agency priority goals, which states explicitly that performance reviews should not be conducted solely through the sharing of written communications. As OMB noted in its guidance, in-person engagement of leaders in performance reviews greatly accelerates learning and performance improvement, and personal engagement can demonstrate commitment to improvement, ensure coordination across agency silos, and enable rapid decision making. While not employing the full range, goal leaders for a number of goals did use processes that reflected one or more leading practices. For example, many CAP goal leaders led or participated in interagency meetings with representatives of contributing agencies. While these were used to facilitate interagency communication and collaboration on the development of plans and policies, it was unclear whether many of these meetings were consistently used to review progress on identified CAP goals and milestones. The goal leader for the Strategic Sourcing CAP goal used processes that reflected leadership involvement, participation by key officials, and the collection and analysis of relevant data. Specifically, according to goal managers, the goal leader led regular meetings of the Strategic Sourcing Leadership Council (SSLC), which were attended by senior procurement officials from eight agencies that combine to make up almost all of the federal government’s total procurement spending. 
To prepare for each SSLC meeting, staff from OMB’s Office of Federal Procurement Policy (OFPP) held a meeting for supporting staff from each agency, who would then prepare the SSLC member from their agency for the issues to be discussed in the SSLC meeting. OFPP also established a regular data collection process where each agency would report on its adoption and spending rates for two strategic sourcing options, which would then be used for the purposes of reporting on the CAP goal. However, it was unclear how regularly, if at all, SSLC meetings were used to engage agency officials in the review of data on agency adoption of, and spending on, strategic sourcing options, or how regularly meetings were used to review progress that was being made towards the CAP goal. It was also unclear what mechanisms, if any, were used to ensure rigorous follow-up on issues raised in these meetings, a key leading practice, as there were no official meeting minutes maintained. The lack of an official record could hinder follow-up and accountability for any identified actions that need to be taken. Representatives of some goals stated that it was difficult to isolate the impact of the CAP goal designation, and its associated reporting and review requirements, on performance and collaboration. According to some goal managers, because their interim goals were based on initiatives that had been previously established in executive orders or Presidential memorandums, much of the interagency activity supporting their efforts would have happened without the CAP goal designation and its reporting and review requirements. For example, a manager for the Data Center Consolidation CAP goal told us that the previously established Federal Data Center Consolidation Initiative was used to drive progress and that the CAP goal designation and quarterly reporting and review requirements had little impact. 
Similarly, Job Training CAP goal managers said that interagency collaboration on job training issues had been established prior to the creation of the CAP goal, that the goal’s reporting and review requirements were incidental to the contributors’ ongoing work, and that it did not add an additional level of accountability for the completion of job training initiatives. However, this is a goal where no data were reported to demonstrate its impact on federal job training programs, and which was identified in multiple OMB reviews as having slower than anticipated progress due, in part, to extended periods of time in which there was no deputy CAP goal leader to provide support necessary to improve coordination and collaboration. While many CAP goal leaders and staff we interviewed noted the progress they had made with their existing interagency meetings and approaches, a lack of clear expectations or guidance for how review processes at the CAP goal level should be carried out can lead to a situation where reviews are implemented in a manner that is not informed by, or fully consistent with, leading practices. This could result in missed opportunities to realize the positive effects on performance and accountability that can stem from the implementation of review processes that regularly and consistently involve leaders and agency officials in the analysis of performance data to identify and address performance deficiencies, and use rigorous follow-up to ensure accountability for commitments. Many of the meaningful results that the federal government seeks to achieve require the coordinated efforts of more than one federal agency. GPRAMA’s requirement that OMB establish CAP goals offers a unique opportunity to coordinate cross-agency efforts to drive progress in priority areas. That opportunity will not be realized, however, if the CAP goal reporting and review requirements and leading review practices are not followed. 
The reporting and review requirements for the CAP goals, and leading practices for the reviews, are designed to ensure that relevant performance information is used to improve performance and results, and that OMB and goal leaders actively lead efforts to engage all relevant participants in collaborative performance improvement initiatives and hold them accountable for progress on identified goals and milestones. OMB reported performance information in the quarterly CAP goal status updates it published on Performance.gov. While updates for most goals reported data on performance towards the identified planned level of performance, the information in the updates did not always present a complete picture of progress towards identified goals and milestones. For example, while updates for 8 of the 14 goals included data that indicated performance towards the identified overall planned level of performance, only 3 also contained annual or quarterly targets that allowed for an assessment of interim progress. Updates for the other 6 of the 14 goals did not report on performance towards the goal’s primary performance target because the goal was established without a quantitative target or because goal managers were unable to collect the data needed to track performance. In other cases, planned activities that were identified as contributing to the goal were sometimes missing important elements, including alignment with the strategies for goal achievement they supported, a time frame for completion, or information on their implementation status. Because many of the updates gave an incomplete picture of progress, goal leaders and others had a limited ability to ensure accountability for the achievement of targets and milestones. Holding regular progress reviews that are consistent with GPRAMA requirements and the full range of leading practices can produce positive effects on performance and collaboration.
Engaging contributors in regular reviews of data on performance can help ensure interagency efforts are informed by information on progress towards identified goals and milestones, which can be used to identify and address areas where goal or milestone achievement is at risk. Reviews can also be used to reinforce agency and collective accountability for the achievement of individual and shared outcomes, helping to ensure that efforts to improve performance or address identified risks are implemented. Lastly, reviews can be used to foster greater collaboration, ensuring opportunities for communication and coordination between officials involved in efforts to achieve shared outcomes. While OMB and CAP goal leaders instituted processes for reviewing progress on the interim CAP goals, if GPRAMA requirements and leading practices for reviews are not consistently followed, it may result in missed opportunities to improve performance, hold officials accountable for achieving identified goals and milestones, and ensure agency officials are coordinating their activities in a way that is directed towards the achievement of shared goals and milestones.

We recommend that the Director of OMB take the following three actions:

- Include the following in the quarterly reviews of CAP goal progress, as required by GPRAMA: a consistent set of information on progress made during the most recent quarter, overall trends, and the likelihood of meeting the planned level of performance; goals at risk of not achieving the planned level of performance; and the strategies being employed to improve performance.
- Work with the PIC to establish and document procedures and criteria to assess CAP goal implementation efforts and the status of goal execution, to ensure that the PIC can conduct these assessments consistently across all goals and over time.
- Develop guidance similar to what exists for agency priority goal and strategic objective reviews, outlining the purposes of CAP goal progress reviews, expectations for how the reviews should be carried out, and the roles and responsibilities of CAP goal leaders, agency officials, and OMB and PIC staff in the review process.

To ensure that OMB and CAP goal leaders include all key contributors and can track and report fully on progress being made towards CAP goals overall and each quarter, we recommend that the Director of OMB direct CAP goal leaders to take the following four actions:

- Identify all key contributors to the achievement of their goals;
- Identify annual planned levels of performance and quarterly targets for each CAP goal;
- Develop plans to identify, collect, and report data necessary to demonstrate progress being made towards each CAP goal, or develop an alternative approach for tracking and reporting on progress quarterly; and
- Report the time frames for the completion of milestones; the status of milestones; and how milestones are aligned with strategies or initiatives that support the achievement of the goal.

We provided a draft of this report for review and comment to the Director of OMB, the Secretaries of Commerce and Homeland Security, the Director of the Office of Personnel Management, the Administrator of the Small Business Administration, as well as the officials we interviewed to collect information on the interim CAP goals from the Council on Environmental Quality, Department of Education, Department of Labor, Department of Veterans Affairs, National Science Foundation, and the Office of Science and Technology Policy. OMB and PIC staff provided oral comments on the draft, and we made technical changes as appropriate. OMB staff generally agreed to consider our recommendations.
For example, while they said that OMB and PIC staff will continue to work directly with CAP goal leaders to convey suggested practices for reviewing performance, they will consider referencing principles and practices for data-driven performance reviews in future Circular A-11 guidance related to the management of CAP goals. Furthermore, while they noted that quantitative performance data for some key measures may not be available on a quarterly basis, they said that they will continue to work to develop more robust quarterly targets. Officials or staff from the Departments of Commerce and Veterans Affairs, and the Office of Science and Technology Policy provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Director of OMB as well as appropriate congressional committees and other interested parties. The report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. This report is part of our response to a mandate that we evaluate the implementation of the federal government priority goals under the GPRA Modernization Act of 2010 (GPRAMA). Due to the timing of our work, we focused on the implementation of the reporting and review requirements for the 14 interim cross-agency priority (CAP) goals established in February 2012. Specifically, this report assesses (1) what is known about progress made towards the interim CAP goals; and (2) how, if at all, quarterly progress reviews reflected GPRAMA requirements and leading practices for data-driven reviews, as well as how they contributed to improved cross-agency performance and collaboration. To address these objectives, we interviewed representatives of 13 of the 14 interim goals. For 8 of the 13 goals we spoke directly with the goal leader or deputy goal leader, along with, in some cases, staff from the Office of Management and Budget (OMB) and agencies involved in supporting efforts related to the goals. For the other five goals (Closing Skills Gaps, Cybersecurity, Data Center Consolidation, Exports, and Job Training) we met with agency officials or OMB staff playing a key role in the management of interagency efforts related to the CAP goal. During these interviews, we asked officials questions concerning how the goal leader and officials from contributing agencies reviewed progress on the goal; the interagency groups used to engage agency officials and manage efforts related to the goal; the role that staff from OMB and the Performance Improvement Council (PIC) played in the review process; and any impact the CAP goal designation and review processes had on performance, collaboration, and accountability. We also participated in interviews with the goal leaders of 11 agency priority goals that were aligned with, or identified as a contributor to, a CAP goal. To further address the first objective, and assess what is known about progress made toward the interim CAP goals, we analyzed information on identified performance metrics and milestones included in the quarterly status updates for each CAP goal published on Performance.gov. We also analyzed relevant information collected through our interviews with CAP goal leaders, deputies, and supporting staff. We compared the data and information made available through the quarterly status updates with requirements in GPRAMA that Performance.gov include information for each goal on results achieved during the most recent quarter and overall trend data.
To assess the reliability of performance data and information available through Performance.gov we collected information from OMB and PIC staff, and CAP goal representatives, about data quality control procedures. We determined that the data and information were sufficiently reliable for our analysis of what was reported on Performance.gov about progress towards identified goals and milestones. To address the second objective, we reviewed quarterly review memorandums developed for OMB leadership for five quarters, from the third quarter of fiscal year 2012 to the third quarter of fiscal year 2013. We compared the contents of these review memorandums with requirements for the OMB quarterly reviews established in GPRAMA. We also interviewed staff from OMB and the PIC to discuss the various approaches being used to review progress at the CAP-goal level, the data collection and review process, and the role of the PIC in supporting the quarterly review process. To further address the second objective we reviewed (where available) documents created for interagency meetings, such as meeting agendas, presentation materials, meeting notes, and attendee lists. We also observed one quarterly review meeting held for the Closing Skills Gap goals, and conducted interviews with sub-goal leaders from the Closing Skills Gaps and Entrepreneurship and Small Business CAP goals. These interviews were used to learn more about the involvement of officials from contributing agencies in the quarterly review process for each CAP goal, the processes that had been established to review progress at the sub-goal level, and to gain a more complete picture of participating agency officials’ perceptions of the impact of the CAP goals and review processes. We selected these sub-goals through a two-part process.
Of the eight CAP goals for which we had completed interviews through the end of 2013, the team selected one goal for which the goal leader held quarterly meetings dedicated to reviewing progress toward the CAP goal with the goal’s contributors (Closing Skills Gaps). The team also selected a second goal for which the goal leader used a review process that did not rely on quarterly meetings between the goal leader and contributing agencies (Entrepreneurship and Small Business). To ensure that the team would have at least one goal representing each type of goal, the team also ensured that one goal would be an outcome-oriented policy goal and one goal would be a management goal. For both the Closing Skills Gaps and Entrepreneurship and Small Business CAP goals the team then selected four sub-goals for interviews. For the Closing Skills Gaps CAP goal the team interviewed the sub-goal leaders for the Economist; Information Technology/Cybersecurity; Science, Technology, Engineering, and Mathematics (STEM) Education; and Human Resources sub-goals. For the Entrepreneurship and Small Business CAP goal the team held interviews with the sub-goal leaders for the sub-goals to “Accelerate commercialization of Federal research grants,” “Advance federal small business procurement goals,” “Improve access to government services and information,” and “Streamline immigration pathways for immigrant entrepreneurs.” These were selected to ensure that the team would capture sub-goals in which a range of approaches for measuring and reviewing progress were being used. Specifically, sub-goals were selected to ensure the team would have some that did, and did not, hold regular meetings, and some that did, and did not, track quantitative measures of performance or milestones with time frames. Our selection of these sub-goals was nonstatistical and therefore our findings from these interviews are not generalizable to the other CAP goals.
We compared what we learned about review processes at the CAP goal and sub-goal levels, through interviews and the collection of documentation, used by leaders from each goal against leading practices for performance reviews previously identified by GAO. Because the scope of our review was to examine the implementation of quarterly progress reviews, we did not evaluate whether these goals were appropriate indicators of performance, sufficiently ambitious, or met other dimensions of quality. We conducted our work from May 2013 to June 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Goal Statement

- As part of expanding all broadband capabilities, ensure 4G wireless broadband coverage for 98 percent of Americans by 2016.
- Close critical skills gaps in the federal workforce to improve mission performance. By September 30, 2013, close the skills gaps by 50 percent for three to five critical federal government occupations or competencies, and close additional agency-specific high-risk occupation and competency gaps.
- Executive branch departments and agencies will achieve 95 percent implementation of the administration’s priority cybersecurity capabilities by the end of FY 2014. These capabilities include strong authentication, trusted Internet connections, and continuous monitoring.
- Improve information technology service delivery, reduce waste, and save $3 billion in taxpayer dollars by closing at least 2,500 data centers by fiscal year 2015.
- Increase energy productivity (amount of real gross domestic product in dollars/energy demand) 50 percent by 2030.
- Increase federal services to entrepreneurs and small businesses with an emphasis on 1) startups and growing firms and 2) underserved markets.
- Double U.S. exports by the end of 2014.
- The federal government will achieve a payment accuracy rate of 97 percent by the end of 2016.
- Ensure our country has one of the most skilled workforces in the world by preparing 2 million workers with skills training by 2015 and improving the coordination and delivery of job training services.
- The federal government will maintain the fiscal year 2012 square footage baseline of its office and warehouse inventory.
- In support of the president’s goal that the U.S. have the highest proportion of college graduates in the world by 2020, the federal government will work with education partners to improve the quality of STEM education at all levels to help increase the number of well-prepared graduates with STEM degrees by one-third over the next 10 years, resulting in an additional 1 million graduates with degrees in STEM subjects.
- Reduce the costs of acquiring common products and services by agencies’ strategic sourcing of at least two new commodities or services in both 2013 and 2014 that yield at least a 10 percent savings. In addition, agencies must increase their use of Federal Strategic Sourcing Initiative vehicles by at least 10 percent in both fiscal years 2013 and 2014.
- By 2020, the federal government will reduce its direct greenhouse gas emissions by 28 percent and its indirect greenhouse gas emissions by 13 percent (from a 2008 baseline).
- By September 30, 2013, increase the percent of eligible service members who will be served by career readiness and preparedness programs from 50 percent to 90 percent in order to improve their competitiveness in the job market.

Goal leaders for 13 of 14 cross-agency priority (CAP) goals leveraged interagency groups for the purposes of coordinating efforts designed to contribute to progress on the cross-agency priority goal.
This appendix includes information on the membership of these interagency groups, the frequency with which they met, and the purposes of those meetings.

Membership: Fourteen agencies with federal property management or transportation funding responsibilities, and broadband or other related expertise. Purpose: To discuss best practices on broadband-related land management issues, and actions to implement an executive order on accelerating broadband infrastructure deployment.

Membership: Senior officials from agencies considered major spectrum stakeholders and users of spectrum, including the Departments of Defense, Justice, Homeland Security (DHS), Commerce, and the National Aeronautics and Space Administration (NASA). Purpose: To provide advice on spectrum policy and strategic plans, discuss commercial transfer of federal agency spectrum, and resolve issues affecting federal/non-federal users.

Purpose: To review progress on performance metrics and actions taken to close skills gaps in each of the six sub-goal areas.

Membership: Officials from the National Institute of Standards and Technology, General Services Administration (GSA), DHS, National Security Staff, Office of Management and Budget (OMB), and the Performance Improvement Council. Frequency: Twice each quarter. Beginning in 2013, a meeting was held each quarter prior to the collection of data on agency progress on cybersecurity metrics; another was held after data had been collected and analyzed to review and discuss agency progress.

Membership: Data center consolidation program managers from 24 federal agencies. Purpose: To identify and disseminate key information about solutions and processes to help agencies make progress towards data center consolidation goals.

No interagency groups were used to manage efforts related to this goal; interagency groups were used to manage efforts at the sub-goal level.

Membership: Senior-level representatives from 24 participating agencies.
Purpose: To oversee strategy, resources, and timetables for the development of the BusinessUSA website, resolve interagency issues, and ensure department/agency viewpoints are represented.

Membership: Mid-to-senior level program, technology, and customer service managers from 24 participating agencies. Purpose: To assist the BusinessUSA program management office in coordinating the design, development, and operation of the BusinessUSA website, and to track and monitor performance metrics on customer service and outcomes.

Membership: SBIR/STTR program managers from 11 agencies, and coordinating officials from the Small Business Administration (SBA) and the Office of Science and Technology Policy (OSTP). Purpose: To discuss the development of SBIR/STTR program policy directives, the implementation of requirements, outreach and access to the programs, and program best practices.

Purpose: To provide updates on relevant agency activities and identify opportunities for interagency collaboration.

Purpose: To share best practices for expanding contracting to small and disadvantaged businesses, and reviewing progress on agency simplified-acquisition threshold goals.

Purpose: To provide officials from the White House, SBA, Commerce, and OMB with an opportunity to meet with senior agency leaders and discuss the steps agencies are taking to increase small business contracting.

Membership: Principals (cabinet secretaries and deputies) and staff from 20 agencies involved in export policy, service, finance, and oversight. Purpose: To review progress on deliverables supporting the National Export Strategy, communications, and the status of individual export promotion initiatives.

Frequency: Bi-weekly to monthly. Purpose: To review the status of agency implementation of Do Not Pay requirements and milestones, and guidance for implementation.

Membership: Officials from agencies with “high-priority” programs, as designated by OMB. Purpose: To discuss the government-wide improper payment initiative and overall strategy.
Purpose: To discuss expanding access to job training performance data, and opportunities to promote its use at the local, state, and federal levels.

Purpose: Among other policy discussions, to discuss the development of agency “Freeze the Footprint” plans.

Purpose: To discuss policy to guide the federal government on sustainability issues, and to discuss sustainability goals.

Frequency: Every 4-6 weeks, during the development of the 5-year strategic plan. Purpose: To develop a 5-year strategic plan for federal support for STEM education.

Membership: Representatives from the Departments of Defense, Energy, and Veterans Affairs (VA), DHS, HHS, GSA, NASA, and SBA. Purpose: To discuss the development and adoption of strategic sourcing options.

Purpose: To review ongoing policy initiatives and opportunities for collaboration between agencies.

Purpose: To develop and implement a redesigned veterans transition program.

meetings with officials from each agency. According to OMB staff, during these reviews participants reviewed metrics from across the agency’s information technology portfolio, which included, in some cases, those related to data center consolidation. Each quarter staff supporting the goal leader would collect updated information on contributing agency priority goals for the purposes of updating the quarterly status update. Each quarter the deputy goal leader would collect updated information on goals and milestones from the leaders of each of 10 sub-goals for the purposes of developing the quarterly status update. The deputy goal leader would follow up with sub-goal leaders or agency officials, as necessary, to address issues or questions about the status of efforts. The goal leader would then review and approve the quarterly status update. Some sub-goal leaders would hold in-person meetings with officials from contributing agencies to, among other things, review progress on identified goals and milestones. See appendix III for information on interagency groups that were used to manage efforts for four of the sub-goals.
Each quarter the goal leader, with the assistance of staff from Commerce and the PIC, would collect updated information on relevant agency metrics and activities for the purposes of updating the quarterly status update. Periodic meetings of the Export Promotion Cabinet/Trade Promotion Coordinating Committee, and its Small Business and other working groups, were also used to discuss the status of export promotion efforts and progress on specific deliverables. Each year OMB would collect and report data on agency improper payment rates. Staff from the OMB Office of Federal Financial Management led monthly meetings with agency representatives to discuss the implementation of the Do Not Pay initiative, which was designed to contribute to the reduction of improper payments. The Department of Treasury, as the agency leading implementation of the Do Not Pay initiative, would track agency progress on implementation milestones. Each quarter staff from the PIC would collect updated information on progress towards agency milestones, and work with the goal leader on the development of the quarterly status update. After this goal was revised in the second quarter of 2013, a new review process to track agency adherence to the goal was under development by OMB. Twice a year the Council on Environmental Quality (CEQ) would collect and review quantitative and qualitative data on agency progress towards established sustainability goals, including the reduction of agency greenhouse gas emissions. Following the collection of these data, the goal leader hosted meetings of the Steering Committee on Federal Sustainability, which were used to discuss federal sustainability policy and progress on sustainability goals. According to CEQ staff, the goal leader and CEQ staff would meet with representatives from agencies about sustainability issues on an ad hoc basis. 
In instances where there was a gap between an agency’s actual performance and the target established in that area, the goal leader, or other staff from CEQ, would meet with officials from that agency to discuss ways to address the performance gap. Each quarter the goal leader would collect updated information on agency milestones for inclusion in the quarterly status updates. Progress on some identified strategies to achieve the goal, such as the National Science Foundation’s efforts to improve undergraduate STEM education, was reviewed at the agency level. After progress was reviewed at the agency level, the information was passed on to the goal leader and reported publicly in the quarterly status update. Each quarter the General Services Administration would collect data on agency adoption and spending rates for the Federal Strategic Sourcing Initiative (FSSI) solutions for domestic delivery and office supplies. The Strategic Sourcing Leadership Council met bi-monthly to guide the creation and adoption of new FSSI options and, as part of that effort, might review quarterly data on agency adoption and spending rates. According to the goal leader, each month staff from the Departments of Defense and Veterans Affairs, and the PIC, would provide data for “one-pagers” and other status update documents with key pieces of relevant information, such as the veterans’ unemployment rate and the number of active employers on the Veterans Job Bank. These one-pagers would be used to inform regular Interagency Policy Council (IPC) discussions, along with more specific briefing memorandums, which were used to cover the latest issues, keep stakeholders focused on overall outcomes, and to inform discussion around specific outliers. Some of the data in these one-pagers would also be incorporated into the quarterly status updates.
More frequently, issue papers and data analysis were provided to Veterans Employment Initiative (VEI) Task Force and IPC members as needed to address topical issues. Ongoing milestone reviews held by the VEI Task Force and its associated working groups on Education, Employment, Transition, and Entrepreneurship provided an opportunity to discuss strategies being employed to improve performance.

This appendix includes the print version of the text and rollover graphics contained in interactive figure 2:

- Overall planned level of performance: achieve 95 percent implementation of the Administration’s priority cybersecurity capabilities by the end of fiscal year 2014.
- Data reported for primary performance goal: save $3 billion in taxpayer dollars by closing at least 2,500 data centers by fiscal year 2015. “Agencies have already closed 640 data centers…”
- Overall planned level of performance: double U.S. exports by the end of 2014. Frequency of data reporting for overall goal: quarterly.
- Agencies’ strategic sourcing of at least two new commodities or services in both 2013 and 2014 that yield at least a 10 percent savings. In addition, agencies must increase their use of Federal Strategic Sourcing Initiative vehicles by at least 10 percent in both fiscal years 2013 and 2014.
- Ensure 4G wireless broadband coverage for 98 percent of Americans by 2016.
- Overall planned level of performance: achieve a payment accuracy rate of 97 percent by the end of 2016. “Data Not Reported”
- Increase federal services to entrepreneurs and small businesses with an emphasis on (1) startups and growing firms and (2) underserved markets. “Data Not Reported” “Data Not Reported”
- The Federal Government will maintain the fiscal year 2012 square footage baseline of its office and warehouse inventory. “Data Not Reported”
- Increase the number of well-prepared graduates with STEM degrees by one-third over the next 10 years, resulting in an additional 1 million graduates with degrees in STEM subjects. Frequency of data reporting for overall goal: “Data Not Reported” “Data Not Reported”

In addition to the contact named above, Elizabeth Curda (Assistant Director) and Adam Miles supervised the development of this report. Virginia Chanley, Jehan Chase, Steven Putansu, Stacy Ann Spence, and Dan Webb made significant contributions to this report. Deirdre Duffy and Robert Robinson also made key contributions.
The federal government faces complex, high-risk challenges, such as protecting our nation's critical information systems. Effectively managing these challenges is essential for national and economic security and public health and safety. However, responsibility for addressing these challenges often rests with multiple agencies. To effectively address them, shared goals and cross-agency collaboration are fundamental. This report responds to GAO's mandate to evaluate the implementation of GPRAMA. It assesses (1) what is known about progress made towards the interim CAP goals; and (2) how, if at all, quarterly progress reviews reflected GPRAMA requirements and leading practices for reviews, as well as how reviews contributed to improved cross-agency performance and collaboration. To address these objectives, GAO analyzed CAP goal status updates and other documents from OMB and CAP goal progress-review meetings, and interviewed OMB staff and CAP goal representatives. GAO compared this information to GPRAMA requirements and to leading practices for performance reviews previously reported on by GAO. CAP Goal Progress. The GPRA Modernization Act of 2010 (GPRAMA) requires the Office of Management and Budget (OMB) to coordinate with agencies to: (1) establish outcome-oriented, federal government priority goals (known as cross-agency priority, or CAP, goals) with annual and quarterly performance targets and milestones; and (2) report quarterly on a single website now known as Performance.gov the results achieved for each CAP goal compared to the targets. In February 2012, OMB identified 14 interim CAP goals and subsequently published five quarterly updates on the status of the interim CAP goals on Performance.gov. While updates for eight of the goals included data that indicated performance towards an overall planned level of performance, only three also contained annual or quarterly targets that allowed for an assessment of interim progress. 
Updates for the other six goals did not report on progress towards a planned level of performance because the goals lacked either a quantitative target or the data needed to track progress. The updates on Performance.gov also listed planned activities and milestones contributing to each goal, but some did not include relevant information, including time frames for the completion of specific actions and the status of ongoing efforts. The incomplete information in the updates provided a limited basis for ensuring accountability for the achievement of targets and milestones. OMB Quarterly Progress Reviews. GPRAMA also requires that OMB—with the support of the Performance Improvement Council (PIC)—review CAP goal progress quarterly with goal leaders. OMB instituted processes for reviewing progress on the goals each quarter, which involved the collection of data from goal leaders and the development of a memorandum for the OMB Director. However, the information included in these memorandums was not fully consistent with GPRAMA requirements. For example, GPRAMA requires OMB to identify strategies for improving the performance of goals at risk of not being met, but this was not consistently done. Without this information, OMB leadership and others may not be able to adequately track whether corrective actions are being taken, thereby limiting their ability to hold officials accountable for addressing identified risks and improving performance. Leading Practices for Reviews. At the CAP-goal level, goal leaders for two CAP goals and one sub-goal instituted in-person progress reviews with officials from contributing agencies that were broadly consistent with the full range of leading practices for reviews, such as leadership involvement in reviews of progress on identified goals and milestones, and rigorous follow-up on issues identified through these reviews. 
In these cases, goal managers reported there were positive effects on performance, accountability, and collaboration. In contrast, review processes used by other goal leaders did not consistently reflect the full range of leading practices. Effective review processes consistently engage leaders and agency officials in efforts to identify and address performance deficiencies, and to ensure accountability for commitments. Thus, not using them may result in missed opportunities to hold meaningful performance discussions, ensure accountability and oversight, and drive performance improvement. GAO is making seven recommendations to OMB to improve the reporting of performance information for CAP goals and ensure that CAP goal progress reviews meet GPRAMA requirements and reflect leading practices. OMB staff generally agreed to consider GAO's recommendations.
Education and Justice have an initiative underway to support local and statewide school discipline initiatives that build positive school climates while keeping students in school. Education has made school discipline reform a priority and recently launched its #RethinkDiscipline campaign to increase awareness about the detrimental impacts of exclusionary discipline. As part of this awareness campaign, Education has developed a webpage where administrators, educators, students, parents and community members can find data and resources to increase their awareness of the prevalence, impact, and legal implications of suspension and expulsion. This webpage contains, among other things, guidance for addressing the behavior needs of students with disabilities and a directory of federal school climate and discipline resources available to schools and districts. Of the approximately 87,000 public school students in D.C. in school year 2015-16, about 45 percent attended charter schools, while about 55 percent attended traditional public schools. Charter schools in D.C. serve students ranging from pre-kindergarten (pre-K) through grade 12. D.C. charter schools offer a range of focuses and specialized curricula, such as foreign language immersion or a focus on serving students who have not been successful in traditional public school settings. For the vast majority of public charter schools, students enroll through D.C.’s common lottery system, My School DC. D.C. charter schools, like all public schools, must comply with various laws governing the education of children, including those pertaining to individuals with disabilities, civil rights, and health and safety conditions. 
Further, in January 2014 guidance, Education and Justice stated that school districts that receive federal funds must not intentionally discriminate on the basis of race, color, or national origin, and must not implement any policies that have the effect of discriminating against students on the basis of race, color, or national origin. In addition, charter schools are to be held accountable for their financial and educational performance, including the testing requirements under the Elementary and Secondary Education Act of 1965 (ESEA), as amended. In D.C., all charter schools are nonprofit organizations and are required to be governed by a board of trustees. Members of the board of trustees are selected according to terms laid out in the school’s charter, and the board assumes a fiduciary role and sets the overall policy for the school. Some D.C. charter schools are part of larger charter school networks that have schools in other states, such as KIPP or BASIS Schools. While some charter schools are managed by charter management organizations—which may handle, for example, curriculum development, teacher recruitment and training, and operational support services for the charter school—other charter schools are single-school networks and operate without such an entity. In the 2015-16 school year, there were 114 charter schools in D.C., run by 65 different organizations. In the District, each charter school or group of charter schools functions as its own local educational agency (LEA), both for purposes of Title I of ESEA, and other purposes. As such, each charter school or group of charter schools is responsible for a wide range of functions associated with being an LEA, such as applying for certain federal grants and acquiring and maintaining facilities. PCSB officials told us that each charter LEA also has the autonomy to establish its own discipline policies and suspend and expel students. 
Officials from the State Board of Education also told us that, unlike D.C. traditional public schools, charter LEAs are not subject to the discipline policies and procedures in D.C. municipal regulations. As shown in figure 1, charter schools and traditional public schools in D.C. both serve a largely Black population. In the 2013-14 school year, 80 percent of the students in charter schools were Black, compared to 67 percent in traditional public schools in the District. Both charter and traditional public schools in D.C. serve much higher percentages of Black students compared to schools nationally, reflecting D.C.’s large Black population. With respect to students with disabilities, D.C. charter schools serve slightly lower percentages of these students than D.C. traditional public schools, but D.C. charter schools and D.C. traditional public schools both serve slightly higher percentages of these students than their national counterparts (see fig. 1). The Individuals with Disabilities Education Act (IDEA) contains specific procedures that govern the discipline of IDEA- eligible students with disabilities. In 2016, Education issued two pieces of “significant guidance” relevant to these students. The first emphasizes the importance of using IDEA’s individualized education program (IEP) and placement provisions to provide needed positive behavioral interventions and supports and other strategies to address the behavior of a student whose behavior impedes his or her learning or that of others. The guidance explains that these supports are especially important in light of research showing the detrimental effects of disciplinary suspensions, both short- and long-term, on students with disabilities. The second guidance document emphasizes that charter schools have the same obligation as other public schools to provide IDEA-eligible students with these supports, as well as all other protections under the law. 
In addition, Education noted in this guidance that it expects that a charter school authorizer will be able to ensure that any charter school that it authorizes complies with the terms of its charter, as well as applicable federal and state laws, including IDEA and other civil rights laws. Examples of covered disabilities under IDEA include intellectual disabilities, hearing or visual impairments, emotional disturbance, autism, and specific learning disabilities. In the District, traditional public schools and charter schools have different oversight structures. The Chancellor of D.C. Public Schools oversees the traditional public schools, which operate as a single LEA. In contrast, each charter school or group of charter schools operates as its own LEA. The District of Columbia School Reform Act of 1995 (School Reform Act) established PCSB, an independent agency which provides the primary oversight of D.C. charter schools. However, as in the states, general oversight of federal education program funding requirements—including requirements for serving students with disabilities and those related to federal civil rights—is the responsibility of the state educational agency, which in D.C. is OSSE. In addition, other D.C. education agencies are to coordinate with PCSB and interact with charter schools in various ways (see fig. 2). PCSB, as the sole chartering authority in D.C., has the power to approve, oversee, renew, and revoke charters. (See app. III for a list of PCSB’s responsibilities.) PCSB reviews applications for new charters, as described in appendix IV, and then is responsible for monitoring charter schools’ academic achievement, operations, and compliance with applicable laws. PCSB is also required to submit an annual report that includes information on charter renewals, revocations, and other actions related to public charter schools. (See app. V for more information on PCSB’s annual reporting.) 
The School Reform Act allows PCSB to grant up to 10 charters per year. Each charter remains in force for 15 years. After 15 years in operation, if a school desires to renew its charter, it must submit a renewal application requesting another 15-year term. Charters may be renewed an unlimited number of times. PCSB is also required to review each charter at least once every 5 years to determine whether the charter should be revoked. PCSB itself comprises seven unpaid board members who are appointed by the Mayor, with the advice and consent of the D.C. Council, and who are to be selected so that knowledge of specific areas related to charter schools is represented on the board. In addition, there are 37 employees who implement the board’s policies and oversee charter schools. PCSB’s main source of revenue is administrative fees from charter schools, and its main expenditures are for its personnel and for other costs related to its monitoring activities (see app. VI for more information on PCSB’s revenues and expenditures). Students who are suspended or expelled from any public school have certain rights. These rights are derived from a number of sources, including state and federal constitutional and statutory law and court decisions interpreting them. For instance, the U.S. Supreme Court has held that all students facing temporary suspension have interests qualifying for protection under the Due Process Clause of the U.S. Constitution and that due process requires, in connection with a suspension of 10 days or less, that a student be given oral or written notice of the charges, an explanation of the evidence the authorities have, and an opportunity to present his or her side of the story. Further, students with disabilities under IDEA have specific rights afforded under that statute. 
In particular, if a school proposes suspending a student served under IDEA for more than 10 days, the LEA, the student’s parents, and relevant members of a child’s IEP team must conduct a review to determine whether the behavior in question is a manifestation of the student’s disability. If so, the suspension cannot proceed. Students with disabilities also have rights to educational services while suspended.

District of Columbia Charter School Discipline at a Glance
- Discipline rates for charter schools overall dropped from school year 2011-12 through school year 2013-14, according to federal data, and continued to drop through school year 2015-16, according to D.C. data.
- According to federal data, in school year 2013-14:
  - Both charter and traditional public schools in D.C. had suspension rates that were about double the rates for schools nationally.
  - D.C. charter school suspension rates were slightly higher than D.C. traditional public school suspension rates overall.
  - Discipline rates remained disproportionately high for Black students and students with disabilities.
- Rates for individual charter schools varied widely in 2015-16, according to D.C. data.

Discipline rates—that is, out-of-school suspensions and expulsions—at D.C. charter schools dropped from school year 2011-12 through school year 2013-14, but remained disproportionately high for Black students and students with disabilities, as well as at some schools. The overall suspension rate for K-12 students dropped from 16.4 percent of all students to 13.4 percent, a 3 percentage point drop from school years 2011-12 to 2013-14, the most recent years for which national data are available (see fig. 3). The number of students suspended similarly dropped from 4,465 to 3,980 students over that same period. D.C.’s own data, which are collected annually and are more recent, also indicated that suspension rates for D.C. charter schools dropped from school year 2012-13 through school year 2015-16. (See app. 
VII for PCSB’s data on D.C. charter school discipline rates for school years 2012-13 through 2015-16). Expulsions for K-12 students were also down, with 188 students expelled in 2011-12 (a rate of 0.7 percent) compared to 133 students expelled in D.C. charter schools in 2013-14 (a rate of 0.4 percent), according to Education’s data (see fig. 3). D.C.’s data similarly show expulsion rates for D.C. charter schools dropping over the 4-year period from 2012-13 through 2015-16 (see app. VII). Expulsions for pre-K students remained very low—there were zero pre-K expulsions in 2013-14 compared to two expulsions in 2011-12—an expulsion rate of less than 0.01 percent. In both 2011-12 and 2013-14, both D.C. charter schools and traditional public schools had suspension rates that were about double the rates for schools nationally, according to Education’s data (see fig. 4). For example, in 2013-14, D.C. charter schools had about a 13 percent suspension rate, while the national rate for all charter schools was about 6 percent. This was also true for expulsions, with charter schools in D.C. reporting double the rate of charter schools nationally. Within D.C., charter schools’ suspension rates were slightly higher than those of D.C. traditional public schools. In the same year, D.C. charter schools expelled 133 K-12 students (a rate of 0.4 percent). D.C. charter school students who are expelled are not permitted to return to their charter school. They typically return to their traditional public school for the remainder of the school year but may re-enter the D.C. school lottery for a different charter school the next year. In contrast, D.C. traditional public schools generally do not expel students. Instead, D.C. traditional public schools generally use long-term suspensions (greater than 11 days) and temporarily transfer these students to an alternative middle and high school. 
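The rate arithmetic above can be reproduced with a short calculation. The suspension counts come from the report; the enrollment figures below are assumptions back-derived from the reported rates, so treat the sketch as illustrative rather than as GAO's published data:

```python
# Illustrative sketch of the rate arithmetic described above.
# Counts are taken from the report; enrollments are assumptions
# back-derived from the reported rates, not published figures.

def rate(count: int, enrolled: int) -> float:
    """Discipline rate as a percentage of enrolled students."""
    return 100.0 * count / enrolled

# Suspensions, D.C. charter K-12 (reported counts)
suspended_2011 = 4465   # reported rate: 16.4 percent
suspended_2013 = 3980   # reported rate: 13.4 percent

# Assumed enrollments, chosen to be consistent with the reported rates
enrolled_2011 = 27_200  # assumption: roughly 4465 / 0.164
enrolled_2013 = 29_700  # assumption: roughly 3980 / 0.134

drop_in_points = rate(suspended_2011, enrolled_2011) - rate(suspended_2013, enrolled_2013)
print(f"change: {drop_in_points:.1f} percentage points")
```

A drop measured in percentage points (16.4 minus 13.4) is a simple difference of rates, distinct from a percent change in the rate itself.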
Further, for both charter and traditional public schools in D.C., some stakeholders we spoke with had concerns about schools removing students from school without issuing them formal suspensions—a practice they said occurs in some schools. Both the Ombudsman for Public Education and officials from a legal advocacy group for children told us they had worked on cases in which students were sent home for part or all of a school day for behavior-related reasons without being formally suspended. The three D.C. charter schools we visited all engaged in these practices to some extent. For example, at one of the three schools, officials said a student may be asked to stay home, but not formally suspended, while the school investigates a behavior incident. Such full-day removals from school should be reported as suspensions under D.C. law, which defines out-of-school suspension as removing a student from school for disciplinary reasons for 1 school day or longer. At this school and the two others we visited, officials also said that when a behavioral incident occurs, they may send a student home for the remainder of the school day without issuing a formal suspension. Based on D.C.’s legal definition of suspension and PCSB reporting requirements, these partial day removals would only be tracked in the D.C. data if the student had a disability and was sent home for at least half of the school day. PCSB officials told us they require charter schools to follow the law in reporting suspensions and have discouraged schools from using partial day removals as a way to avoid formal suspensions. Frequent partial or full day removals from school can contribute to a significant amount of missed instruction time that is currently not fully captured, tracked, or monitored by PCSB or other D.C. education agencies, despite their stated goals of using data to reduce exclusionary discipline practices. Although suspension and expulsion rates at D.C. 
charter schools have dropped overall and across most student groups, rates varied widely among groups of students and among individual D.C. charter schools. Specifically, Black students and students with disabilities were disproportionately suspended and expelled from D.C. charter schools, according to Education’s 2013-14 data. As shown in figure 5, although Black students represented 80 percent of charter school enrollment, they represented 93 percent of those suspended and 92 percent of those expelled. Black boys, who represented 39 percent of enrolled students, were 56 percent of those suspended and 55 percent of those expelled over this period (not shown). Similarly, students with disabilities comprised 12 percent of D.C. charter school enrollment but represented 20 percent of those suspended and 28 percent of those expelled. Our analysis also found that the rates of suspension for Black students in D.C. charter schools were about six times higher than the rates for White students and the rates for students with disabilities were almost double the rates for students without disabilities, as shown in figure 6. In addition, male students in D.C. charter schools had suspension rates that were approximately 65 percent higher and expulsion rates that were two times higher than female students (not shown). The pattern of higher rates of discipline for Black students, students with disabilities (as shown in fig. 7), and male students (not shown) also occurred in D.C. traditional public schools, as well as charter and traditional public schools nationally. Further, our regression model found an association between some student characteristics and a higher incidence of suspensions. Specifically, schools that served upper grades (grade 6 and up), or served higher percentages of Black students or English Learners were associated with higher rates of suspensions. This effect existed for both types of public schools in D.C. 
but was larger for traditional public schools than charter schools. (See app. I for a full discussion of the regression analysis and app. II for the results.) When we looked at suspensions and expulsions for individual D.C. charter schools for school year 2015-16, we found wide variation in rates. In particular, 16 of the 105 D.C. charter schools suspended 20 percent or more of their students, with 5 schools suspending 30 percent or more of their students over the course of that school year. With respect to expulsions, 6 charter schools expelled more than 1 percent of their students, and these 6 schools accounted for over half of all charter school expulsions. (See fig. 8; see app. VII for a full list of D.C. charter schools and their school year 2015-16 discipline rates.) The schools with the highest suspension rates tended to serve middle school students (grades 5-8), while the schools with the highest expulsion rates varied in the grades they served. PCSB officials said that the D.C. charter schools with the highest rates either served high percentages of at-risk students or had strict discipline policies. According to these officials, some of these charter schools serve a high percentage of students with risk factors associated with behavioral problems—such as being formerly incarcerated or expelled—and these schools may struggle to manage their behavior while maintaining a safe school environment. One charter school we visited fell into this category and officials told us that their rates were high because they served many students who had been encouraged to leave their previous schools because of bad behavior. In addition, charter school officials in all three schools we visited said that managing the behavioral issues of some students with disabilities was one of the key discipline challenges they faced. Officials at these schools also said that many of their students have experienced trauma, which can manifest as behavior issues in the classroom. 
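The group comparisons in this section (suspension rates for Black students about six times those of White students, and rates for students with disabilities almost double those of students without) reduce to simple rate ratios. A minimal sketch, using hypothetical counts chosen only to produce ratios of roughly that magnitude:

```python
# Minimal sketch of the rate-ratio comparisons described above.
# The counts below are hypothetical, chosen to illustrate the
# computation; they are not PCSB or Education data.

def suspension_rate(suspended: int, enrolled: int) -> float:
    """Fraction of enrolled students suspended at least once."""
    return suspended / enrolled

groups = {  # (students suspended, students enrolled), all hypothetical
    "black":           (3700, 30_000),
    "white":           (40, 2_000),
    "with_disability": (900, 4_500),
    "no_disability":   (3100, 33_000),
}

rates = {g: suspension_rate(s, n) for g, (s, n) in groups.items()}

# Rate ratios, analogous to the "six times" and "almost double" comparisons
black_white_ratio = rates["black"] / rates["white"]
disability_ratio = rates["with_disability"] / rates["no_disability"]
print(f"Black/White rate ratio: {black_white_ratio:.1f}")
print(f"disability rate ratio: {disability_ratio:.1f}")
```

A rate ratio compares groups directly, so it can reveal disproportionality even when, as here, one group also makes up most of the enrollment.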
All of these schools had hired or planned to hire more mental health experts to better address these issues. However, officials at two schools said they have had challenges obtaining additional mental health resources and added that their staff could benefit from further training on working with traumatized students. With respect to school discipline policies, PCSB officials also said that many of the schools with high discipline rates are part of networks with reputations for strict policies. For example, they said that one network started with a “no excuses” discipline philosophy that encouraged punishment for minor offenses, although its approach to discipline is now changing. PCSB officials described another network that runs D.C. charter schools with high suspension rates as having an “elaborate” behavior management system, which uses 1-day suspensions as an anchor of its discipline system. The network does not see its high suspension rate as a problem because its policy does not keep students out of school for a long time and, according to school officials, helps correct student behavior. PCSB officials said they conducted an analysis which found no correlation between 1-day suspensions and withdrawal rates. However, several other stakeholders we interviewed told us that some parents have withdrawn their children from so-called “no excuses” charter schools out of frustration because of the multiple suspensions their child received. Further, the Ombudsman for Public Education said that her office had heard from some charter school parents who were frustrated with such discipline practices, but felt they had no option but to keep their child at the school for the remainder of the school year because the school lottery had closed. According to officials we interviewed, charter schools have made a concerted effort to reduce discipline incidents but continue to face challenges. According to PCSB and other stakeholders, most D.C. 
charter schools have been motivated to address discipline issues in their schools. At the three charter schools we visited, all of the officials said they took steps to reduce their suspension rates and create a more positive environment to reduce behavior problems. For example, one charter school official described an approach that incorporates empathy and problem solving skills to address discipline, while keeping the student in school. This school is part of an OSSE pilot program in which five D.C. traditional public and charter schools receive on-site technical assistance to implement such practices. An official at a second charter school said that they were using interventions and supports that emphasize positive behaviors to reduce disciplinary incidents. (See text box.) An official from the third school said that they have implemented an alternative to in-school suspensions when a student is disruptive, giving the student an opportunity to reflect and continue classwork in a separate environment. D.C. data for school year 2015-16 showed that discipline rates in these three schools had declined from 2014-15, although suspension rates at all three remained above the public charter school average, with one school’s suspension rate remaining above 30 percent.

Alternatives to Exclusionary Discipline
Restorative Justice Practices: An alternative disciplinary approach which uses non-punitive disciplinary responses that focus on repairing harm done to relationships and people. The aim is to teach students empathy and problem solving skills that can help prevent the occurrence of inappropriate behavior in the future. For example, officials at one school we interviewed described asking a student who stole a laptop to “restore” his community by writing a reflection paper, as well as attend Saturday school, instead of being suspended.
Positive Behavior Intervention and Supports: A schoolwide framework, which focuses on positive behavioral expectations. 
By teaching students what to do instead of what not to do, the school can focus on the preferred behaviors. At one school implementing this practice, officials told us they instruct teachers to note three positive behaviors for every negative behavior, for each student. School officials told us that implementing changes to their discipline practices and creating a more positive environment is time and resource intensive and that full implementation would take several years. In implementing these changes, schools officials told us they faced resistance from both staff and parents. Some teachers may not fully adhere to these new practices, finding it easier to remove students from class when they are misbehaving, according to charter school officials. Officials at two schools said they had recently hired new principals to more effectively implement their new discipline philosophies, and all three of the schools had hired more staff to focus on school climate issues. In addition, school officials told us that some parents protested the changes, preferring a strict discipline culture that they perceive as keeping their children safe. PCSB has increased its focus on school discipline in recent years and uses several mechanisms to oversee charter schools’ use of suspensions and expulsions (see fig. 9). Specifically, PCSB officials said that in 2012, PCSB began reviewing discipline data it collected from each charter school on a monthly basis. They told us they use the data to focus schools’ attention on suspension and expulsion rates and encourage schools to address high rates. In these monthly reviews, PCSB officials said they examine year-to-date suspension and expulsion rate averages, including averages by grade band (pre-K, elementary, etc.), and also identify outlier schools that have the highest suspension and expulsion rates, highest number of days students were suspended, and highest suspension rates for students with disabilities. 
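PCSB's monthly outlier review described above, which identifies the schools with the highest suspension and expulsion rates, amounts to a simple ranking over year-to-date data. An illustrative sketch, with invented school names, rates, and threshold logic (none of these reflect PCSB's actual data or tooling):

```python
# Hypothetical sketch of flagging outlier schools by year-to-date
# suspension rate, in the spirit of the monthly reviews described
# above. School names and rates are invented for illustration.

def flag_outliers(schools: dict[str, float], top_n: int = 2) -> list[str]:
    """Return the top_n schools by suspension rate, highest first."""
    return sorted(schools, key=schools.get, reverse=True)[:top_n]

ytd_suspension_rates = {  # percent of enrolled students, hypothetical
    "School A": 31.0,
    "School B": 8.5,
    "School C": 22.4,
    "School D": 4.1,
}

print(flag_outliers(ytd_suspension_rates))  # → ['School A', 'School C']
```

The same ranking could be run separately for each metric the text mentions (suspension rate, days suspended, rates for students with disabilities) to produce an outlier list per dimension.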
PCSB officials said they communicate with schools regularly about the patterns they see in their discipline data and that they request meetings with officials of outlier charter schools to discuss their schools’ rates and how they compare to other charter schools. PCSB officials told us they use this approach because charter school officials will usually choose to make changes when they are provided with this information. Officials from the three schools we interviewed said that PCSB has generally been active in sharing information and data, highlighting issues, and encouraging schools to reduce suspension and expulsion rates. Further, PCSB has offered training and professional development opportunities to charter school officials on topics related to school discipline, including conferences on classroom management and multiple quarterly meetings for school officials devoted to the topic. PCSB and OSSE work together to annually publish discipline data by school in Equity Reports, which PCSB officials said drive schools to lower their suspension and expulsion rates. The Equity Reports are also meant to provide school leadership, school boards, families, and the community with information that will allow them to compare data on both charter schools and traditional public schools in D.C. See figure 10 for an excerpt from one charter school’s 2014-15 Equity Report. PCSB officials told us that they also review schools’ discipline policies during the charter application and renewal processes. These officials said that they use the application process to shape new charter schools’ discipline policies. According to PCSB officials and application guidance, PCSB is unlikely to approve an application whose discipline policy will result in frequent removal of students from the school (see text box). PCSB officials said that this process is their opportunity to ensure that charter school policies limit the use of suspensions and expulsions.
For example, in a May 2015 letter explaining the reasons for denying a new charter school application, PCSB noted that the “demanding behavioral program may result in high percentages of students being suspended or expelled and the founding team has not developed realistic supports to meet the needs of all learners. When asked about how the school will support students who struggle with strict behavior expectations, the founding group…did not provide a cohesive and deliberate approach.”

Excerpts from PCSB Policy Documents

“Discipline plans that provide for expulsion for minor offenses such as possession of tobacco or insubordination will not be approved.” –PCSB Discipline Plan Policy

“PCSB is unlikely to approve applications for schools with discipline policies that rely on school exclusion to manage student behavior and/or that are likely to result in high rates of suspensions and expulsions.” –PCSB 2016 Charter Application Guidelines

“PCSB expects that schools will only expel students for federally-recognized reasons.” –PCSB 2016 Charter Application Guidelines

With respect to charter renewals, which occur every 15 years, PCSB officials said that recently they have begun to use this process to, among other things, renegotiate parts of schools’ discipline policies. Officials said that if PCSB and the charter school board fail to reach agreement, the school’s funding will cease, which according to PCSB officials provides a strong incentive for charter schools to comply. In addition, PCSB conducts higher-level reviews of schools’ discipline policies on an annual basis. Officials told us that these reviews are meant to confirm that schools’ discipline policies include three key elements: due process and appeals procedures, clearly outlined reasons for suspensions and expulsions, and adherence to federal protections for students with disabilities in the discipline process.
(See text box below for the full list of discipline policy elements PCSB requires of charter schools.) If a school’s policy does not include one or more of these elements, PCSB officials said they will give the school 2 weeks to revise the policy and, if the school fails to fix the issue, PCSB will send a “notice of concern”—a formal, written notification alerting a school of issues that need to be addressed. If the school still fails to fix the issue, PCSB will send a charter warning letter indicating that the charter could be subject to revocation. PCSB officials told us they have never had to send a warning letter for issues related to discipline policies.

Required Elements of Charter School Discipline Policies

Parent, student, and staff rights and responsibilities;
Clear explanation of infractions, what specific acts are not tolerated in the school, tiered consequences and interventions, and a clearly outlined basis for suspensions and expulsions;
Due process and appeals procedures;
Provisions to ensure that all rules are enforceable and applied consistently by all staff; and
All Individuals with Disabilities Education Act (IDEA) guidelines and requirements, which concern services for students with disabilities.

Finally, PCSB also monitors parent and stakeholder complaints. PCSB officials told us that if they notice a trend or pattern in these complaints they will follow up with schools. They said that they have never had a pattern of complaints against a school related to suspensions or expulsions. While other D.C. education agencies also have oversight responsibility with respect to charter schools (see fig. 2), plans to further bring down discipline rates have been hampered by agencies’ lack of consensus regarding roles and responsibilities, and by one agency’s stated lack of clarity around its own authority.
In interviews, the Deputy Mayor for Education (DME), OSSE, and PCSB officials described different views regarding agency roles in overseeing charter schools and providing guidance and training. In particular, officials described differing views on the appropriate scope of PCSB’s role with respect to charter schools. For example, the DME—whose role is to oversee District-wide education strategy—said that PCSB could issue further guidance on certain discipline-related topics and place additional requirements on schools. In contrast, officials from PCSB—the entity charged with overseeing charter schools—told us that providing additional or more specific guidance would be inconsistent with their role as authorizer. PCSB officials said that they interpret certain provisions of the School Reform Act as providing “a strong legal bulwark against the District government, including, mandating school disciplinary processes,” thereby limiting the actions that D.C. agencies, including PCSB, may take. In addition, officials from these three agencies differed in their views regarding charter schools’ needs with respect to discipline, including whether charter schools needed additional guidance on due process procedures, training, or other resources. Further, OSSE officials said that the agency’s current view of its authority to regulate charter schools on discipline differs from previous administrations’ interests in that area, and that they still lacked clarity on their authority in some areas. Specifically, in a 2014 report, OSSE—the agency with general oversight of federal education funding requirements—stated its intent to issue regulations applying to both D.C. traditional public and charter schools that would address high discipline rates. In the report, OSSE said that this effort would address potential discipline disparities across D.C. charter and traditional public schools and help ensure that all public school students in the District are treated fairly.
However, OSSE never issued the regulations, and OSSE officials told us in 2016 that, in contrast with previous administrations’ interests, the current administration does not interpret the School Reform Act as providing them with clear authority to issue such regulations. OSSE officials also stated that the complexity of the D.C. regulatory framework, combined with the fact that some regulations were promulgated prior to the creation of D.C. charter schools, resulted in a lack of clarity around their oversight authority over charter schools in some areas. OSSE did, however, issue non-regulatory guidance in June 2016, which provides high-level descriptions of federal and D.C. laws relating to school discipline and cites some leading practices. It is unclear whether this guidance will lead to any changes in charter schools’ discipline rates. Subsequent to releasing this guidance, OSSE released a new report in 2016, concluding that further progress is still needed on discipline policy, implementation, and disproportionality across all D.C. public schools. Despite these challenges, officials from these three agencies and D.C. traditional public schools have collaborated on a key effort to address discipline rates by publishing the Equity Reports for each charter and traditional public school in the District. In addition, officials told us that the other key D.C. education agencies reviewed OSSE’s draft non-regulatory guidance on discipline, and that officials from these agencies also work together along with officials from other D.C. agencies that support families and young people through regular meetings convened by the DME. OSSE and PCSB officials said that they also promote and support each other’s training programs on classroom management and discipline. Further, they work together on education-related issues as participants on a number of city-wide task forces and other collaborative efforts.
However, while some of those task forces focus on issues related to discipline, such as bullying or truancy, none specifically address discipline rates or disparities in a comprehensive manner. Leading practices on interagency collaboration state that to achieve a common outcome, agencies should agree on roles and responsibilities and create mutually reinforcing or joint strategies that align the agencies’ activities, processes, and resources. Similarly, standards for internal control state that to achieve an entity’s objectives, management should establish an organizational structure and assign responsibilities. The agencies’ differing views on roles and responsibilities, and OSSE’s stated lack of clarity on its authority around the issue of discipline in charter schools, make it difficult for them to leverage resources and the collective expertise of other agencies in the District to develop plans and strategies to address D.C.’s high discipline rates. Absent such a plan, as well as explicitly stated roles and responsibilities, charter schools may face challenges in continuing to bring down rates.

PCSB and the District’s charter schools have made notable progress in bringing down discipline rates in recent years. However, rates remain troublingly high, at twice the national rate for school year 2013-14—the most recent year for which nationally comparable data are available—and particularly for certain schools and for Black students and students with disabilities. PCSB has taken steps to address this issue by using school-level discipline data to focus schools’ attention on reducing reliance on those practices that remove students from school. However, some schools are removing students from school for partial or even full school days without fully reflecting these actions in the discipline data or consistently documenting them.
As a result, PCSB does not have a clear sense of how widely these practices are used or what strategies might help it best address the problem. PCSB, the DME, and OSSE all play key roles in charter school oversight. While these agencies communicate regularly and have worked together in a number of areas, including making data on school discipline across all District schools more available through the Equity Reports, we observed a lack of consensus around their roles and responsibilities, and OSSE’s view of its authority to regulate charter schools on discipline differs from previous administrations’ interests in that area. This has contributed to inertia around creating and implementing a coordinated plan that could help further address high discipline rates. Absent such a plan, continued progress in bringing down discipline rates may be slowed.

1. PCSB should further explore ways to more accurately measure behavior-related time out of school—both partial and full day removals—not captured under current reporting procedures.

2. The D.C. Mayor should direct the DME and OSSE to deepen collaboration with PCSB and other relevant stakeholders, such as charter school LEAs, to develop a coordinated plan to continue progress in reducing discipline rates and, as part of this process, make explicit their respective roles, responsibilities, and authorities with regard to discipline in D.C. charter schools. This plan could include developing additional guidance, training, or resources, consistent with the unique autonomy of charter schools.

We provided a draft of this report to PCSB, the D.C. Mayor’s Office, and Education for review and comment. PCSB’s written comments, which also include technical comments, are reproduced in appendix VIII. OSSE provided written comments on behalf of the D.C. Mayor’s Office, which are reproduced in appendix IX. Education and the D.C. State Board of Education provided technical comments on the report.
In each case, we incorporated their comments into the report, as appropriate. In its written comments, PCSB said that by not focusing our analysis on D.C. data, we reached inaccurate conclusions. Specifically, it stated that by focusing on CRDC data, which are most recently available for school years 2011-12 and 2013-14, the report failed to acknowledge the more recent reductions in D.C. charter schools’ suspension and expulsion rates shown in D.C.’s own data for school years 2014-15 and 2015-16. PCSB also commented on our analysis that used CRDC data to compare D.C. charter school rates to those of charter schools nationally. These data showed that suspension rates at D.C. charter schools were double the national rates for school years 2011-12 and 2013-14. In its comments, PCSB presented a table with its own data for all 4 of these school years and stated that its data show that D.C. charter schools’ discipline rates have moved closer to national rates. PCSB asked that our report prominently incorporate D.C.’s 2014-15 and 2015-16 data throughout. As stated in our draft report, the most recent available PCSB data at the time we did our work were for school year 2014-15, which we presented in selected analyses where appropriate throughout the draft report. We have updated these analyses with PCSB’s recently available 2015-16 data. PCSB’s 2015-16 data continue to show modest declines in D.C. charter school discipline rates, compared to previous years of PCSB data. However, as stated in our draft report, PCSB’s data are not comparable to other states’ data collected by CRDC. Further, because school year 2013-14 is the most recent year for which nationally comparable CRDC data are available, it is not possible to know whether D.C. charter school rates have moved closer to national rates, which may have also changed since 2013-14. PCSB agreed that D.C.
charter schools’ discipline rates remain higher than PCSB would like, and that they remain disproportionate with respect to race and disability status. PCSB also stated that steady progress seen in D.C. charter schools is the right way to reduce discipline. We applaud PCSB’s efforts in steadily bringing down rates, as noted in the draft report, and continue to believe that a coordinated multi-agency plan is needed to continue this progress. In addition, PCSB said in its comments that our draft report failed to acknowledge the autonomy granted to D.C. charter schools under the School Reform Act, which it interprets as preventing any D.C. agency from mandating charter school disciplinary processes. We believe that the report clearly states that each charter LEA has the autonomy to establish its own discipline policies and suspend and expel students, and that unlike D.C. traditional public schools, charter LEAs are not subject to the discipline policies and procedures in D.C. municipal regulations. However, as also stated in the draft report, PCSB exercised its authority by putting some requirements and oversight mechanisms in place for charter schools, including regularly reviewing charter schools’ discipline data and policies. Moreover, Education’s recent guidance highlights its expectation that charter school authorizers ensure that the schools they authorize comply with federal and state laws, including those pertaining to the discipline of students with disabilities. PCSB did not comment on our first recommendation (that PCSB further explore ways to more accurately measure behavior-related time out of school—both partial and full day removals—not captured under current reporting procedures). However, PCSB said that a related statement in the report—that it has no official policy on partial day removals of students for disciplinary reasons—was erroneous. We have removed this statement in the final report. 
PCSB and OSSE both disagreed with our characterization of their collaboration around charter school discipline, but both indicated that they look forward to deepening their collaboration to continue progress made in reducing discipline rates. We have added additional information to our report to more fully reflect new information both entities provided in their comments regarding the level of collaboration between these two entities. Finally, OSSE, in commenting on the report, agreed that there is some ambiguity around its authority with respect to D.C. charter schools. Specifically, OSSE stated that the complex D.C. regulatory framework is unclear regarding oversight authority in some instances. As such, the agency’s current view of its authority to regulate charter schools on discipline differs from previous administrations’ interests in that area. In particular, OSSE stated that its current conclusion is that the D.C. code does not provide OSSE clear authority to regulate charter schools with respect to discipline. Such views about D.C.’s regulatory framework illustrate the importance of clarifying agency roles and responsibilities with respect to D.C. charter school discipline. In light of PCSB’s and OSSE’s comments around collaboration and their respective authorities around discipline, we modified our second recommendation slightly. We now specify that these agencies should deepen their collaboration in order to continue progress in reducing discipline rates, and that in doing so they should make explicit their respective oversight authorities, in addition to roles and responsibilities. We also specify that the multi-agency plan to continue progress in reducing discipline rates should be consistent with the unique autonomy of charter schools. We are sending copies of this report to the D.C. Mayor, the Chairman and Executive Director of the Public Charter School Board, and the U.S. Secretary of Education.
In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff should have any questions about this report, please contact me at 617-788-0580 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix X. The objectives of this study were to examine: (1) what is known about suspensions and expulsions in District of Columbia (D.C. or District) charter schools, and (2) to what extent the Public Charter School Board (PCSB) oversees the use of suspensions and expulsions at charter schools. To address these objectives, we used a variety of methods, including analyzing federal and D.C. data; reviewing published reports and monitoring documentation from PCSB and other D.C. agencies; and interviewing officials from these agencies, representatives from associations and advocacy groups, and officials from three charter schools. To determine the out-of-school suspension and expulsion rates at charter and traditional public schools, both in D.C. and nationally, we analyzed federal data from the U.S. Department of Education’s (Education) Civil Rights Data Collection (CRDC) for school years 2011-12 and 2013-14, the 2 most recent years available. The CRDC is a comprehensive source of data on suspensions and expulsions that collects comparable data across the nation’s public school districts, schools, and students. As such, we used the CRDC to make comparisons between D.C. charter schools, D.C. traditional public schools, and traditional and charter schools nationally. PCSB also collects information on suspensions and expulsions in D.C. charter schools, but these data are not comparable to CRDC data on schools and students in other states.
At the time we did our work PCSB had data that were more recent than data available through the CRDC (school year 2014-15 for PCSB versus 2013-14 for CRDC). We therefore chose to present PCSB’s data in selected analyses in the report, while also being careful not to make comparisons between the PCSB and CRDC data. In its written comments on a draft of this report, PCSB noted the recent availability of data for the 2015-16 school year. We updated our analyses accordingly to provide the most current picture of D.C. charter school discipline rates. Doing so did not materially change the findings in this report. The Civil Rights Data Collection is a biennial survey that is mandatory for every school and district in the United States. Conducted by Education’s Office for Civil Rights, the survey collects data on the nation’s public schools, including student characteristics and enrollment; educational and course offerings; disciplinary actions; and school environment, such as incidences of bullying. From school years 2000 through 2010, the CRDC collected data from a representative sample of schools, but in school years 2011-12 and 2013-14, the CRDC collected data from every public school in the nation (approximately 17,000 school districts, 96,000 schools, and 50 million students in school year 2013-14). The dataset includes traditional public schools (pre-K through 12th grade), alternative schools, magnet schools, and charter schools. For school years 2011-12 and 2013-14, the most recent years of data available, we calculated aggregate discipline rates. To determine the extent to which discipline rates varied by student demographic groups and school type, we calculated aggregate discipline rates by student demographics and for all charter and traditional public schools in D.C. Further, we calculated discipline rates for each school in D.C.—both charter and traditional—to determine the extent of variation in rates by school and school type. 
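The per-school and aggregate rate calculations described above can be sketched as follows. This is an illustration only: the school records and counts below are hypothetical, and the actual CRDC files use their own variable names and carry many more fields.

```python
# Sketch of the discipline-rate calculation described above.
# All school records and counts are hypothetical.

# (school type, enrollment, students with one or more out-of-school suspensions)
schools = [
    ("charter", 400, 40),
    ("charter", 300, 60),
    ("traditional", 650, 20),
    ("traditional", 550, 15),
]

# Per-school rate: suspended students divided by enrollment
per_school_rates = [(kind, susp / enroll) for kind, enroll, susp in schools]

# Aggregate rate by school type: sum the counts first, then divide,
# so that larger schools carry proportionally more weight than small ones
def aggregate_rate(kind):
    susp = sum(s for k, n, s in schools if k == kind)
    enroll = sum(n for k, n, s in schools if k == kind)
    return susp / enroll

for kind in ("charter", "traditional"):
    print(f"{kind}: {aggregate_rate(kind):.1%}")
```

Summing counts before dividing (rather than averaging per-school rates) is what makes the result an aggregate rate for the group of schools as a whole.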
We also compared the aggregate rates in D.C. charter and traditional public schools to rates for charter and traditional public schools nationally. We analyzed the following discipline and demographic variables:

Total out-of-school suspensions, calculated by combining the CRDC variables:
Students receiving only one out-of-school suspension
Students receiving more than one out-of-school suspension

Total expulsions, calculated by combining the CRDC variables:
Expulsions with educational services
Expulsions without educational services

The CRDC has seven race and ethnicity variables, which we combined into five categories, as shown in table 1. We also analyzed rates for students identified as English Learners and students with a disability. Our analysis of students with disabilities included only those students served under the Individuals with Disabilities Education Act. We excluded Section 504 students because the CRDC does not collect discipline data for Section 504 students broken out by race and ethnicity. In school year 2013-14, students only receiving services under Section 504 represented 8 percent of public school students with disabilities in D.C. To analyze the poverty levels of schools with different suspension and expulsion rates, we matched schools in both years of the CRDC with data on free or reduced-price lunch (FRPL) eligibility from the Common Core of Data, which is administered by Education’s National Center for Education Statistics and which annually collects non-fiscal data about all public schools in the nation. These data are supplied by state education agency officials for their schools and school districts. However, we determined that the school year 2013-14 FRPL data for D.C. were not sufficiently reliable for our purposes.
These FRPL data differed dramatically from the school year 2011-12 data, and when we asked officials from the Office of the State Superintendent of Education (OSSE), the state educational agency responsible for reporting these data, to corroborate them, they reported having no confidence in the data they had reported. Therefore, we did not use these FRPL data in any of our analyses. To assess the reliability of the federal data used in this report, we reviewed technical documentation about the survey and dataset and interviewed officials from Education’s Office for Civil Rights about their procedures for checking the data. We also conducted electronic testing and logic checks of our analysis. Based on these efforts, we determined that the CRDC data were sufficiently reliable for our purposes. We used the version of the 2013-14 CRDC data that was publicly available as of September 30, 2016, because it corrected errors in the original data previously submitted by Florida. We also analyzed the data using a generalized linear regression model to determine (1) whether and the extent to which certain school-level characteristics are associated with a higher incidence of suspensions and (2) whether and the extent to which an association exists between high incidences of suspension and school type in the District (charter schools versus traditional public schools). For our regression model, we used the CRDC for school year 2013-14, limiting our analysis to suspensions because expulsions are a rare event and therefore difficult to model. We included demographic variables in our model that Education’s Office for Civil Rights has identified as key drivers of suspension.
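To illustrate the incidence-rate-ratio idea behind this kind of model (this is a sketch with hypothetical counts, not the actual model, code, or data used for this analysis): a Poisson regression of suspension counts with log(enrollment) as an exposure offset estimates an incidence rate ratio for each covariate, and in the simplest case of a single binary covariate with no other adjusters, the maximum-likelihood incidence rate ratio reduces to the ratio of the two groups' crude rates.

```python
# Illustrative sketch only — hypothetical counts, not this report's data.
# A Poisson GLM with an exposure offset models
#   log(mu_i) = b0 + b1 * charter_i + log(enrollment_i),
# so exp(b1) is the incidence rate ratio (IRR) for charter status.
# With one binary covariate and no adjusters, the MLE of exp(b1) is
# simply the ratio of the two groups' crude suspension rates.
import math

# (students suspended, enrollment, charter indicator) for hypothetical schools
schools = [
    (40, 400, 1), (60, 300, 1),   # charter
    (20, 650, 0), (15, 550, 0),   # traditional
]

def crude_rate(flag):
    susp = sum(y for y, n, c in schools if c == flag)
    enroll = sum(n for y, n, c in schools if c == flag)
    return susp / enroll

irr = crude_rate(1) / crude_rate(0)   # estimated exp(b1)
b1 = math.log(irr)

print(f"incidence rate ratio = {irr:.2f}")
```

In the full model, the adjustment covariates (percent male, percent Black, and so on) would shift this estimate; an IRR above 1 indicates a positive association between the covariate and suspension incidence, and an IRR below 1 a negative one, which is the directional summary presented in appendix II.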
We used these variables in our model as follows:

Outcome: number of students with one or more out-of-school suspensions
Independent variable: Charter school status (Yes/No)
Adjustment variables: Percent of student population that is male, Black, students with disabilities, and English Learners; whether school offers upper grades (grades 6 and above) (Yes/No)

Some variables that were thought to be important were not included in our model due to estimation or reliability issues. Specifically, we excluded: FRPL in school year 2013-14, due to reliability issues as indicated by OSSE, the agency responsible for reporting the data; percent of students within a school who are Hispanic, due to collinearity with English Learners; and alternative school designation, due to lack of variability and sparseness in data. We used the number of students enrolled as an exposure variable to account for different school sizes. Our analysis included K-12 schools. We excluded pre-Kindergarten (pre-K) schools because pre-K suspensions are rare and reported differently in the data. This resulted in dropping 8 schools that offered only pre-K. Additionally, for schools that offered both pre-K and later grades, we excluded out-of-school suspension and student counts for pre-K students. We also excluded 5 magnet schools because they are too dissimilar to the other schools in our model, since students are admitted to such schools based on the merits of their application. With these schools excluded, the 2013-14 CRDC data resulted in 191 D.C. public schools in our analysis file, of which 105 were traditional and 86 were charter. All models are subject to limitations. For this model, the limitations included the following:

The data we analyzed are at the school level, rather than the student level.
Ideally, data would be analyzed at the student level in order to describe the association between a charter versus traditional public school student’s suspension rate, controlling for characteristics of the individual students suspended, such as gender, race/ethnicity, and grade level. Instead, the school-level nature of the CRDC data limited what we could ascribe to the association between these schools’ suspension incidence, controlling for the characteristics of the entire school’s population, such as percent of students who are male, Black, etc.

Some variables that may be related to out-of-school suspensions are not available in the data. For example, in this context, it could be that parent education or household type (single- versus multiple-headed household) could be related to student behaviors, such as those that lead to out-of-school suspensions.

These data were not gathered through a randomized control trial in which students would be randomized to attend either a traditional or a charter school. Although there is some randomness inherent in the lottery for oversubscribed charter schools, this is not systematic and, for students who were offered the option to attend charter schools, the students’ families decide whether to accept or not.

Typically, a generalized linear regression model provides an estimated incidence rate ratio, where a value greater than 1 indicates a higher or positive association, in this case, between suspensions and the variable of interest, such as being a charter school or having a higher percentage of Black students. An estimated incidence rate ratio less than 1 indicates a lower incidence of suspensions when a factor is present. Given the limitations of our model as described above, in appendix II we present a general summary of association by providing the direction, rather than an estimated rate (incidence) of suspensions, of charter versus traditional public schools in the District. To analyze more recent data on D.C.
charter schools, we obtained aggregate data from PCSB for school years 2012-13 through 2015-16. In addition, we obtained more detailed school-level data on charter school suspensions and expulsions from PCSB for school years 2013-14, 2014-15, and 2015-16, which are part of data that the District collects annually on both charter schools and traditional public schools. The data included published reports on discipline from PCSB, and Equity Reports for each school. Equity Reports contain data on: total enrollment; Limited English Proficiency; suspension rate by student subgroup; and overall expulsion rate. In addition, we obtained published reports on discipline, as well as data from OSSE on the numbers of students who received long-term suspensions and expulsions in school year 2014-15 and were transferred to an alternate school. To assess the reliability of the D.C. data, we reviewed documentation, interviewed relevant officials from PCSB and OSSE, and conducted logic checks. Based on these efforts, we determined that these data were sufficiently reliable for our purposes. D.C.’s data, however, are not comparable to Education’s data because they do not distinguish between pre-K and K-12 rates and because not all schools were captured in Education’s data. To determine the extent to which PCSB oversees suspensions and expulsions at charter schools, we reviewed documentation and guidance from PCSB, as well as federal and District laws and regulations. We also reviewed documentation from other D.C. education agencies that have a role in overseeing D.C. charter schools and reviewed discipline guidance from the U.S. Departments of Education and Justice. In addition, we interviewed PCSB officials and officials at other D.C. agencies that have oversight of charter schools. These other D.C. agencies were the Deputy Mayor for Education; the State Board of Education, including the Ombudsman for Public Education and the Chief Student Advocate; and the D.C.
Office of the Inspector General. We evaluated PCSB’s oversight of charter school discipline against federal standards for internal control for communicating quality information to external parties and establishing structure, responsibility, and authority, and evaluated D.C. education agencies’ collaboration on this issue against leading practices for interagency collaboration. We also reviewed selected research studies that provided further context and insight into school discipline in charter schools. To obtain additional context and insights, we selected and interviewed researchers and officials from advocacy groups and associations with different perspectives on charter schools and discipline. The researchers and officials we interviewed were located at The Center for Civil Rights Remedies at the Civil Rights Project, The Center on Reinventing Public Education, The Children’s Law Center, The Council for Court Excellence, The D.C. Association of Chartered Public Schools, D.C. Lawyers for Youth, The Dignity in Schools Campaign, and The National Association of Charter School Authorizers. In order to obtain the views of charter school officials with diverse perspectives on discipline policies and practices, reasons for high discipline rates, and experiences with PCSB oversight, we interviewed school and local educational agency (LEA) officials at three D.C. charter schools. We selected two schools that had high suspension and/or expulsion rates in school year 2014-15, as well as one school with formerly high discipline rates, according to D.C. data. 
In making our selections, we also took into consideration: the number of LEA campuses in D.C., to get perspectives from large and small charter school networks; grade levels served, because both research and stakeholders indicated that discipline rates are higher for middle and high school students than elementary school students; school location and demographics, to ensure that we spoke to schools serving similar populations of students; and rank in the 2014 Performance Management Framework (PMF), when available, to get perspectives from higher and lower performing schools. The PMF is PCSB’s rating system to measure school quality and includes three tiers. The highest performing schools are ranked as Tier 1, while the lowest are ranked Tier 3. We conducted two interviews each for the three schools: we interviewed LEA staff from the school’s central office and school-based staff including the principal and other administrators with responsibility for implementing discipline policy. We asked officials to describe their school’s discipline policies and practices, how and why they have changed over the years, and discipline challenges they are facing. We also reviewed their discipline policies in their most recent student handbook, as well as other relevant documentation, such as annual reports, renewal reports, and Equity Reports which capture schools’ discipline data. Because we selected the schools judgmentally, we cannot generalize our findings about their policies, practices, and challenges. We conducted this performance audit from November 2015 to February 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
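The incidence rate ratio described in the methodology above can be illustrated with a minimal numeric sketch. The counts below are hypothetical (not GAO data); in the simple two-group case, the IRR a generalized linear model would estimate for a charter-school indicator reduces to the ratio of the two groups' suspension rates.

```python
# Hypothetical suspension counts and enrollments (not actual GAO data).
charter = {"suspensions": 130, "students": 1000}      # 13 suspensions per 100 students
traditional = {"suspensions": 100, "students": 1000}  # 10 suspensions per 100 students

rate_charter = charter["suspensions"] / charter["students"]
rate_traditional = traditional["suspensions"] / traditional["students"]

# Incidence rate ratio (IRR) for the "charter" indicator: in this simple
# two-group case it is just the ratio of the two rates.
irr = rate_charter / rate_traditional
print(round(irr, 2))  # 1.3 -> greater than 1, i.e., a higher suspension incidence
```

An IRR below 1 would instead indicate a lower suspension incidence when the factor is present, matching the interpretation given above.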
Using the school year 2013-14 CRDC data, we conducted a generalized linear regression model examining the association between District of Columbia (D.C.) schools’ out-of-school suspensions and various school-level characteristics. For further discussion of our methodology for this analysis, see appendix I. Our regression model found an association between certain school demographic characteristics and suspension, regardless of type of school (charter versus traditional public schools). Specifically, serving the upper grades (grades 6 and up) or having higher percentages of Black students or English Learners was associated with a higher incidence of suspensions. Further, our model showed that D.C. charter schools overall were associated with a higher incidence of suspensions than D.C. traditional public schools. However, our model also examined the interactions between school type and school demographic variables and found that the association between school type and suspension rate varied across several demographic variables. Specifically, while serving upper grades, higher percentages of Black students, or higher percentages of English Learners is generally associated with a higher incidence of suspensions, this effect was smaller for charter schools than for traditional public schools. These relationships are shown in table 3, which presents coefficients from our model, where positive means that a particular variable was significantly associated with an increase in the suspension rate at the 0.05 level and negative indicates a decrease in the suspension rate. Insignificant indicates the variable is not significantly associated with suspensions at the 0.05 level. Our model did not find an association between gender or the percentage of students with disabilities in a school and increased suspension rates. The absence of an association here may be due to the way that federal data are collected and reported at the school, rather than student, level. 
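The interaction finding above, in which a demographic variable's association with suspensions is smaller for charter schools than for traditional public schools, can be sketched on the log scale of a generalized linear model. The coefficients below are hypothetical and purely illustrative; only their signs mirror the pattern described.

```python
import math

# Hypothetical log-scale coefficients (illustrative only, not GAO estimates):
b0 = -2.5          # intercept
b_charter = 0.3    # charter-school indicator
b_black = 1.2      # proportion of Black students
b_inter = -0.5     # charter x proportion-Black interaction (negative: attenuation)

def expected_suspension_rate(charter, pct_black):
    """Expected suspension incidence under a log-link model."""
    log_mu = b0 + b_charter * charter + b_black * pct_black + b_inter * charter * pct_black
    return math.exp(log_mu)

# Effect (as an incidence rate ratio) of raising the proportion of
# Black students from 0.5 to 0.9, by school type:
irr_traditional = expected_suspension_rate(0, 0.9) / expected_suspension_rate(0, 0.5)
irr_charter = expected_suspension_rate(1, 0.9) / expected_suspension_rate(1, 0.5)
print(round(irr_traditional, 2), round(irr_charter, 2))
```

With these illustrative values the demographic shift multiplies traditional schools' incidence by exp(1.2 × 0.4) ≈ 1.62 but charter schools' by exp((1.2 − 0.5) × 0.4) ≈ 1.32: both above 1, but the charter effect is smaller, the same direction of attenuation reported in table 3.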
The District of Columbia School Reform Act of 1995 (School Reform Act) established the Public Charter School Board (PCSB) as an eligible chartering authority with specific powers and duties. Under the School Reform Act, PCSB has specific responsibilities with regard to reviewing petitions (applications) for new charters, monitoring charter school operations, reviewing charter renewal applications, and revoking charters. Table 4 provides the detailed requirements for these activities as specified in the School Reform Act. As required by the School Reform Act, PCSB must issue annual reports and financial statement audits. (See app. V for information on PCSB’s 2016 annual report.) The Public Charter School Board (PCSB) reviews all applications for new charter schools in the District of Columbia (D.C.), which can be submitted by parents, educators, nonprofit organizations, or other groups. With some exceptions, applicants must generally adhere to the same guidance and must meet PCSB’s standards for approval. (See table 5.) PCSB provides application instructions and sample documents on the agency’s website. Once PCSB receives an application, the review process generally takes 3 months. (For example, see table 6 for PCSB’s fall 2016 charter application timeline.) PCSB’s charter school application review is a four-part process including written applications, site visits (if applicable), interviews, and public hearings (see table 7). Following the application review, PCSB votes on each charter application at a public meeting. Applications for a charter school follow a standard format and are required to include specific elements. (See text box.) In addition to the written applications, site visits, interviews, and public hearings, PCSB evaluates charter school applicants against established criteria (see table 8). 
PCSB may approve any application if it determines that the application (1) meets the legal requirements; (2) agrees to any condition or requirement set forth by the authorizer; and (3) has the ability to meet the educational objectives outlined in the application. If PCSB does not approve an application, it must provide written notice to the applicant explaining why the application was not approved. Based on all components of the application process, the PCSB Board votes on each charter school application at a public meeting. There are three possible outcomes for an application: 1. Full approval: Applicant has met all of the requirements. 2. Conditional approval/approval with conditions: Applicant is approved pending satisfaction of all remaining requirements, at which point it receives full approval. 3. Denial: Applicant does not meet all of the requirements and no further consideration is given to the application. Such applicants may address the shortcomings and reapply in a future cycle, though not in the same 12-month period.

Annual Reporting Requirement: A list of the dates and places of each meeting of PCSB during the year preceding the report.
Content shown in report: December 14, 2015; January 27, 2016; February 10, 2016 (Special Meeting).

Annual Reporting Requirement: The number of petitions received for the conversion of an existing school to a public charter school and for the creation of a new charter school.
Content shown in report: Four public charter school proposals received: 1. Sustainable Futures; 2. Interactive Academy; 3. Pathways in Education; 4. The Adult Career Technical Education.

Annual Reporting Requirement: The number of petitions that were approved and the number that were denied.
Content shown in report: One approved (Sustainable Futures); one denied (Interactive Academy); two withdrawn (Pathways in Education and The Adult Career Technical Education).

Annual Reporting Requirement: Summary of the reasons for which such petitions were denied.
Content shown in report: The Board denied the application of Interactive Academy for three reasons: (1) capacity of the founding group; (2) insufficient development of the plan for supporting students with disabilities; and (3) insufficient evidence of the success of the founding group in driving academic achievement.

Annual Reporting Requirement: A description of any new charters issued by PCSB during the year preceding the report.
Content shown in report: Sustainable Futures will serve 131 students in its first year, growing to no more than 288 students by its third year of operation. The school seeks to serve disconnected youth in the District and will offer project-based learning, along with a competency-based approach to allow students to move through the curriculum at their own pace. The school will also offer social-emotional supports (e.g., mental health services) and wraparound services (e.g., on-site health clinic, transportation assistance, and three meals per day).

Annual Reporting Requirement: A description of any charters renewed by PCSB during the year preceding the report.
Content shown in report: Two charters renewed: 1. KIPP DC PCS; 2. Thurgood Marshall Academy PCS. Four charters reviewed: 1. Inspired Teaching Demonstration PCS; 2. Imagine Hope Community PCS; 3. Washington Latin PCS; 4. The Next Step PCS.

Annual Reporting Requirement: A description of any charters revoked by PCSB during the year preceding the report.
Content shown in report: Potomac Preparatory PCS charter revoked for poor academic performance.

Annual Reporting Requirement: A description of any charters refused renewal by PCSB during the year preceding the report.

Annual Reporting Requirement: Any recommendations concerning ways to improve the administration of public charter schools.
Content shown in report: No recommendations found.

Grants includes both federal and private grants. PCSB has not received any federal grant funds since FY 2014. Other revenues include school closure funds and sponsorship income, among other sources. Other expenditures include facilities costs, community events, website costs, and other overhead expenses. 
Appendix VII: Public Charter School Board’s Discipline and Demographic Data

[Table: suspension rate (percent), by school]

In addition to the contact named above, Sherri Doughty (Assistant Director), Lauren Gilbertson (Analyst-in-Charge), Melinda Bowman, Jean McSween, John Mingus, James Rebbe, Alexandra Squitieri, and Sonya Vartivarian made key contributions to this report. Also contributing to this report were Deborah Bland, Grace Cho, Sara Daleski, Holly Dye, Lauren Kirkpatrick, Sheila R. McCoy, Mimi Nguyen, Sara Pelton, and Ronni Schwartz.
D.C. charter schools served about 45 percent of D.C.'s public school students in the 2015-16 school year. The District of Columbia School Reform Act of 1995 established PCSB to authorize and oversee charter schools. PCSB also oversees charter schools' use of suspensions and expulsions. The District of Columbia Appropriations Act, 2005, as amended, included a provision for GAO to conduct a periodic management evaluation of PCSB. This report examines (1) what is known about suspensions and expulsions in D.C. charter schools, and (2) to what extent PCSB oversees charter schools' use of suspensions and expulsions. GAO analyzed the most recent national federal data (school years 2011-12 and 2013-14) and D.C. data (school year 2015-16) on suspensions and expulsions; reviewed relevant laws, regulations, and agency policies and documentation; and interviewed officials at PCSB and other D.C. agencies, as well as other stakeholders selected to provide a range of perspectives. GAO also visited three charter schools that had high discipline rates. Discipline rates (out-of-school suspension and expulsion rates) at District of Columbia (D.C.) charter schools dropped from school years 2011-12 through 2013-14 (the most recent years of national Department of Education data available). However, these rates remained about double the rates of charter schools nationally and slightly higher than those of D.C. traditional public schools, and were also disproportionately high for some student groups and schools. Specifically, during this period, suspension rates in D.C. charter schools dropped from about 16 percent of all students to about 13 percent, and expulsions, which were relatively rare, declined by about half a percentage point, according to GAO's analysis. However, D.C. Black students and students with disabilities were disproportionately suspended and expelled. For example, Black students represented 80 percent of students in D.C. 
charter schools, but 93 percent of those suspended and 92 percent of those expelled. Further, 16 of D.C.'s 105 charter schools suspended over a fifth of their students over the course of school year 2015-16, according to D.C. data. The Public Charter School Board (PCSB) regularly uses several mechanisms to oversee charter schools' use of suspensions and expulsions. For example, PCSB reviews school-level data and schools' discipline policies to encourage schools to reduce reliance on suspensions and expulsions to manage student behavior. Several D.C. agencies have roles in overseeing charter schools and reported collaborating on other issues, but GAO observed a lack of consensus around roles and responsibilities regarding charter school discipline. Further, a plan to issue regulations addressing discipline disparities among D.C. public schools was unsuccessful because the D.C. agency that planned to issue the regulations was unsure of its authority to do so. Absent a coordinated plan, as well as clarified roles, responsibilities, and authorities of D.C. agencies with respect to oversight of discipline in charter schools, continued progress in reducing discipline rates may be slowed. GAO is making two recommendations, including that D.C. education agencies collaborate on a plan to further reduce discipline rates and make explicit agency roles, responsibilities, and authorities regarding charter school discipline. The agencies did not explicitly agree or disagree with GAO's recommendations and indicated they could deepen their collaboration.
FAA’s primary mission is to ensure safe, orderly, and efficient air travel throughout the United States. FAA’s ability to fulfill this mission depends on the adequacy and reliability of the nation’s air traffic control (ATC) system, a vast network of computer hardware, software, and communications equipment. Sustained growth in air traffic and aging equipment have strained the current system, limiting the efficiency of ATC operations. To combat these trends, in 1981 FAA embarked on a multibillion-dollar, mission-critical capital investment program aimed at modernizing its aging ATC infrastructure. This modernization program includes over 200 separate projects estimated to cost over $34 billion through the year 2003. It includes the acquisition of new radars and automated data processing, navigation, and communications equipment as well as new facilities and support equipment. As these items are placed in service, FAA is required to report them as property and equipment assets in its financial statements. In addition, related spare parts necessary to support the operation and maintenance of this equipment are reported as operating materials and supplies inventory. In its fiscal year 1996 financial statements, FAA reported assets of $18.2 billion, including approximately $9.2 billion of operating materials and supplies (such as mission-critical spare parts), property and equipment (such as land, buildings, and air traffic control equipment), and work-in-process (which consists of facilities and equipment acquired but not yet put in service). It also reported expenses of $10.1 billion. Problems in the reporting of operating materials and supplies and property and equipment (including work-in-process) were cited by the DOT IG in its audit report on FAA’s fiscal year 1996 financial statement. 
The IG is responsible for auditing FAA’s financial statements under the Chief Financial Officers Act of 1990, as expanded by the Government Management Reform Act of 1994, to determine whether those financial statements are reliable. The IG audited FAA’s fiscal year 1996 Statement of Financial Position, which reports the agency’s assets and liabilities, but disclaimed (did not express) an opinion primarily because of internal control weaknesses that precluded the IG from determining if FAA’s operating materials and supplies and property and equipment were fairly presented. This is significant since at September 30, 1996, operating materials and supplies and property and equipment represented approximately 51 percent of FAA’s total reported assets. Similar conditions resulted in the IG issuing disclaimers of opinion on FAA’s fiscal years 1993 through 1995 financial statements. Among the more serious deficiencies cited in the IG’s report on FAA’s fiscal year 1996 financial statement were the following: The reported $432 million for operating materials and supplies could not be verified because physical inventory counts were not adequately performed, documentation to verify operating materials and supplies valuation was not available, and certain spare parts were not included in the reported total. For example, FAA did not include one category of spare parts, which includes disk drives, modems, and card assemblies, estimated at $245 million in the financial statement because the field spare parts inventory records were unreliable. The reported $5.5 billion for property and equipment was unreliable because FAA records for such assets contained significant errors and omissions and did not accurately reflect property and equipment owned by FAA. For example, $198 million of property that no longer existed, such as fuel storage tanks and buildings, was included in the financial statement as assets. 
The reported $3.3 billion of work-in-process could not be verified because FAA did not maintain sufficient details to support what was in the account, and the total was not completely reconciled to other FAA records. Our objectives were to analyze the IG audit report on FAA’s fiscal year 1996 Statement of Financial Position and to consider the possible program and budgetary effects of reported financial statement data deficiencies. To fulfill our objectives, we analyzed the IG’s report on the audit of the FAA fiscal year 1996 Statement of Financial Position and reviewed selected IG workpapers related to the audit. We focused on the areas of operating materials and supplies and property and equipment because these were the areas cited in the IG’s report as causing the disclaimer of opinion. We also interviewed IG personnel to obtain more details about the issues raised in the report and to gain an understanding of the work performed and its results. In addition, we accessed historical IG program reports for the last 10 years and reviewed financial statement audits of FAA for the fiscal years 1993 through 1995 statements. We also obtained and reviewed information from the FAA Chief Financial Officer and other FAA personnel about the current status of corrective actions on the reported issues. In addition, because many of the problems identified by the IG were the result of the lack of a reliable system to accumulate costs, we reviewed several reports concerning FAA’s financial and cost accounting systems. These included the Department of Transportation’s Federal Managers’ Financial Integrity Act reports for fiscal years 1993 through 1996, an April 1996 consultant report by Arthur Andersen on FAA’s cost accounting system problems and needs, and a December 1997 report issued by the National Civil Aviation Review Commission that provides insights into and recommends improvements to FAA’s cost accounting system. 
We performed our review from October 1997 through January 1998 in accordance with generally accepted government auditing standards. We requested oral comments on a draft of this report from the Secretary of Transportation or his designee. On February 3, 1998, the FAA Associate Administrator for Administration and his staff provided us with oral comments. The IG was unable to determine whether operating materials and supplies with a reported value of $432 million were fairly stated because adequate inventory counts were not performed to determine actual items on hand, excess inventory was not identified, documentation was not available to verify the correct cost of items, and accurate detailed records were not maintained for spare parts kept in the field (field spares). Operating materials and supplies consist of spare parts located at the Logistics Center and in the field for ATC and other equipment, FAA facilities, and aircraft. The Logistics Center is the central warehouse for operating materials and supplies and uses an automated inventory system, which is continually updated (perpetual inventory), to account for inventory. Field spares are parts that, to meet operational needs, are maintained at locations near the facility that they support. FAA facilities responsible for field spares generally maintain their own manual or automated inventory lists. The IG was unable to verify the reported balance of operating materials and supplies stocked at the Logistics Center because of numerous errors and omissions. Further, because FAA concluded that its records for field spares were not reliable and field spares were expensed when issued, no amount for field spares was included in the reported operating materials and supplies asset total. Available FAA records showed a balance of $245 million for field spares. 
Some of the IG’s specific findings were that 20 percent of the Logistics Center inventory counts did not agree with the amounts on perpetual inventory system listings, 27 percent of the field spare line items that were test counted did not match lists of stocked items, 48 percent of the Logistics Center parts did not have invoices or other documentation to verify the unit price of items, and 106 disk drives recorded at $3.6 million were kept in stock related to a system that was being decommissioned, some of which potentially could be identified as excess inventory. The lack of accurate inventory information may result in program officials’ inability to make prudent business decisions and to safeguard assets adequately, as shown in the following examples. Because of inaccurate inventory information, funding requests may not be based on actual needs, unnecessary purchases may be made, and inventory may be overstocked or hoarded due to availability concerns. In turn, this resulting excess, as well as spare parts for equipment no longer in service, would require storage, inventory control, and other activities that consume operating resources. Spare parts, such as disk drives, modems, and circuit card assemblies, may not be adequately safeguarded. Since many of these items are portable, inaccurate inventory records may increase the risk of undetected theft or loss due to unauthorized acquisition and use or disposition. The lack of accurate inventory information may also impair operational effectiveness, as shown in the following examples. Inaccurate inventory information may result in a shortage of or the inability to locate essential parts necessary to repair mission-critical systems. This could result in repair delays due to unscheduled outages and failures of FAA equipment. Inaccurate information about the location and quantities of spare parts may cause a failure to make necessary modifications or updates to these items. 
Complex systems may require modifications to reflect functional design changes and to eliminate failure-prone parts. Since these parts could be used to repair operational systems, it is important that these parts, especially circuit card assemblies, subassemblies, and similar items, receive required modifications to ensure system design integrity. Finally, the lack of accurate inventory information affects the reliability of financial management information. Operating materials and supplies inventory is recorded as an asset until it is issued or consumed in operations, when the cost is charged to operating expenses. Therefore, if inventory assets are understated, operating expenses would be overstated. The misstatement of operating expenses distorts historical maintenance cost amounts that may be used to project and budget for future costs. FAA advised us that it has performed a wall-to-wall inventory of operating materials and supplies at the central warehouse, identified excess items, and adjusted records accordingly; revised policies and procedures to include an inventory count of operating materials and supplies every 3 years; counted 48 percent (dollar value) of field spares and plans to complete the field spares inventory by the end of fiscal year 1998; initiated a new project to provide physical and fiscal management of FAA assets both centralized at the Logistics Center and in field facilities; begun recording field spares as assets rather than expensing them, as was done previously; revised methods to ensure timely review of excess operating materials and supplies stored at the Logistics Center; and made plans to implement bar coding for the asset tracking process in July 1998. The IG was unable to determine whether $5.5 billion of property and equipment was correct because records for these assets contained significant errors and omissions, supporting documentation was often unavailable, and inventory counts had not been performed to determine actual items on hand. 
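As noted above, because inventory is carried as an asset until issued or consumed, an understated inventory balance flows directly into an overstated operating expense. A minimal numeric sketch, using hypothetical dollar figures in millions (not FAA amounts):

```python
# Hypothetical figures in $ millions (not FAA amounts).
beginning_inventory = 400.0
purchases = 150.0
true_ending_inventory = 432.0

# Under the treatment described above, the cost charged to operating
# expense is what left inventory during the period:
true_expense = beginning_inventory + purchases - true_ending_inventory

# If, say, $245M of field spares are omitted from the ending balance,
# the same computation overstates expense by exactly the omitted amount.
omitted_field_spares = 245.0
reported_expense = beginning_inventory + purchases - (true_ending_inventory - omitted_field_spares)

print(true_expense, reported_expense)  # 118.0 363.0
```

The gap between the two expense figures equals the omitted inventory, which is why unreliable field spares records distort the historical maintenance costs used for budgeting.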
The $5.5 billion includes reported amounts of almost $2 billion in property (real estate assets) and $3.5 billion in equipment (also referred to as personal property). Issues identified by the IG related to each of these categories of assets are discussed in the following sections. FAA’s system for keeping track of property, such as land, facilities, and lease improvements, had significant errors and omissions and could not be relied on to determine the actual amount of property owned. The IG found the following. Property estimated at $198 million, which had been disposed of, destroyed, or physically removed, was still included in the property system. For example, at the Air Route Traffic Control Center in Miami, a large fuel storage tank that had been removed years ago was still on the property list, and at another control center, several buildings on the property list had been demolished. Limited tests of real property identified about $12 million in assets held by FAA that could not be located in the property system. For example, at the Cleveland Air Route Traffic Control Center, a medical trailer facility was not recorded, and at another control center, FAA failed to record a day care center completed in 1994. The IG could not determine if 22 of 65 leases tested should have been recorded as long-term capital leases because documentation available was not sufficient to properly classify the leases. If these leases, with about $4 million in annual payments, were improperly classified as short-term instead of long-term, assets and liabilities would have been understated. These types of errors and omissions affect FAA’s ability to manage its real estate assets and make decisions about future needs. For example, long-range planning needs for future facilities are impaired by the lack of accurate information on the cost and useful lives of existing facilities. 
Further, lack of records on revenue-producing properties results in the inability to analyze the adequacy of fees charged to recover costs. As with real property, FAA’s system for tracking equipment, such as ATC equipment and aircraft, could not be relied on because of a lack of supporting documentation and numerous errors. The IG found that: At various regions, equipment records did not separately identify individual assets. For example, at one region, three computer workstations valued at $162,000 were lumped together and could not be individually identified or located. FAA officials subsequently determined that the workstations were replaced in 1994. Documentation did not always exist to support reported equipment costs. For example, at one location, documentation to support the cost of mission-critical equipment, such as the Backup Emergency Communications system and the Voice Switching Control system, could not be located. Equipment costs totaling $325 million (out of $473 million tested by the IG) were improperly reported as operating expenses instead of assets. Based on the results of this sample test, the IG indicated that for fiscal years 1995 and 1996, an undetermined portion of the estimated $4.5 billion in major system equipment acquisitions using facilities and equipment funding was inappropriately charged as operating expenses rather than recorded as assets. The unreliability of the equipment information system affects FAA’s ability to properly manage these assets, thus giving rise to potential operational inefficiencies. For example, mission-critical equipment, such as radars and other air traffic control equipment, may be difficult to locate when needed, which could exacerbate an emergency situation. Also, as with inventory, asset theft could go undetected, and funds could be spent unnecessarily to acquire equipment that is already on hand. 
Problems in accounting for both property and equipment also affect FAA’s ability to properly maintain these assets, including estimating future maintenance and deferred maintenance funding needs. Effective in fiscal year 1998, FAA will be required under federal accounting standards to estimate deferred maintenance costs. Such estimates will provide information to FAA and the Congress that will be useful in making decisions about maintenance priorities and future funding. However, without accurate property records, FAA and the Congress may not have sufficient information to make reliable estimates of maintenance and deferred maintenance needs. FAA advised us that for property it has counted 70 percent (dollar value) of real property and plans to count the remaining property by July 31, 1998; has made adjustments to detailed property and financial records based on property validations (computer match of database) taken; plans to develop procedures by June 30, 1998, to (1) ensure detailed property records are adjusted when property is acquired, disposed of, or destroyed and (2) reconcile property records to the general ledger; and has published guidelines for identifying capital leases and has directed a complete evaluation of leases. FAA advised us that for equipment it has validated (computer match of database) 100 percent of equipment records greater than $25,000; has developed a system modification to record equipment acquisitions as individual items rather than as an aggregate amount and has issued written guidance about managing equipment effectively; is revising procedures and plans to train personnel in 1998 to properly identify equipment purchase costs that should be recorded as assets; and has worked with the IG to develop an approach for determining the value of fiscal years 1996 and 1997 transactions (individual items) that should be recognized as assets. 
Many of the problems the IG identified in operating materials and supplies and property and equipment result from the lack of a reliable system for accumulating project cost accounting information. When FAA acquires facilities and equipment, some project costs are accumulated in an account called work-in-process. The work-in-process account is a key component of FAA’s system used to account for project costs. When the acquired items are placed in service, the accumulated costs are to be removed from work-in-process and recorded in appropriate asset or expense accounts. However, the IG was unable to determine whether the $3.3 billion reported as work-in-process was correct because FAA did not maintain sufficient details to support what was in the account, and the total was not completely reconciled to other FAA records. Without a reliable system to accumulate project costs, and to transfer out the appropriate amount when assets are placed in service, the asset and expense accounts relating to operating materials and supplies and property and equipment will continue to be misstated. The inadequacy of FAA’s cost accounting system has been identified by GAO and others as a weakness that prevents FAA from reliably determining project and other costs. For example, with regard to the ATC modernization program, we previously reported that FAA does not have a cost accounting system capable of reliably accumulating full project cost information. Our report concluded that without a system to capture and report the full cost of ATC projects, FAA cannot reliably measure the ATC projects’ actual cost performance against established baselines, and cannot reliably use information relating to actual cost experiences to improve future cost estimating efforts. Further, we reported that the Congress does not have reliable cost information to use in making funding decisions about FAA. In April 1996, an FAA consultant reported on FAA’s cost accounting needs and options. 
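The work-in-process mechanics described above can be sketched as a simple illustration. This is not FAA's actual system; the account names and dollar figures are hypothetical, chosen only to show why the transfer step matters: if accumulated costs are not moved out of work-in-process when assets are placed in service, both the work-in-process balance and the asset and expense accounts are misstated.

```python
# Illustrative sketch of a work-in-process (WIP) account flow.
# Project costs accumulate in WIP; when the acquired item is placed
# in service, the accumulated cost is transferred to an asset account
# (if capitalized) or an expense account. All names and figures are
# hypothetical, not FAA records.

ledger = {"work_in_process": 0, "equipment_assets": 0, "operating_expense": 0}

def record_project_cost(amount):
    """Accumulate an acquisition cost in the WIP account."""
    ledger["work_in_process"] += amount

def place_in_service(amount, capitalize):
    """Relieve WIP when an item is placed in service.

    Skipping this transfer leaves WIP overstated and the asset and
    expense accounts misstated, which is the condition the IG described.
    """
    assert amount <= ledger["work_in_process"], "cannot transfer more than WIP holds"
    ledger["work_in_process"] -= amount
    target = "equipment_assets" if capitalize else "operating_expense"
    ledger[target] += amount

record_project_cost(500_000)                  # costs accumulate during acquisition
place_in_service(450_000, capitalize=True)    # capital equipment placed in service
place_in_service(50_000, capitalize=False)    # remainder properly expensed
# WIP is now fully relieved; the asset and expense accounts reflect the split.
```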
Among other conclusions, the consultant stated that “none of the (existing FAA) systems evaluated can provide cost data to support management needs. . . .” Others have discussed some of these needs. For example, in December 1997, the National Civil Aviation Review Commission reported that “only with this effective management tool (cost accounting system) can a substantial improvement in cost accuracy and service be obtained by FAA.” Among the Commission’s findings was that “[m]odern business tools, such as a cost accounting system, that tie specific costs to services, and measurement tools that assess how well services are provided are not yet available.” Among the Commission’s recommendations was that FAA’s revenue must be based on the cost of services provided: “Using such a (cost-based) system, in and of itself, will bring about a very significant management improvement. The questions that could be answered in a cost-based environment cannot be answered today.” The lack of reliable information about the costs of program activities limits the ability of FAA management and other decisionmakers to estimate future costs in preparing and reviewing budgets, to control and reduce costs, and to identify and avoid waste.
For example, without reliable cost information, FAA and other decisionmakers may not be able to effectively
• compare, during the budgeting process, expected costs with expected benefits, identify activities that add value, and make informed decisions about whether to expend resources for activities that are not cost-effective;
• compare and identify the causes of cost changes over time;
• identify and reduce excess capacity costs (the cost to maintain a level of service that may not be needed), if any;
• choose among alternative actions such as whether to perform a project in-house or contract it out, to accept or reject a proposal, or to continue or eliminate a product or service; and
• compare costs of similar activities and find causes for cost differences, if any.
The lack of reliable cost information about program activities also limits the ability of FAA management and other decisionmakers to establish fees for services based on the cost of the services provided. For example, the Federal Aviation Reauthorization Act of 1996 (Public Law 104-264) directed FAA to establish user fees not to exceed $100 million for selected services, including aircraft overflights, and to directly relate these fees to the costs of providing the service rendered. Finally, the lack of reliable cost information limits the ability of FAA management and other decisionmakers to meaningfully evaluate performance measures. Measuring costs is an integral part of measuring performance in terms of efficiency and cost-effectiveness. Efficiency is measured by relating inputs to outputs and is often expressed by the cost per unit of output. Effectiveness is measured by the outcome, or the degree to which a predetermined objective is met, and is commonly combined with cost information to show “cost-effectiveness.” However, these measures are meaningless if they are not based on reliable underlying information.
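The efficiency and cost-effectiveness measures described above reduce to simple ratios, which a brief illustration makes concrete. The figures below are invented for illustration only and are not FAA data:

```python
# Efficiency relates inputs to outputs (cost per unit of output);
# cost-effectiveness relates cost to the degree an objective is met.
# All numbers below are hypothetical illustrations, not FAA figures.

total_cost = 1_200_000.0     # input: dollars spent on an activity
units_of_output = 4_000      # output: e.g., inspections completed
objective_met_pct = 0.80     # outcome: fraction of the stated goal achieved

efficiency = total_cost / units_of_output            # cost per unit of output
cost_effectiveness = total_cost / objective_met_pct  # cost per objective fully achieved

print(f"cost per unit of output: ${efficiency:,.2f}")
print(f"cost per objective achieved: ${cost_effectiveness:,.0f}")
```

As the report notes, both ratios are only as good as the underlying cost data: if `total_cost` is misstated because acquisitions were expensed rather than capitalized, both measures are meaningless.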
With regard to accounting for costs, FAA advised us that
• it created a Cost Accounting Division and, at the end of fiscal year 1997, established a baseline cost accounting system for selected pilot organizations that it plans to implement agencywide by the end of fiscal year 1998;
• by February 28, 1998, it plans to have policies and procedures developed for classifying and managing the amounts recorded as work-in-process;
• it plans to complete development of a detailed transaction database to support the amounts in the financial records and financial statements; and
• its new cost accounting system will respond to recommendations made by the National Civil Aviation Review Commission.
Accountability over physical assets is a key step in avoiding waste, fraud, and abuse, and is essential to efficient and effective budgeting and management of resources. It is particularly critical in situations such as FAA’s, in which billions of dollars of assets are being acquired in connection with the ATC modernization program. Until FAA implements effective policies and procedures to provide accountability over operating materials and supplies and property and equipment, it remains vulnerable to significant mismanagement of appropriated funds used to acquire these assets. FAA officials generally concurred with our findings and conclusions. They emphasized, however, that they have taken significant actions to address the problems identified in the IG’s audit report on the fiscal year 1996 financial statement. We have added throughout our report a discussion of actions FAA stated it has taken since the date the IG audit report was issued. Because neither we nor the IG has yet determined the effectiveness of these actions, it is not clear whether they are sufficient to address FAA’s accounting and financial management deficiencies. FAA also provided some clarifying comments that we incorporated into our report where appropriate.
We are sending copies of this letter to the Ranking Minority Member of your Committee, the Secretary of Transportation, the Administrator of the Federal Aviation Administration, the Acting Chief Financial Officer of the Federal Aviation Administration, the Director of the Office of Management and Budget, the Department of Transportation Inspector General, and other interested parties. Copies will also be made available to others upon request. If you have any questions about this letter, please call me at (202) 512-8341 or John C. Fretwell, Assistant Director, at (202) 512-9382. John C. Fretwell, Assistant Director; Frank S. Synowiec, Jr., Assistant Director; H. Donald Campbell, Senior Auditor; Mary B. Merrill, Senior Auditor; Meg Mills, Communications Analyst.
Pursuant to a congressional request, GAO reviewed the Department of Transportation (DOT) Inspector General's (IG) audit report on the Federal Aviation Administration's (FAA) fiscal year 1996 Statement of Financial Position, which reports FAA's assets and liabilities. GAO noted that: (1) the deficiencies concerning operating materials and supplies and property and equipment cited by the IG impair FAA's ability to efficiently and effectively manage programs that use these assets and expose the agency to waste, fraud, and abuse; (2) in addition, while not a specific focus of the IG report, GAO and others have identified the lack of a reliable cost accounting system as a weakness that prevents FAA from reliably determining costs; (3) the lack of cost accounting information impairs FAA's ability to make effective decisions about resource needs, to adequately control major projects such as the air traffic control (ATC) modernization program, and to identify and avoid waste; (4) for example, without good cost information FAA cannot reliably measure the ATC modernization program's actual cost performance against established baselines, and cannot reliably use information relating to actual cost experiences to improve future cost estimating efforts; (5) the lack of reliable cost information also limits FAA's ability to meaningfully evaluate performance measures in terms of efficiency and cost-effectiveness; (6) overall, the lack of accountability over physical assets means that FAA and Congress may not have accurate financial management information to help make informed decisions about future funding; (7) the lack of accountability is of particular concern in situations such as FAA's where billions of dollars of assets are being acquired in connection with the ATC modernization program; (8) FAA advised GAO that since the IG report was issued on March 27, 1997, it has made significant progress and expended significant resources toward correcting the problems reported by the IG; (9) according to FAA, it has taken or plans corrective actions in three principal areas: (a) operating materials and supplies; (b) property and equipment; and (c) cost systems; (10) FAA informed GAO that it has counted a major portion of operating materials and supplies and property, identified excess items, adjusted its records, and created a Cost Accounting Division; and (11) GAO has not assessed the current status or sufficiency of these actions.
Among its findings, our report notes that inefficiency in the Forest Service’s decision-making process can result when (1) the agency identifies issues but then conducts continual and/or multiple studies to address them without establishing any clear sequence for their timely resolution; (2) stakeholders, both inside and outside the agency, cannot agree on how the Forest Service is to resolve conflicts among competing uses on its lands and needed improvements are delayed; and (3) the Forest Service and federal regulatory agencies cannot agree on an acceptable level of risk to endangered and threatened species, water, air, and other individual natural resources. The Forest Service’s process for revising the Tongass forest plan illustrates how each of these factors affects the efficiency of the agency’s decision-making. On the Tongass, as elsewhere, the Forest Service tends to study and restudy issues without reaching closure. For example, a scoping process begun in 1987 identified wildlife and fish habitats as two issues needing special attention in revising the Tongass plan. The Forest Service team revising the plan established a committee—the “viable population” committee—to study the viability of various old-growth-dependent species. In 1992, this committee produced a draft strategy for preserving wildlife, which was reviewed twice—first by a wildlife ecologist from the Forest Service’s Pacific Northwest Research Station (a research arm of the agency) and later in a report by the research station, which contained 18 individual scientific reviews and a legal review. Also in 1992, the Forest Service team revising the plan performed its own study of the viability of wildlife and fish. This study, which included an examination of the viable population committee’s strategy, was also reviewed by the research station. 
In 1994, a new regional forester expanded the team revising the plan by adding research scientists from the research station and tasked them with gathering information on five issues, including wildlife viability. The agency then convened six panels of experts and scientists to assess the risk each of the nine alternatives presented in a third draft of a revised Tongass plan could pose to particular species of wildlife. Three more panels were convened to assess the potential risks posed by these alternatives to terrestrial mammals, fish and riparian areas, and old-growth forests. In March 1997, the Forest Service reconvened the panels to assess (1) the alternatives, some of which had been modified since the third draft was released for public comment in April 1996, and (2) the potential risks to certain species of fish and wildlife posed by a new preferred alternative. Today, the issue of wildlife viability has still not been resolved. The Forest Service also has had difficulty reconciling its older emphasis on producing timber with its more recent emphasis on sustaining wildlife and fish under its broad multiple-use and sustained-yield mandate. Resolving disagreements over this issue within the agency delayed the Tongass forest plan’s revision. Our report shows that during the last 10 years, the Forest Service has increasingly shifted the emphasis under its broad multiple-use and sustained-yield mandate from consumption (primarily producing timber) to conservation (primarily sustaining wildlife and fish). This shift is taking place in reaction to requirements in planning and environmental laws and their judicial interpretations—reflecting changing public values and concerns—together with social, ecological, and other factors. 
The increasing emphasis on sustaining wildlife and fish sometimes conflicts with the agency’s older emphasis on producing timber and underlies the Forest Service’s inability to achieve the goals and objectives for timber production in many of the first forest plans, including the 1979 Tongass plan. When the Forest Service began to revise the Tongass plan in 1987, it was just beginning, as an agency, to shift its emphasis from producing timber to sustaining wildlife and fish. This shift has not been smooth and has contributed significantly to the delays and costs incurred in revising the plan. For example, 3 years after the Forest Service began to revise the Tongass forest plan, the Congress enacted the Tongass Timber Reform Act of 1990. Among its provisions, the act (1) eliminated a special funding provision in a 1980 act (the Alaska National Interests Lands Conservation Act) intended to maintain the timber supply from the Tongass to the dependent industry; (2) directed the agency to maintain buffers of standing timber between designated streams and timber harvest areas to protect fish and wildlife habitat, such as spawning ground for salmon; (3) designated additional wilderness areas within the forest; and (4) designated 12 additional special management areas in which harvesting timber and building roads are generally prohibited. The 1990 act also unilaterally made nine modifications to long-term timber sale contracts held by two companies—the Alaska Pulp Corporation and the Ketchikan Pulp Company—that harvested large amounts of timber in the forest. Among other things, the act modified the contracts to eliminate disproportionately high harvests of old-growth timber. Other events reflecting the Forest Service’s increasing emphasis on sustaining wildlife and fish also delayed the agency’s revision of the Tongass forest plan. 
For example, in a 1988 decision on an appeal of the approved forest plan for the Flathead National Forest in northwestern Montana, the Associate Chief of the Forest Service directed the regional forester to leave 10 percent of certain watersheds in old-growth areas large enough to provide habitat for certain species until the regional forester completed additional analyses of these species’ habitat requirements. In 1990, an interagency scientific committee—established to develop a strategy for conserving the northern spotted owl in the Pacific Northwest—also advocated the retention of large blocks of old-growth forests to ensure the viability of populations of old-growth-dependent species. Finally, in 1992, the Chief of the Forest Service announced plans to reduce the amount of timber harvested by clearcutting by as much as 70 percent from fiscal year 1988 levels. Forest Service officials revising the Tongass forest plan believed that this new information and these events could have a significant impact on managing a forest that, up until then, had relied primarily on even-age management (clearcutting). These officials therefore believed that the new information and events needed to be considered in finalizing the revised forest plan. By this time, the process to revise the Tongass forest plan had entered its fifth year. The Forest Service’s response to this new information and these events was slowed, however, by internal disagreements concerning which use—producing timber or sustaining wildlife and fish—should be emphasized and how the forest should resolve conflicts or make choices between these competing uses on its lands. For example, the Forest Service team revising the forest plan disagreed with the viable population committee’s proposed strategy for preserving certain species of wildlife on the forest. The committee’s proposed strategy would have given more emphasis to sustaining wildlife than the team’s preferred alternative. 
In our view, this disagreement permeated other decision-making levels as well, extending to the forest supervisors and regional foresters. The friction on the Tongass over mission priorities is characteristic of an agency in transition and mirrors conflicts within the Forest Service as a whole—some Forest Service personnel support the agency’s shift in emphasis while others continue to believe that timber should receive the same priority it did in the past. Our report on the Forest Service’s decision-making process states that interagency disagreements have delayed forest plans and projects. Disagreements between the Forest Service and federal regulatory agencies—including Interior’s Fish and Wildlife Service, Commerce’s National Marine Fisheries Service, and the Environmental Protection Agency (EPA)—over the best approaches to achieving environmental objectives and implementing laws and regulations often stem from the agencies’ differing evaluations of environmental effects and risks, which in turn reflect the agencies’ disparate missions and responsibilities. We found that such disagreements had delayed planning for the Tongass. The Forest Service’s April 1996 draft plan and preferred alternative represent the intermediate results of almost 9 years of planning. Not only the preferred alternative for managing the Tongass, selected by the forest’s three supervisors, but also the majority of the other nine alternatives presented in the April 1996 draft plan would increase the forest’s emphasis on sustaining wildlife and fish and decrease the annual timber-offering goal, compared with the current plan. According to the forest supervisors, the preferred alternative is consistent with the Forest Service’s broad multiple-use and sustained-yield mandate. 
However, according to the federal regulatory agencies that are charged with implementing and enforcing environmental laws and regulations—including those to conserve and protect individual natural resources, such as endangered and threatened species, water, and air—the preferred alternative poses a high level of risk to wildlife and their habitat. Even though the Forest Service established an interagency policy group in mid-1994, which included program managers from the three regulatory agencies, to advise the team revising the Tongass forest plan, all three regulatory agencies criticized the April 1996 preferred alternative and suggested changes to reduce the level of risk to wildlife and their habitat. In particular, the Fish and Wildlife Service was concerned about the preferred alternative’s guidelines for habitat management as they apply to old-growth-dependent species on the Tongass, including two species that have been proposed for listing under the Endangered Species Act (the Alexander Archipelago wolf and the Queen Charlotte goshawk). If these species are listed after a revised forest plan is approved, the Forest Service could be required to reinitiate formal consultations with the Fish and Wildlife Service to again amend or revise the plan. This interagency disagreement has further delayed the approval of a revised Tongass forest plan. In the end, the Forest Service hopes to approve a revised Tongass plan that is legally defensible, scientifically credible, and able to sustain the forest’s resources. However, as its experience in revising the Tongass forest plan has shown, developing a forest plan to avoid or prevail against legal challenges has become increasingly costly and time-consuming. On the Tongass, insufficient data and scientific uncertainty have hampered the development of a plan that can ensure the maintenance of viable populations of wildlife. 
As an option to move beyond inconclusive studies, the Forest Service may be able to move forward with a decision conditioned on an adequate monitoring component. However, the Forest Service has historically failed to live up to its own monitoring requirements, and federal regulatory agencies and other stakeholders continue to insist that the Forest Service front-load the process. Preparing increasingly time-consuming and costly detailed environmental analyses and documentation before making a decision has helped perpetuate the cycle of inefficiency. In a March 10, 1997, letter to you, Mr. Chairman, the Secretary of Agriculture stated that the Forest Service is completing a final legal review of its most recent preferred alternative to revising the Tongass plan to ensure that it is legally defensible. In our report, we state that, according to the Forest Service, it spends more than $250 million a year conducting extensive, complex environmental analyses and preparing environmental documents in order to comply with the requirements of the National Environmental Policy Act and other environmental laws and to avoid or prevail against challenges to its compliance with these laws. In 1995, the Forest Service reported that it prepared about 20,000 environmental documents annually—more than any other federal agency. In 1994 (the last year for which data are available) the agency issued almost 20 percent of all the final environmental impact statements prepared by federal agencies (50 out of a total of 253). According to an internal Forest Service report, conducting environmental analyses and preparing environmental documents consumes about 18 percent of the funds available to manage the national forests and approximately 30 percent of the agency’s field resources. Preparing timber sales on the basis of an approved forest plan usually takes another 3 to 8 years.
In March 1989, the Forest Service initiated a comprehensive review of its land management process and completed a critique in May 1990. On the basis of the critique, the agency proposed revisions to its planning regulations in April 1995. These revisions were intended to, among other things, clarify the nature of forest plan decisions and define the appropriate scope of environmental analyses. After 2 years, the Forest Service has still not finalized these revisions. In his March 10th letter to you, the Secretary of Agriculture also stated that the Forest Service is completing a final substantive review of its most recent preferred alternative to revising the Tongass plan to ensure that it is scientifically credible and will sustain the resources of the forest. Toward this end, the Forest Service has devoted substantial resources and time to ensure that the revised forest plan meets a requirement in its regulations relating to maintaining the diversity of animal communities. However, the Forest Service has asserted that this requirement, if interpreted literally, envisions an outcome that sometimes cannot be guaranteed by any agency, regardless of the analytical resources marshalled. The Forest Service’s biological diversity requirement for fish and wildlife habitat—found in its regulations implementing the National Forest Management Act of 1976—requires the agency to maintain well-distributed viable populations of existing native and desired non-native vertebrate species in the planning area. However, in the revisions proposed to its planning regulations in April 1995, the Forest Service states that the scientific expertise, data, and technology currently needed to conduct the required assessments of species’ viability far exceed the resources envisioned by the agency when the planning rule was developed, as well as the resources available to any agency or scientific institution.
Therefore, according to the Forest Service, the viable populations requirement no longer meets its expectations. The proposed revisions include an option for sustaining diversity preferred by the Forest Service. This option would protect the habitats of most species and use the Endangered Species Act as a “fine filter” to catch and support the special needs of species that otherwise would go unmet. However, since the Forest Service has not finalized the proposed revisions to its planning regulations, the revised Tongass forest plan must satisfy a requirement that the agency asserts is sometimes impossible to meet. An option to avoid the growing delays and increasing costs incurred in attempting to ensure that a decision is scientifically credible and legally defensible may be for the Forest Service to move forward with a decision using the best information available. According to an interagency task force chaired by the Council on Environmental Quality, an agency can condition a decision—the effects of which may be difficult to determine in advance because of uncertainty or costs—on the monitoring of uncertainties, indicate how the decision will be modified when new information is uncovered or when preexisting monitoring thresholds are crossed, and reexamine the decision in light of its results or when a threshold is crossed. However, the Forest Service (1) has historically given a low priority to monitoring, (2) continues to approve projects without an adequate monitoring component, and (3) has not generally monitored the implementation of forest plans as required by its current regulations. 
As a result, federal regulatory agencies and other stakeholders will likely continue to insist that the Forest Service prepare detailed environmental analyses and documentation—which have become increasingly costly and time-consuming—before making decisions rather than support what many Forest Service officials believe to be the more efficient and effective option of monitoring and evaluation. Both the Fish and Wildlife Service and EPA have already expressed reservations about the adequacy of the monitoring component in the Forest Service’s April 1996 draft Tongass plan. In commenting on the draft plan, the Fish and Wildlife Service stated that the proposed standards and guidelines are too vague and will not provide for the intended accountability because compliance will be difficult or impossible to measure. EPA commented that the plan did not provide sufficient information to clearly indicate how monitoring would be integrated into the management strategy. Inefficiencies within the Forest Service’s decision-making process on the Tongass and on other national forests lead to the inevitable question—why? Why does an agency study and restudy issues without reaching closure? Why does this same agency attempt to do what it says sometimes cannot be done regardless of the time and money invested? And why does it spend a significant portion of its limited resources on conducting environmental analyses and preparing environmental documents rather than on the apparently more efficient and effective option of monitoring the environmental effects of its decisions? Although the Forest Service is held accountable for developing forest plans that may be scientifically credible and legally defensible, it is not held accountable for developing them in a timely, orderly, and cost-effective manner. 
The agency itself pays for the time and costs associated with legal challenges to the scientific credibility and legal defensibility of its decisions, but others bear the costs of its indecision and delays. The American taxpayer bears the financial costs. The costs associated with the uncertainty of not having an approved forest plan are borne both by members of the public who are concerned about maintaining biological diversity but cannot form reasonable expectations about the forest’s health over time, and by those who are economically dependent on the Tongass but cannot form reasonable expectations about the future availability of the forest’s uses. Although the Forest Service has been shifting its emphasis from consumption to conservation on the Tongass as well as nationwide, the Tongass continues to play an important role in the economy of southeastern Alaska, and the Forest Service retains a responsibility under its multiple-use and sustained-yield mandate to manage the Tongass for other uses, including timber. While one long-term contract was terminated and the remaining long-term contract was recently modified to terminate no later than October 2000, the agency has sold, and will continue to sell, timber from the forest to other companies. Moreover, according to the Forest Service, many communities in southeastern Alaska also depend on the Tongass to provide natural resources for uses such as fishing, recreation, tourism, mining, and customary and traditional subsistence. However, without an approved revised plan, the communities and companies that are economically dependent on the Tongass for goods and services cannot form the reasonable expectations about the future availability of forest uses that they need to plan or to develop long-range investment strategies. Mr. Chairman, the inefficiency that is occurring in the process to revise the Tongass plan is occurring at every decision-making level within the Forest Service.
An internal Forest Service report estimates that inefficiencies within the agency’s decision-making process cost up to $100 million a year at the project level alone. Delays in finalizing forest plans, coupled with delays in finalizing agencywide regulations and reaching individual project decisions, can total a decade or longer. Our report identifies a framework for breaking the existing cycle of inefficiency by improving the Forest Service’s decision-making. We identify the need to provide the agency with clearer guidance on (1) which uses it should emphasize under its broad multiple-use and sustained-yield mandate and how it is to resolve conflicts or make choices among competing uses on its lands and (2) how to resolve environmental issues that transcend its administrative boundaries and jurisdiction. Our report also identifies the need for a systematic and comprehensive analysis of the laws affecting the Forest Service’s decision-making to adequately address the differences in their requirements. We believe that the Government Performance and Results Act of 1993, if implemented successfully, provides a framework for addressing many of these issues and will strengthen accountability for performance and results within the Forest Service and improve the efficiency and effectiveness of its decision-making. In addition, our report identifies the need to hold the Forest Service more accountable for its performance. In the near future, the Forest Service is required by the Government Performance and Results Act to consult with you and to consider your views in developing a strategic plan. According to the agency, one of the long-term strategic goals that it will discuss is ensuring organizational effectiveness. 
On the basis of our report and hearings held during the 104th and 105th Congresses, including the one held here today, we believe that you should expect to see (1) performance goals and measures based on improving the efficiency and effectiveness of the agency’s decision-making process and (2) individual performance management, career development programs, and pay and promotion standards tied to this strategic goal. When accountability for the efficiency and effectiveness of decision-making is fixed, performance and results should be improved. We believe that you should expect to see schedules for implementing improvements to the decision-making process, including one to finalize the proposed revisions to the agency’s planning regulations, as well as a plan to closely monitor progress and periodically report on performance—both of which are needed to break the cycle of studying and restudying issues without timely resolution. Forest Service managers should then seek out best practices that could enhance efficiency and effectiveness. In particular, they should begin to monitor the effects of their decisions, as they are currently required to do. Federal regulatory agencies may then be more willing to accept a higher level of risk to wildlife and their habitat in forest plans than they are willing to accept now. In summary, Mr. Chairman, forest planning is, by its very nature, a complex and difficult process involving a multitude of resources, statutory responsibilities, and stakeholders. Moreover, solutions to some issues that affect the efficiency and effectiveness of the Forest Service’s decision-making will require the involvement of other stakeholders, including the Congress and other federal agencies. However, we have observed a cascading series of factors and issues resulting in inefficiencies within the Forest Service’s decision-making process that can be traced back to a lack of accountability for time and costs. 
Without being held accountable for the efficiency of its decision-making process, the Forest Service has allowed complexities and difficulties to become excuses for delays and increased costs rather than challenges that must be overcome in making timely decisions. One result has been that the agency has taken a reactive, rather than a proactive, approach to addressing these challenges. As the Forest Service’s efforts to revise the Tongass plan and its planning regulations have shown, the most likely outcomes of the Forest Service’s current decision-making process are indecision and delay. We believe that successful implementation of the Government Performance and Results Act should strengthen accountability for performance and results within the Forest Service and improve the efficiency and effectiveness of its decision-making. However, as evidenced by the agency’s efforts to revise the Tongass forest plan, sustained management attention within the Forest Service and sustained oversight by the Congress will be required to ensure the full and effective implementation of the act’s legislative mandates. Mr. Chairman, this concludes my prepared statement. We will be pleased to answer any questions that you or Members of the Committee may have. The U.S. Department of Agriculture’s (USDA) Forest Service has spent almost 10 years revising a land management plan, commonly called a forest plan, for the Tongass National Forest. During this time, the Alaska Region released three drafts of the plan for public comment—a June 1990 draft, a September 1991 supplement to the draft, and an April 1996 revision to the supplement. As of April 1997, the Forest Service had not approved a revised forest plan for the Tongass. Figure I.1 summarizes the major events in developing a revision to the Tongass forest plan. At 16.8 million acres, the Tongass is the largest forest in the United States, roughly equal in size to West Virginia (see fig. I.2). 
The Forest Service manages the Tongass to sustain various multiple uses, including timber, outdoor recreation, and fish and wildlife. The Forest Service’s Alaska Region, headquartered in Juneau, Alaska, is responsible for managing the forest. The Tongass is the only national forest with more than one forest supervisor. Because of its size, the Tongass is divided into three administrative areas—Chatham, Stikine, and Ketchikan—each of which has an area office headed by a forest supervisor. Also unique to the Tongass has been its use of timber contracts valid for up to 50 years. In the 1950s, the Forest Service awarded three such long-term contracts to timber companies to harvest timber in the Tongass. A fourth contract was awarded in the 1960s but was cancelled before operations began. When initiated, the contracts required that each of the companies construct and operate pulp mills to provide steady employment in southeastern Alaska. The companies also used timber supplied under the contracts to operate sawmills in the region. In return, the companies were to receive a guaranteed supply of timber. Federal law now generally limits the duration of timber sale contracts to 10 years or less. One of the three contracts awarded in the 1950s was completed in 1982. In April 1994, the Forest Service terminated one of the long-term timber sale contracts, asserting that the contract holder—the Alaska Pulp Corporation—had breached the contract by closing its pulp mill in Sitka. The contract holder in turn filed an action for breach of contract and unconstitutional taking of property against the Forest Service. Litigation is still pending. In February 1997, the Clinton administration reached an agreement with the company holding the remaining long-term timber sale contract to terminate the contract on December 31, 1999, with a possible extension to October 31, 2000. 
This agreement requires the company—the Ketchikan Pulp Company—to continue operating two sawmills in southeastern Alaska, and to clean up specified environmental damage resulting from its operations in southeastern Alaska. In exchange, the administration will supply enough timber to operate the sawmills for 3 years and will make certain cash payments to the company. Each side agreed to release existing or potential contract claims against the other arising out of the long-term contract. In addition, the company agreed to release existing or potential claims against the United States for the unconstitutional taking of property related to the long-term contract. The National Forest Management Act of 1976 (NFMA) requires the Forest Service to (1) develop a land and resource management plan for each national forest in coordination with the land and resource management planning processes of other federal agencies, states, and localities and (2) revise the plan at least every 15 years. A forest plan must sustain multiple uses on the forest and maintain diverse plant and animal communities (biological diversity). NFMA’s regulations, issued in 1979 and revised in 1982, require the Forest Service to estimate the physical, biological, social, and economic effects of each forest management alternative that the agency considers in detail in developing, amending, or revising a forest plan. Economic effects include the impact on total receipts to the federal government, direct benefits to forest users, and employment in affected areas. In accordance with the National Environmental Policy Act (NEPA), the Forest Service must prepare an environmental impact statement to accompany a forest plan. In preparing the statement, the agency is to seek and consider public comments on the potential environmental and other effects of the proposed forest plan. 
NEPA’s regulations require the agency to discuss the direct and indirect effects of the proposed plan’s various alternatives in the statement, including economic and social effects. NFMA requires the Forest Service to make draft plans available to the public for comment for at least 3 months prior to the plan’s adoption. NFMA’s regulations also specify roles and responsibilities for developing forest plans. The regulations state that the regional forester shall establish regional policy for forest planning and approve all forest plans in the region. The forest supervisor has overall responsibility for, among other things, preparing the forest plan. The forest supervisor also appoints and supervises an interdisciplinary team that is charged with developing the forest plan and its accompanying environmental impact statement. The team may consist of whatever combination of Forest Service staff and other federal personnel is necessary to integrate knowledge of the physical, biological, economic, and social sciences, as well as the environment, in the planning process. The Tongass was the first national forest to have an approved forest plan under NFMA. The Tongass’s 1979 forest plan designated certain areas of the forest off-limits to timber harvesting and scheduled about 1.7 million of the forest’s 5.7 million acres of commercial forest land as harvestable. This land was to support an average annual allowable sale quantity of 450 million board feet. In 1980, the Congress passed the Alaska National Interests Lands Conservation Act (ANILCA), which created 14 wilderness areas in the Tongass and designated Admiralty Island and the Misty Fiords as national monuments. Following ANILCA’s enactment, the Tongass’s commercial forest land was further reduced by about 1.7 million acres, from 5.7 million acres to about 4 million acres. 
ANILCA directed that at least $40 million derived from timber and other receipts be made available to the Forest Service to maintain the timber supply from the Tongass to the dependent forest products industry at a rate of 4.5 billion board feet per decade. The Forest Service amended its 1979 Tongass forest plan in 1986 to reflect ANILCA’s provisions. In 1987, the Forest Service began to revise the forest plan for the Tongass. The agency started by involving the public in a scoping process to identify issues that would need special attention by the interdisciplinary team developing the new forest plan. The team also started developing a computer database of information about the resources on the Tongass, such as the location of streams and timber stands, to provide information on the potential effects of a revised plan. Although the Forest Service’s planning regulations specifically authorize the agency to develop one plan for the entire Tongass, they do not discuss the planning process in the context of a forest that is under the jurisdiction of multiple supervisors. The organizational structure for the planning effort from 1987 to August 1994 is identified in figure I.3. The organizational structure for planning consisted of a core interdisciplinary team headed by a team leader and an assistant team leader. Team members included a wildlife biologist, a lands specialist, a recreation planner, and a timber resource specialist, among others. The team leader reported directly to the Chatham Forest Supervisor, who represented all three forest supervisors and exercised day-to-day responsibilities for the plan’s development. The Alaska Region’s Director of Ecosystem Planning and Budget offered planning advice to the interdisciplinary team leader. In addition, two groups advised the team. The first group included the Forest Service’s regional directors for timber, wildlife and fish, recreation, engineering, lands, minerals, and fish and watersheds. 
The second group consisted of the area planners from each of the forest’s three administrative areas. This organizational structure provided the interdisciplinary team with input from each of the three administrative areas of the forest as well as from the regional directors, who are considered to be the technical experts within the Forest Service’s regional office. In June 1990, the Forest Service issued a draft forest plan for public comment. The draft’s analysis centered on 11 issues identified during scoping: scenic quality, recreation, fish habitat, wildlife habitat, subsistence, timber harvest, roads, minerals, roadless areas, local economy, and wild and scenic rivers. The draft presented seven alternatives that the Forest Service could adopt to manage the Tongass but did not include a preferred alternative. The wildlife strategy contained in the 1990 draft of the forest plan was questioned. For example, some Forest Service staff from the three Tongass administrative areas considered the approach too difficult to implement and not scientifically supportable. Moreover, the Forest Service’s approach to maintaining diverse wildlife populations was changing during this time. For example, in a 1988 decision on the appeal of the approved forest plan for the Flathead National Forest in northwestern Montana, the Associate Chief of the Forest Service directed the regional forester to leave 10 percent of certain watersheds in old-growth areas large enough to provide habitat for certain species until the regional forester completed additional analyses of species’ habitat requirements. In addition, in 1990 an interagency scientific committee released a conservation strategy for the northern spotted owl in the Pacific Northwest that advocated retaining large blocks of old-growth forests as a way of ensuring population viability. 
In response to concerns regarding the viability of certain old-growth dependent species on the Tongass, in October 1990 the interdisciplinary team revising the Tongass’s forest plan established a committee to study the viability of populations of various old-growth species—the “viable population” committee. This committee’s principal mission was to identify species whose viability might be impaired by some forest management activities and to develop recommendations to maintain viable populations for each such species. The committee was not part of the interdisciplinary team. Shortly after the committee was established and during the 6-month period for commenting on the draft Tongass forest plan, the Congress passed the Tongass Timber Reform Act of 1990. Among other things, this act eliminated ANILCA’s special funding provision for maintaining the timber supply from the Tongass, limited timber harvesting near certain streams, designated additional wilderness areas within the Tongass, and designated 12 additional special management areas in which harvesting timber and building roads is generally prohibited. The act also made nine modifications to the long-term timber sale contracts, including adding provisions to the contracts to prohibit the disproportionate harvest of old-growth timber. The Forest Service amended its 1979 Tongass forest plan in February 1991 to reflect the act’s requirements. To respond to the Tongass Timber Reform Act and comments received on the 1990 draft forest plan, which included questions raised about the adequacy of the wildlife viability analysis in the 1990 draft forest plan, the Forest Service decided to prepare a supplement to the draft plan. In February 1991, the viable population committee submitted a report to the leader of the interdisciplinary team containing a proposed strategy for conserving old-growth forest and specific standards for 13 species dependent on old-growth forest as habitat. 
As foreshadowed by the strategy of the interagency scientific committee for the Pacific Northwest, the report recommended the use of large tracts of old-growth reserves close enough together so that local wildlife populations could interact with each other. According to the report, such a system would promote the interchange of genetic material between populations and maximize the opportunity for recolonization should one of the populations suffer local extinction. The report asserted that this strategy would affect a smaller proportion of the suitable timber base than was affected by the interagency scientific committee’s strategy or even by the standards appearing in the 1990 draft forest plan. The report further indicated that the recommended standards would only “barely assure perpetuation” of certain species on the Tongass. As the interdisciplinary team prepared the supplement to the draft, it rejected the strategy recommended by the viable population committee. The supplement indicated that the interdisciplinary team rejected the committee’s habitat protection recommendations because the team considered the evidence supporting the recommendations to be insufficient. The draft plan accompanying the supplement provided (1) for timber sales to be managed so as to maintain large blocks of old-growth reserves and corridors between the blocks, where compatible with other resource objectives, and (2) for standards and guidelines to protect any species that had been identified by the Fish and Wildlife Service, the National Marine Fisheries Service, or the Forest Service as threatened, endangered, sensitive, or a candidate for any of these categories. The supplement, issued in September 1991 for public comment, presented five alternatives, including a preferred alternative. 
The preferred alternative was designed, in the Forest Service’s words, to “enhance the balanced use of resources of the forest and provide a public timber supply to maintain the Southeast Alaska timber industry.” The alternative proposed an average annual allowable sale quantity of 418 million board feet—down from the allowable sale quantity in the 1979 plan of 450 million board feet. During 1991 and the spring of 1992, the viable population committee continued to work on refining and developing its proposed strategy for conserving wildlife in its February 1991 report and produced a draft report for review in April 1992. At the request of an Alaska Region official, a wildlife ecologist from the Pacific Northwest Research Station—a Portland, Oregon, research arm of the Forest Service—reviewed the draft report and concluded in July 1992 that the report’s wildlife conservation strategy was sound. The ecologist urged closer cooperation between the interdisciplinary team and the viable population committee and recommended further peer review of the committee’s draft report. In December 1992, an Anchorage newspaper published an article accusing the Forest Service of covering up the information contained in the viable population committee’s draft report and of disregarding the report’s conclusions. Forest Service officials denied the accusations and asserted that the viable population committee’s report was only a draft, not yet ready for public distribution, and that not enough information was available to finalize the report. In January 1993, the Chairman of the House Committee on Natural Resources asked the Secretary of Agriculture to investigate this matter. After the 1991 supplement to the draft forest plan was released for public comment but before a preferred alternative was selected, the interdisciplinary team carried out another study of fish and wildlife viability. This study was to be included as an appendix—known as “appendix M”—to the final forest plan. 
Appendix M described three additional risk assessments of wildlife viability performed by the interdisciplinary team, one of which was based on the viable population committee’s strategy. The interdisciplinary team stated in appendix M that these risk assessments amounted only to hypotheses and required additional data and testing. In February 1993, the interdisciplinary team presented a draft of a final revised forest plan—including a record of decision with a preferred alternative selected by the forest supervisors—for the regional forester to sign. The regional forester did not sign the record of decision. Twenty-three conservation biologists and resource scientists sent a letter to the Vice President in March 1993, condemning the Forest Service’s treatment of its scientists and their work on the Tongass and the Clearwater National Forest in Idaho. In June 1993, the House Committee on Appropriations issued a report to accompany the Forest Service’s fiscal year 1994 appropriations bill directing the Alaska Region to (1) assist the viable population committee in completing its report and (2) seek peer review of both the completed report and appendix M. The committee completed a draft of its report in May 1993. By August 1993, the Alaska Region’s regional forester officially requested the Forest Service’s Pacific Northwest Research Station to conduct an independent peer review of these documents. In March 1994, the Pacific Northwest Research Station released its report, containing 18 individual scientific reviews, a legal review, and a summary of the reviews and recommendations. The peer review gave the viable population committee’s draft report generally “high marks,” while concluding that the strategy contained in appendix M was “not as thorough or well motivated.” The peer review indicated that appendix M needed to go further to meet the requirements of the relevant legislation. 
The legal review concluded that while the viable population committee’s strategy represented “an earnest, if highly cautious” attempt to properly implement the Forest Service’s regulations for ensuring wildlife viability and diversity, the proposed appendix’s strategy did “not appear to implement either the spirit or the letter of these principles.” The legal review also expressed doubt about the consistency of the Forest Service’s proposed alternative with the Tongass Timber Reform Act’s restriction on the disproportionate harvesting of old-growth timber under the long-term contracts. One of the scientific reviewers also raised doubts about the legal validity of the timber harvest plans outlined in the draft revised forest plan, because the plans appeared to be incompatible with the agency’s own proposed wildlife strategy. At the end of April 1994, the Alaska Region’s regional forester retired. In May 1994, the Chief of the Forest Service appointed a new regional forester to the Alaska Region. The new regional forester requested that the 1991 supplement to the draft forest plan be revised to take into account new scientific knowledge about wildlife viability and new initiatives within the Forest Service, among other things. 
The regional forester identified five issues on which the revised supplement would focus: wildlife viability because of new information available from the viable population committee and other sources; caves and karst because of the recent discovery of world-class karst in the Ketchikan area; fish and riparian management because of new information arising from an—at that time, ongoing—anadromous fish habitat study required by the Congress and because of the importance of the fishing industry to southeastern Alaska; alternatives to clearcutting because of the Chief’s June 1992 policy to reduce clearcutting in national forests by as much as 70 percent in order to manage forests in a more environmentally sensitive manner; and socioeconomic effects because of concern about how changes in managing the Tongass could affect the timber and other industries, especially in light of the then-recent shutdown of one of the region’s two pulp mills. In mid-1994, the newly appointed regional forester established a new planning team structure to revise the 1991 supplement to the draft Tongass forest plan. The restructured planning team consisted of two groups—an interagency policy group and an interdisciplinary team. Figure I.4 identifies the revised organizational structure. The interagency policy group was composed of Alaska Region officials, including the three forest supervisors; program managers from the U.S. Environmental Protection Agency, the Department of the Interior’s Fish and Wildlife Service, and the Department of Commerce’s National Marine Fisheries Service; and personnel from the State of Alaska. The group’s role was to advise the interdisciplinary team on the development of the revised supplement to the draft forest plan and to provide interagency coordination with other federal and State of Alaska agencies. 
The policy group was disbanded in April 1996 when the revised forest plan was issued for public comment. The interdisciplinary team is divided into two branches: a policy (also called management) branch and a science branch. The regional forester assigned two co-leaders to the interdisciplinary team—a deputy forest supervisor to head the team’s policy branch and a research scientist to head the science branch. The policy and science branches coordinated their efforts to develop alternatives for managing the Tongass. Under the reorganized planning team structure, research scientists were appointed to the interdisciplinary team’s science branch between the fall of 1994 and early 1995 by the Director of the Pacific Northwest Research Station with the concurrence of the regional forester. They included scientists with backgrounds in forest ecology, wildlife biology, social science, hydrology, geology, forestry, and statistics. According to Forest Service officials, scientists were appointed because of concerns about the scientific credibility of the wildlife strategy in the 1991 supplement to the draft forest plan. The research scientists gathered information primarily on the five focus issues identified by the regional forester. They (1) gathered existing scientific data pertaining to the Tongass, (2) reviewed various assumptions and strategies used in the plan, and (3) developed estimates of risks to resources that might result from various proposed management activities that were eventually included in the revised supplement to the draft environmental impact statement. In addition, they are developing a “reconciliation” report which examines the extent to which science was considered in developing the Forest Service’s new preferred alternative. In most instances, the scientists did not have the time to develop new data but, rather, relied on information already in existence. 
The regional forester and science branch scientists with whom we spoke told us that although the research scientists were part of the interdisciplinary team, they did not participate in developing the alternatives or selecting the preferred alternative in the revised supplement to the draft forest plan. Rather, the research scientists in the science branch were responsible for (1) gathering information on the five focus issues and forwarding it to the policy branch and (2) providing comments and views on related scientific studies and indicating the risks involved in adopting various management options. After the policy branch had developed the alternatives to be included in the revised supplement to the draft forest plan, the science branch convened 11 scientific assessment panels of experts and specialists to evaluate the risk each alternative could pose to the Tongass National Forest’s biological systems, communities, and wildlife. Each panel examined the potential effects of the nine alternatives on one of the following issues: the Alexander Archipelago wolf, the northern goshawk, the Sitka black-tailed deer, the marbled murrelet, the American marten, the brown bear, terrestrial mammals, fish/riparian areas, old-growth forests, subsistence, and socioeconomics. These panels were reconvened in 1997 to assess the alternatives, some of which had been modified since the revised supplement had been released for public comment in April 1996. Many of the policy branch’s members were from the prior interdisciplinary team. The policy branch included national forest personnel with backgrounds in fish and wildlife biology, economics, recreation planning, resource information, wildlife ecology, and timber planning. 
The policy branch was responsible for developing the alternatives in the revised supplement to the draft forest plan, managing the resource database, coordinating public involvement, maintaining documentation of the planning process, and calculating the impact of alternatives on the amount of timber available for harvest. In developing the alternatives, members of the policy branch considered the scientific information gathered by the science branch as well as the scientists’ comments and views on the risks involved in adopting various management options. The two branches also worked together to summarize the findings of the 11 scientific assessment panels convened by the science branch and present the summary to the forest supervisors to aid them in selecting a preferred alternative for managing the forest. Alaska Region officials told us that members of the policy branch chose the various management options, such as the size of the beach fringe and extent of wild and scenic rivers, presented in each alternative. Under the planning team structure in effect from 1987 to August 1994, the Chatham forest supervisor exercised day-to-day responsibility for developing the revised Tongass forest plan and directly supervised the interdisciplinary team. However, under the new regional forester’s new planning team structure, the three forest supervisors became members of the interagency policy group, whose role was to advise, rather than supervise, the interdisciplinary team in developing the revised supplement to the draft forest plan. This new role of the forest supervisors was controversial both inside and outside the Forest Service. The forest supervisors stated that they were not involved in the decision to restructure the planning team or in appointing its new members, including the research scientists. 
According to the supervisors, between August 1994 and September 1995, this new management structure prevented them from exercising their decision-making responsibilities under NFMA with respect to appointing and supervising the interdisciplinary team. For example, one forest supervisor told us that the supervisors did not participate in developing the alternatives or establishing the scientific assessment panels. He said that if he had been responsible for supervising the interdisciplinary team, he would not have convened the panels because of their anticipated high costs, the lack of data on which to make informed decisions, and the inadequacy of similar past efforts. According to the deputy forest supervisor assigned by the regional forester to head the interdisciplinary team’s policy branch, he tried to keep the forest supervisors informed about the interdisciplinary team’s work but generally did not ask them for direction. In addition, he told us that the deputy regional manager, rather than the forest supervisors, had been assigned responsibility for hiring, firing, and promoting Tongass planning staff between August 1994 and September 1995. The forest supervisors also believe that they were not invited to participate in some key meetings held by the interagency policy group. Other Forest Service officials note that the interagency policy group was a large, unwieldy body that made few, if any, decisions. According to the regional forester, the forest supervisors informed him of their concerns in the fall of 1995. He concluded that the communication link between the deputy forest supervisor and the forest supervisors was not working. He told us that from that point forward, the supervisors became “reengaged” in the planning process. At about this time, the supervisors began to participate in meetings held by other Forest Service members of the interagency policy group. 
Subsequently, the forest supervisors crafted the preferred alternative included in the April 1996 revised supplement to the draft forest plan. In April 1996, the Forest Service released the revised supplement to the draft plan for public comment. The revised supplement differed substantively from the two previous versions of the draft plan that had been issued for public comment. The revised supplement presented nine alternatives and a preferred alternative. Each alternative consisted of variations of ten components: system and number of old-growth reserves, rotation age for timber, old growth and watershed retention, method of timber harvesting, extent of preservation of karst and caves, extent of riparian protection, size of beach fringe, estuary protection, timber harvest in watersheds, and deer winter range. The three forest supervisors considered the initial nine alternatives in the revised supplement before selecting a combination of components from the alternatives to create their preferred alternative. The preferred alternative was published separately from the bound draft plan, but it was presented in the summary of the revised supplement along with the other nine alternatives and was distributed with the rest of the draft plan for comment. The preferred alternative incorporated old-growth reserves, an average 100-year rotation age for timber, a combination of harvesting methods, a two-aged timber harvest system, a combination of riparian protection options, and an average annual allowable sale quantity of 357 million board feet. Compared to the 1979 forest plan, the preferred alternative and the majority of the other alternatives considered increased the protection of wildlife habitat and decreased the amount of timber available for harvesting. The April 1996 revised forest plan and environmental impact statement for the Tongass placed heavy emphasis on regional socioeconomic effects. 
They did not, however, attempt to quantify the economic effects on local communities. For example, the revised supplement examined the effects of reduced timber harvesting on the timber, recreation, and fishing industries, both for the region and for the nation, and expressed these effects in terms of jobs and income created or lost. However, for individual communities, the revised supplement described socioeconomic effects much more generally than it did for the region as a whole. The revised supplement profiled each of southeastern Alaska’s 32 communities separately and discussed the composition of each community’s economy. However, the revised supplement did not quantify the economic impact but simply stated whether a proposed alternative would have a negative, positive, or indifferent effect on the timber, fishing, and recreation sectors of the community’s economy. Forest Service economists told us that community-level effects were not forecast as specifically as were regional economic effects because not enough information was available about the communities and about the location of future timber sales. For example, Forest Service officials told us that without knowing where a timber sale will take place and how the timber will be processed, the Forest Service cannot determine which communities will be affected by timber sales. The 1990 draft environmental impact statement and the 1991 supplement to the draft environmental impact statement also did not attempt to forecast specific effects on individual communities. In the fall of 1995, the interdisciplinary team revising the Tongass plan realized that, because of the significant media attention and public response to Tongass planning issues, the public comments received on the revised supplement to the draft forest plan would likely be too numerous for them to process effectively. 
After considering a few outside contractors who had experience in content analysis, the interdisciplinary team hired an in-Service “enterprise team” consisting of agency employees working outside of the Alaska Region and specializing in content analysis. The interdisciplinary team estimated that the enterprise team would be more costly to hire than an outside contractor—$160,000 for the in-Service team compared with $80,000 to $150,000 for an outside team. However, the interdisciplinary team believed that the advantages of hiring the in-Service team outweighed the higher cost. These advantages included (1) a much faster start-up time with less demand on the interdisciplinary team’s time; (2) a more thorough knowledge of national forest issues; and (3) a familiarity with forest plans, terms, and concepts. After the revised supplement to the draft plan was released for public comment, the Forest Service held open houses and hearings in southeastern Alaska’s 32 communities, met with interested groups, and discussed the proposed revised plan on local media. The revised supplement to the draft also generated public meetings and demonstrations as well as congressional hearings. In July 1996, the regional forester granted a 30-day extension (through late Aug. 1996) to the 90-day comment period after considering the public comments received to date and the interest shown by the public in extending the comment period. About 21,000 respondents submitted comments. In comparison, for the 1990 and 1991 drafts released for public comment, the Forest Service received comments from about 3,700 and 7,300 respondents, respectively. Between June 1996 and October 1996, the in-Service team analyzed the public comments. Substantive issues, concerns, and questions raised by commenters were identified by the in-Service team and given to the interdisciplinary team for consideration in developing the revision to the final plan. 
The in-Service team, working primarily on the Flathead National Forest, consisted of about 40 people, including a project coordinator, 2 team leaders, computer support staff, writers/coders, data entry staff, and editors. In addition, Alaska regional staff assisted the in-Service team. Prior to working on the Tongass plan, the project coordinator had performed content analyses for several projects, including NFMA regulations, national forest plans, and environmental impact statements and environmental assessments. Most of the coding staff were planners or resource specialists with the National Forest System. The project coordinator told us that because the team was not from the Tongass National Forest, the team provided an objective, third-party view of the public comments. In early October 1996, the in-Service team prepared the final draft content analysis summary displaying demographic information and specific issue-by-issue analysis in a summary of public comments. According to the content analysis done by the in-Service team, (1) the majority of the public comments concerned the level of timber harvesting that the preferred alternative allowed, (2) over half the comments supported lowering the amount of timber available for harvesting and suggested terminating or not extending the Tongass’s remaining long-term timber-harvesting contract, and (3) many of the respondents, especially southeastern Alaskans, were worried about the social and economic effects on their communities if the preferred alternative was selected. The Tongass official responsible for overseeing the work done by the in-Service content analysis team considered the team’s work to be accurate and timely, given the large database that the team had to work with and the time constraints placed on the team. The total cost for the in-Service contract was $185,000. 
As discussed earlier, in mid-1994 the newly appointed regional forester established a new planning team structure to revise the 1991 supplement to the draft Tongass forest plan. Under the new structure, the regulatory agencies were members of the interagency policy group established to advise the interdisciplinary team and to improve interagency coordination. Interagency coordination became increasingly important in December 1993 when the Fish and Wildlife Service received a petition to list the Alexander Archipelago wolf as threatened under the Endangered Species Act. In addition, in May 1994 the Fish and Wildlife Service received a petition to list the Queen Charlotte goshawk as endangered under the act. Both subspecies occur on the Tongass and are dependent on old-growth forest as habitat. The revised Tongass forest plan, when issued, would impact how these subspecies’ habitat is managed and so could be a determinant in the viability of the species. Besides involving the Fish and Wildlife Service in the interagency policy group, in December 1994 the Forest Service signed a memorandum of understanding with the Fish and Wildlife Service and the Alaska Department of Fish and Game to prevent the listing of species on the Tongass as endangered or threatened. The memorandum provided that the agencies should assess wildlife habitat, share information about species they manage, and meet regularly to discuss the status of species to reduce the need to list them under federal or state endangered species acts. In addition, the Forest Service’s Alaska Region also acted independently to prevent the listing of the wolf, the goshawk, and other species: In June 1994, the regional forester deferred timber harvesting in old-growth reserves that had been identified by the viable population committee as needed to maintain viable populations of old-growth-dependent species. 
In September 1994, the Forest Service issued for comment an environmental assessment intended to protect the wildlife habitat of such species as the goshawk and the wolf while maintaining a supply of timber for local industry. The proposed action in the environmental assessment was to provide interim management guidelines to protect the species until the revised supplement to the draft forest plan was approved. If implemented, the guidelines were intended to protect those areas identified by the viable population committee as needed to maintain viable populations of old-growth-dependent species. This action was predicted to “likely result in measurably lower timber sale offerings to independent mills,” as well as defer some timber sale offerings for the Tongass’s remaining long-term contract. In July 1995, the Congress passed an act containing a rider effectively prohibiting the Forest Service from implementing the management guidelines. Accordingly, the regional forester did not sign the environmental assessment or implement the guidelines. In 1995, the Fish and Wildlife Service found that listing the wolf and the goshawk under the Endangered Species Act was not warranted. Environmental plaintiffs challenged these decisions. In September 1996, as the Forest Service was reviewing public comments on the revised supplement to the draft plan and formulating an alternative intended to become the final Tongass forest plan, a federal district court remanded the Fish and Wildlife Service’s decision on the goshawk to the agency. In October 1996, the same court reached the same decision with respect to the wolf. In each case, the court ruled that the Fish and Wildlife Service’s basis for not listing the subspecies—that the revised Tongass forest plan would provide adequate protection for the species’ habitats—was not valid, since the plan had not yet been formally approved by the Forest Service. 
Instead, the court held that the Fish and Wildlife Service must base its decision on the current (1979, as amended) plan and the current status of the subspecies and its habitat. As a result of these court decisions, the Fish and Wildlife Service began negotiations with the Forest Service in an attempt to ensure that the final forest plan would prevent the need to list the goshawk or the wolf as endangered. The Fish and Wildlife Service has until May 31, 1997, to reach a decision on the status of these species. Despite the involvement of federal regulatory and state agencies in developing the revised supplement to the draft forest plan, the Environmental Protection Agency, the Fish and Wildlife Service, and the National Marine Fisheries Service submitted comments on the draft that criticized the preferred alternative as posing a high level of risk to wildlife and habitat. The Fish and Wildlife Service was concerned that harvesting timber on a 100-year rotation, as proposed in the preferred alternative, would prevent forests from recovering old-growth stand characteristics, resulting in the loss of viable populations of species that depend on old-growth forests for habitat. The Environmental Protection Agency and the National Marine Fisheries Service favored more expansive riparian protection than the preferred alternative provided to protect fish habitat and water quality. During the 10 years from fiscal year 1987 through fiscal year 1996, the Forest Service’s Alaska Region spent slightly over $13 million to develop the revised Tongass land management plan and environmental impact statement. Tables II.1 and II.2 show the sources of the funds used and the cost elements charged to develop the forest plan. The tables’ totals for budgeting and spending may not match because of rounding. 
Table II.1: Budget Line Item Categories for Funding the Development of the Tongass Forest Plan for Fiscal Years 1987-96. [The table’s line-item amounts, in thousands of dollars, are not reproduced here; they total $13,070 thousand, or about $13 million.] The Alaska National Interests Lands Conservation Act (ANILCA) of 1980 directed that at least $40 million be made available annually to support, among other things, a timber supply from the Tongass National Forest. This money went into the Tongass timber supply fund. The Tongass Timber Reform Act of 1990 repealed this ANILCA provision, and the fund ceased to exist at the end of fiscal year 1991. As table II.1 shows, during fiscal years 1987-96, $13 million was funded from numerous Forest Service accounts to develop the plan, including ecosystem management; minerals; timber management; recreation; wildlife and fish; soil, water, and air; road construction; and Tongass timber supply. Forest Service officials were unable to provide us with information on their rationale for using the various funding accounts for fiscal years 1987-94. For fiscal years 1995 and 1996, the Forest Service began budgeting most of the funding for the plan from the ecosystem management account. An Alaska Region budget officer told us that the ecosystem management account was established to finance large-scale planning efforts such as the Tongass plan. As table II.2 shows, slightly more than $13 million was spent for salaries, travel, training, space leasing/utilities, printing/publishing, computer workstation leases/computer support services, and other equipment and supplies. Over $7 million, or 54 percent of the $13 million, was spent for staff salaries. An Alaska Region budget officer told us that some Tongass planning costs incurred by the regional forester, forest supervisors, some regional office administrative personnel, and Forest Service headquarters personnel are not included in these planning costs and are not readily available. 
GAO discussed the decisionmaking process being used by the Forest Service to revise the land management plan for the Tongass National Forest in southeastern Alaska. GAO noted that its work on the Forest Service's process for revising the Tongass forest plan showed that: (1) the Service originally planned to spend 3 years revising the plan; (2) at the end of 3 years, the agency had spent about $4 million; however, it has spent another 7 years and $9 million studying and restudying issues without establishing a clear sequence or schedule for their timely resolution, attempting to reconcile its older emphasis on producing timber with its more recent emphasis on sustaining wildlife and fish, and attempting to reach agreement with federal regulatory agencies on an acceptable level of risk to individual natural resources; (3) GAO's work identified that these factors have contributed to inefficiency in decisionmaking throughout the agency; (4) in revising the Tongass forest plan, the Service has incurred unexpected delays and high costs to better ensure that the new plan is legally defensible, scientifically credible, and able to sustain the forest's resources; (5) developing a forest plan to avoid or prevail against legal challenges has become increasingly time-consuming and costly; (6) on the Tongass, insufficient data and scientific uncertainty have hindered the development of a plan that can ensure the maintenance of viable populations of animals; (7) as an option to further study and planning without resolution, the Service may be able to move forward with a decision conditioned on an adequate monitoring component and modify the decision when new information is uncovered or when preexisting monitoring thresholds are crossed; (8) however, the Service has historically failed to live up to its own monitoring requirements and, as a result, federal regulatory agencies and other stakeholders continue to insist that the Service prepare increasingly time-consuming and costly 
detailed environmental analyses and documentation before making a decision, effectively front-loading the process and perpetuating the cycle of inefficiency; (9) while the agency is being held accountable for developing a plan that may be legally defensible, scientifically credible, and able to sustain the forest's resources, it is not being held accountable for making a timely, orderly, and cost-effective decision; and (10) the costs of the Service's indecision in revising the Tongass plan are being borne by the American taxpayer and by the members of the public who are concerned about maintaining the forest's diverse species but are precluded from forming reasonable expectations about the forest's health over time and/or are economically dependent on the Tongass but are uncertain about the future availability of its uses.
USDA has a broad and far-reaching mission—including improving farm economies and the nation’s nutrition, enhancing agriculture trade, and protecting the nation’s natural resource base and environment—and the department may face the prospect of litigation over its regulations and other actions. As with other federal agencies, where USDA is engaged in judicial litigation—cases brought in a court, including those that are settled—as a plaintiff or a defendant, the Department of Justice (DOJ) generally provides legal representation, and USDA provides technical and subject matter expertise and assists with the case, such as by drafting documents for DOJ to file and conducting research. The types of actions that involve USDA are varied. For example, lawsuits may involve challenges to certain agency actions—such as under provisions of the Endangered Species Act, which permits parties to file challenges to government actions affecting threatened and endangered species, or under the National Environmental Policy Act, which requires federal agencies to prepare a statement identifying the environmental effects of major actions they are proposing or ones for which third parties seek federal approval or funding and that significantly affect the environment. Cases may involve other statutes, such as title VII of the Civil Rights Act, which prohibits discrimination in employment. Additionally, the Administrative Procedure Act authorizes challenges to certain agency actions that are considered final actions, such as rulemakings and decisions on permit applications. With respect to the payment of attorney fees, in the context of judicial cases, the law generally provides for three ways that prevailing parties can be eligible for the payment of attorney fees by the federal government. First, many statutes contain provisions authorizing the award of attorney fees from a losing party to a prevailing party; many of these provisions apply to the federal government. 
Second, where there is a fee-shifting statute that allows for the payment of attorney fees by a losing party to a prevailing party but is not independently applicable to the federal government, EAJA provides that the government is liable for reasonable attorney fees to the same extent as a private party (i.e., claims paid under EAJA subsection (b)). Under these first two ways, when a party prevails in litigation against the government and is awarded attorney fees under court order or settlement, the amounts generally are paid from the Department of the Treasury’s (Treasury) Judgment Fund (a permanent, indefinite appropriation that pays judgments against federal agencies that are not otherwise provided for by other appropriations). Third, EAJA provides that in any civil action where there is no fee-shifting statute, prevailing parties generally shall be awarded attorney fees when the government cannot prove that its action was substantially justified (i.e., claims paid under EAJA subsection (d)). These awards or settlements are paid from the losing agency’s appropriation. In adversary administrative adjudications—generally, proceedings that are brought in a special agency forum, rather than in a court, and in which the government position is represented—a separate provision of EAJA applies. Specifically, EAJA provides that in adversary adjudications, the government is liable to a prevailing party for reasonable attorney fees when the government cannot prove that its action was substantially justified. When such fees are awarded or agreed to in a settlement, they are generally paid from the agency’s appropriated funds. In this statement, we refer to attorney fees anytime fees were paid, regardless of the source of law authorizing the payment—independently applicable statutory fee-shifting provisions, EAJA subsections (b) or (d), or EAJA’s adversarial adjudication provisions—and whether awarded by a court or administrative forum or provided in a settlement. 
The payment process differs, however, based on the statute involved and whether the award was made at the administrative level or through the courts, as shown in figure 1. In April 2012, we found that USDA did not report any aggregated data on attorney fee claims and payments made under EAJA and other fee- shifting statutes for fiscal years 2000 through 2010, but USDA and other key departments involved—Treasury and DOJ—maintained certain data on individual cases or payments in several internal agency databases. However, collectively, these data did not capture all claims and payments. USDA officials stated at the time that given the decentralized nature of the department and the absence of an external requirement to track or report on attorney fee information, the information was not centrally tracked and decisions about whether to track attorney fee data and the manner in which to do so were best handled at the agency level. Accordingly, for our April 2012 report, we contacted 33 agencies within USDA to obtain their available attorney fee information. In response, officials from 29 of the 33 USDA agencies told us that they did not track or could not readily provide us with this information. These officials generally stated that this is because their agencies deal with few or no attorney fee cases, the payment amounts are minimal, another agency within the department tracked this, or the agency did not need this information for internal management purposes. For example, an Acting Director in the USDA Farm Service Agency stated that because so few cases are filed against the agency, there is little value in tracking the data. We reported that the remaining 4 USDA agencies we contacted either had mechanisms to track information on attorney fees, or were able to compile this information manually using hard copy files, or directed us to publicly available sources where we could obtain the information. 
These four agencies were: (1) the National Appeals Division (NAD), an agency that conducts hearings of administrative appeals of adverse actions by certain USDA agencies; (2) the Office of Assistant Secretary for Civil Rights (OASCR), an agency that adjudicates employee discrimination complaints; (3) USDA’s Office of Administrative Law Judges (OALJ); and (4) the Forest Service, which is responsible for managing its lands for various purposes—including recreation, grazing, and timber harvesting—while ensuring that such activities do not impair the lands’ long-term productivity. In our April 2012 report, we identified one program agency at USDA—the Forest Service—that maintained attorney fee data. The Forest Service maintained the data in two different information sources: (1) a spreadsheet that tracked the amounts of attorney fees and costs awarded or settled, among other items, for environmental litigation, including cases filed under the National Environmental Policy Act, the National Forest Management Act, and the Endangered Species Act; and (2) a separate accounting code in the USDA financial database. We discuss these two sources in further detail below. We reported that the attorney fee information maintained by these four agencies varied with respect to the time frame for which data were available, whether the agency had information on the amount awarded versus the amount paid, and the statutes under which the cases were brought, among other information. Further, in April 2012, we reported that given the differences in attorney fee information available across the 4 USDA agencies and the limitations identified below, it was difficult to comprehensively determine (1) the total number of claims filed for attorney fees, (2) who received payments, (3) in what amounts, and (4) under which statutes. Specifically, we found: The total number of claims filed for attorney fees could not be determined. 
Two USDA agencies—NAD and OALJ—that provided information on attorney fee data did not maintain data about claims for attorney fees that were filed but denied. As a result, we concluded that the number of claims filed may be understated for these agencies. Information on who received the payment was not always recorded. Payment of attorney fees may be made to one or more parties or directly to the attorney. Agencies that had information on attorney fees sometimes identified a particular party in the case, as opposed to everyone who received payments. For example, we reported that the Forest Service spreadsheet listed 241 cases with attorney fees all of which identified the first-named party in the case, but 46 cases did not identify the payee. Given that attorney fees may be paid to the first named party, to other parties in the case, or to attorneys, we concluded that the first named party may not reliably identify who actually received the attorney fee payment. Data on actual attorney fee payments made were not consistently available. We also reported that 2 of the 4 agencies— NAD and OALJ—provided information on award or settlement amounts rather than attorney fee payment amounts. Amounts awarded reflect the attorney fee award included in a decision or settlement, and amounts paid reflect the actual amount the agency paid. According to DOJ officials at the time, award or settlement amounts may differ from payment amounts because award amounts may increase because of added interest expense before payment is disbursed. Moreover, DOJ and agency officials stated that award or settlement amounts may increase or decrease as a result of subsequent legal proceedings (e.g., a prevailing party could appeal the award amount, and an appeal could change the amount the agency ultimately paid). 
In addition, decisions and settlement agreements may not separate attorney fees and costs from damages, a fact that prevents agencies and Treasury from knowing exactly how much was allocated for each purpose. We concluded that in these instances, the attorney fee amounts cannot be determined. Statutes under which the case was brought were not always recorded. Last, we found that the Forest Service did not track information on the statutes underlying the award or payment because the Forest Service financial database does not have a statute field, and according to the official who collected the spreadsheet data, he did not research statute information because of time constraints. However, the Forest Service official estimated that between two-thirds and three-quarters of the Forest Service natural resource cases involve challenges under the National Environmental Policy Act, the National Forest Management Act, the Endangered Species Act, or a combination of these acts. In our April 2012 report, we found that the Forest Service was the only program agency that was able to provide us with attorney fee data across the 11-year period and gathered information on attorney fees and cost awards associated with cases from three sources—Forest Service regional officials, a Forest Service-commissioned university study, and publicly available court documents. Forest Service officials maintained this information in a spreadsheet that tracked the amounts of attorney fees and costs awarded or settled, among other items, for environmental litigation, including cases filed under the National Environmental Policy Act, National Forest Management Act, and Endangered Species Act. Forest Service officials told us at the time that they undertook the effort to compile information on cases resulting in attorney fee and cost awards to provide internal guidance to Forest Service management. 
For example, we reported that the information on attorney fee and cost awards helped the agency make informed decisions on whether proposed fees in ongoing cases were reasonable in light of recent cases involving similar challenges. We also reported on several limitations of the data that were identified by the official who developed the spreadsheet. Specifically: The list of cases was not intended to be a definitive list of all attorney fee and cost payments and the payments should be considered in totality rather than case by case. The data include only environmental cases. Accordingly, nonenvironmental cases, such as those brought under the Freedom of Information Act (FOIA), Equal Employment Opportunity Act, and other civil rights statutes, were not included. Not all of the attorney fees and costs included in the spreadsheet were paid from Forest Service appropriations, as Treasury may have paid some of the attorney fees and costs from its Judgment Fund. In some instances, award or settlement amounts may be overstated. Specifically, court documents Forest Service officials reviewed to compile the data do not always break out award amounts to be paid by separate defendants. For example, if a party sued the Forest Service and Interior’s U.S. Fish and Wildlife Service and prevailed, both agencies might need to pay attorney fees and costs if they lost, but the court might not specify the amount each agency is to pay. In these instances, the data assumed the Forest Service paid the total amount. Using the Forest Service’s spreadsheet data, we reported that about $16.3 million in attorney fees and costs in 241 environmental cases from fiscal years 2000 through 2010 was awarded against or settled by the Forest Service. Figure 2 shows the amounts of attorney fees awarded and number of cases at the Forest Service by fiscal year. 
Figure 2 shows that the greatest number of cases was concluded in fiscal year 2006 (31 cases), and the awards against the Forest Service were greatest in 2007 ($2.3 million). Additionally, we reported that the awards ranged from $350 to about $500,000, and that larger awards may skew the data for the year in which the Forest Service made those awards or settlements. For example, in 2010, one payment accounted for over $400,000 of the $1.1 million (about 36 percent) in total awards. Our April 2012 report found that in March 2009, the Forest Service began tracking EAJA payments under a separate accounting code in the USDA financial database, in addition to the Excel spreadsheet. These data show that the Forest Service paid about $2.3 million in 32 cases from March 2009 through September 2010. In April 2013, the Forest Service publicly reported information on the agency’s EAJA attorney fee payments. Specifically, in its fiscal year 2014 budget justification, the Forest Service reported that it had 15 EAJA cases in fiscal year 2011 (awarding about $1.5 million) and 11 cases in 2012 (awarding about $565,000). According to the Forest Service’s fiscal year 2015 budget justification, the agency had 18 EAJA cases in 2013, awarding about $1.6 million. In April 2012, we also reported that Treasury maintains certain data on some USDA cases involving attorney fee payments. In judicial cases where payments from the Judgment Fund are authorized, DOJ officials submit the payment information to Treasury using standardized forms, and Treasury processes the payment and typically informs relevant agencies when it releases the payment to the payee. Specifically, we found that Treasury made 187 payments totaling $16.9 million on behalf of USDA from March 2001 through September 30, 2010. 
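The skew described above (one large award dominating a year's total) is simple share arithmetic. As a minimal, hypothetical sketch, the figures below are illustrative stand-ins, not actual GAO or Forest Service data; they show how yearly totals and the largest single award's share might be computed from a tracking spreadsheet of the kind the Forest Service maintained:

```python
# Hypothetical illustration of award-skew arithmetic; the records below are
# invented for the example and are not actual GAO or Forest Service data.
from collections import defaultdict

# (fiscal_year, award_dollars) records, as might be read from a spreadsheet.
awards = [
    (2010, 400_000), (2010, 350_000), (2010, 350_000),
    (2007, 500_000), (2007, 1_800_000),
]

totals = defaultdict(int)   # total awards per fiscal year
largest = defaultdict(int)  # largest single award per fiscal year
for year, amount in awards:
    totals[year] += amount
    largest[year] = max(largest[year], amount)

for year in sorted(totals):
    share = largest[year] / totals[year]
    print(f"FY{year}: total ${totals[year]:,}, largest award is {share:.0%} of total")
```

With these invented numbers, a single $400,000 award against a $1.1 million yearly total works out to about 36 percent, the kind of concentration that makes a one-year comparison misleading.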
These payments were most frequently made in connection with litigation brought under the Equal Credit Opportunity Act, Title VII of the Civil Rights Act of 1964, FOIA, or the Endangered Species Act, as shown in table 1. Treasury made 88 of the 187 payments as a result of a class action lawsuit on behalf of black farmers alleging discrimination. In April 2012, we also reported on the amount and number of payments Treasury made on behalf of USDA, by fiscal year, as shown in figure 3. Specifically, figure 3 shows that Treasury made the greatest number of payments on behalf of USDA in fiscal year 2005 (24 payments) and Treasury paid the highest amount of attorney fees and costs on behalf of USDA in 2003 ($3 million). We found that the payments ranged from about $175 to about $1.1 million, and that larger payments may skew the data for the year in which Treasury made those payments. For example, in 2008, one payment totaling about $1.1 million accounted for about half of the $2.3 million in total payments. Further, 11 of the 13 fiscal year 2010 cases were payments stemming from a class action lawsuit filed by black farmers and made up about $1.5 million of the $1.6 million in payments for that year. In addition, we found in April 2012 that DOJ maintains certain data on some USDA cases involving attorney fee payments, but DOJ’s data are not readily retrievable or complete. In particular, DOJ has internal agency databases that capture information on individual court cases, but officials stated at the time that these databases do not reliably capture attorney fees and costs. For example, we reported that DOJ officials said that their databases were designed for internal management purposes and not for agency-wide statistical tracking. Over time, some EAJA data have been entered into the databases; however, the agency does not have a mechanism for determining what percentage of total EAJA awards is in the database or if the data were entered consistently. 
According to a senior DOJ official, DOJ is not required to enter EAJA award data into its database. We concluded that because DOJ handled tens of thousands of cases over the 11-year period on behalf of USDA, we could not readily or systematically review all of the case files for our April 2012 review to determine the attorney fee awards. Other costs include court costs, such as filing fees and reporting fees, and attorney expenses, such as the cost for expert witnesses, telephone, postage, travel, copying, and computer research. Specifically, the Environment and Natural Resources Division’s case management system contains information on the number of hours the division’s attorneys spent working on environmental litigation defending EPA. However, we reported that the U.S. Attorneys’ Office’s database does not contain information on attorney hours worked by case, which meant that in our prior report on EPA litigation, we could not determine the time these attorneys spent on each case. Chairman Thompson, Ranking Member Walz, and members of the subcommittee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time. For further information on this statement, please contact Eileen R. Larence at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Maria Strudwick (Assistant Director), Paul Hobart, Ron La Due Lake, Jessica Orr, Janet Temko, and Ellen Wolfe. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In the United States, parties involved in federal litigation generally pay their own attorney fees. There are many exceptions to this general rule where “fee-shifting” statutes authorize the award of attorney fees to a successful, or prevailing, party. Some of these provisions also apply to the federal government when it loses a case. In 1980, Congress passed EAJA to allow parties that prevail in cases against federal agencies to seek reimbursement from the federal government for attorney fees, where doing so was not previously authorized. Although all federal agencies are generally subject to, and make payments under, attorney fee provisions, some in Congress have expressed concerns about the use of limited taxpayer funds to make attorney fee payments. These concerns include that environmental organizations are using taxpayer dollars to fund lawsuits against the government, including against USDA. This statement addresses the extent to which USDA had information available on attorney fee claims and payments made under EAJA and other fee-shifting statutes for fiscal years 2000 through 2010. This statement is based on GAO's April 2012 report on USDA and the Department of Interior attorney fee claims and payments and selected updates conducted in March 2014. To conduct the updates, among other things, GAO reviewed Forest Service budget documents for fiscal years 2014 and 2015 and interviewed Forest Service officials. In April 2012, GAO found that the Department of Agriculture (USDA) did not report any aggregated data on attorney fee claims and payments made under the Equal Access to Justice Act (EAJA) and other fee-shifting statutes for fiscal years 2000 through 2010, but USDA and other key departments involved—the Departments of the Treasury and Justice—maintained certain data on individual cases or payments in several internal agency databases. However, collectively, these data did not capture all claims and payments.
USDA officials stated at the time that, given the decentralized nature of the department and the absence of an external requirement to track or report on attorney fee information, the information was not centrally tracked and decisions about whether to track attorney fee data and the manner in which to do so were best handled at the agency level. Officials from 29 of the 33 USDA agencies GAO contacted for its April 2012 report stated that they did not track or could not readily provide GAO with this information. The remaining 4 USDA agencies had mechanisms to track information on attorney fees, were able to compile this information manually, or directed GAO to publicly available information sources. However, the extent to which the 4 USDA agencies had attorney fee information available for the 11-year period varied. GAO found that the Forest Service was the only program agency within USDA that was able to provide certain attorney fee data across the 11-year period. GAO reported in April 2012 that about $16.3 million in attorney fees and costs in 241 environmental cases from fiscal years 2000 through 2010 was awarded against or settled by the Forest Service (see fig. below). Note: Forest Service data may include attorney fees authorized by underlying statutes, EAJA subsection (b), and EAJA subsection (d); as such, some funds may have been paid by the Judgment Fund, as opposed to agency appropriations. Given this limitation as well as others, such as inconsistent availability of payment data, GAO concluded that it was difficult to comprehensively determine the total number of claims filed for attorney fees, who received payments, in what amounts, and under what statutes. GAO did not make any recommendations in its April 2012 report.
PPACA contained several provisions with the potential to affect issuer participation in the individual and small-group health insurance markets starting in 2014. Specifically, it directed states to establish exchanges for individuals and small businesses by January 1, 2014. In states electing not to establish and operate either type of exchange, PPACA required the federal government to establish and operate the exchange. For 2014, about one-third of the states chose to operate their individual and small-business exchanges, known as state-based exchanges. Specifically, in 17 states, the state chose to operate both the individual exchange and the small-business exchange, known as the Small Business Health Options Program, or SHOP. In 32 states, the federal government operated both the individual and small-business exchanges, known as federally facilitated exchanges. In the remaining 2 states, the individual exchange was federally facilitated, while the small-business exchange was state-based. (See fig. 1.) All individual and small-business health plans, whether offered through an exchange or outside of an exchange, must comply with new insurance market reforms enacted under PPACA. These include, for example, a requirement to cover certain categories of benefits at standardized levels of coverage, which are categorized by “metal level” as catastrophic, bronze, silver, gold, or platinum, depending on the portion of health care costs expected to be paid by the health plan. They also include prohibitions on annual and lifetime limits on the dollar value of required benefits and on the denial of coverage or charging of higher premiums due to preexisting conditions. Some of these reforms were in effect in 2012, while others did not take effect until 2014. To be certified to offer coverage through an exchange, issuers must meet additional requirements, including offering a minimum of one silver and one gold plan in any area in which they participate in an exchange.
Catastrophic plans may be offered only in the individual exchanges, and not on the small-business exchanges, and may be offered only to certain individuals. Issuers can offer plans statewide or within different geographic regions, typically defined by rating areas. States could define their rating areas using counties, Metropolitan Statistical Areas, ZIP codes, or a combination of those options. PPACA also requires that the federal government establish multi-state plans to be offered through each of the individual and small-business exchanges. Specifically, multi-state plans are those that issuers offer under a contract with the U.S. Office of Personnel Management (OPM). PPACA requires the contracted issuers to offer at least two multi-state plans through each exchange and requires that at least one of the issuers with which OPM contracts be not-for-profit. Issuers of multi-state plans are allowed to phase in coverage, but must offer coverage in 60 percent of states in the first contract year and in all states by the third contract year. During the first two contract years, OPM indicated that it would not direct issuers to participate in particular states. For 2014, OPM entered into a contract with the Blue Cross and Blue Shield Association to offer coverage in 31 states through the individual and small-business exchanges. PPACA also established a program to foster the creation of consumer-governed, not-for-profit issuers of health coverage—referred to as Consumer Oriented and Operated Plans (CO-OP)—that would provide additional coverage options in the individual and small-business exchanges. As required by PPACA, CMS established a loan program through which qualified not-for-profit issuers could apply for federal funding to help cover startup costs associated with establishing a CO-OP and to help meet states’ solvency and reserve requirements for licensure. A CO-OP issuer may operate within specific areas of a state, statewide, or in multiple states.
For 2014, 22 CO-OPs receiving federal loans offered coverage through the exchanges. The federal government and states instituted other provisions that relate to issuer participation in the exchanges. For example, for federally facilitated exchanges, CMS requires that issuers with more than 20 percent market share in the small-group market participate in the small-business exchange as a condition of participation in the individual exchange. In adopting this requirement, CMS indicated that its purpose is to promote robust participation in the small-business exchanges and expand plan choice. In addition, some states that operated their own exchanges enacted statutory, regulatory, or other requirements governing issuer participation in exchanges. Seven states reported requiring issuers participating in exchanges to offer a minimum number of health plans, while four states set maximum limits. Three states also reported requiring participation in both the individual and small-business exchanges. For example, in Maryland, certain issuers that offered plans outside of the exchanges were also required to offer plans through the exchange. Further, four states reported establishing waiting periods related to exchange participation in future plan years for issuers that chose not to participate in the 2014 exchange. For example, Connecticut prohibited an issuer from re-entering the exchange for 2 years if the issuer voluntarily ceased to participate in the exchange. (See table 1.) In most states, issuers with the largest share of the 2012 individual and small-group markets participated in the 2014 individual and small-business exchanges. In 2012, a large number of issuers participated in state individual and small-group markets, although coverage was generally concentrated among a small number of these participating issuers.
For example, while there was an average of 42 issuers in the 2012 individual market, only 4 had at least a 5 percent share of the market and they accounted for a combined 87 percent of that market. (See fig. 2.) For 2014, in 40 states the issuer with the largest share of the 2012 individual market participated in that state’s 2014 individual exchange. For the small-group market, this was the case in 41 states. In addition to participation from the largest issuer, most states had other issuers with at least 5 percent of the market participating in the 2014 exchanges. For example, in 6 states all of the issuers with at least a 5 percent share of the 2012 individual market also participated in the 2014 exchanges; this was the case for the small-business exchanges in 5 states. However, in some states, issuers with the largest 2012 market share did not participate in the 2014 exchanges. Specifically, in the individual and small-group markets for 11 and 10 states, respectively, the largest issuer in 2012 did not participate in the 2014 exchanges. Among the 11 states for which that was the case for the individual market, there were 5 in which that issuer had the majority of the state’s 2012 market share; for the small-business exchange, this was the case in 3 of the 10 states. Most issuers with less than 5 percent of their 2012 market did not participate in the 2014 exchanges, although in many states more than one of these smaller issuers did participate. In 2012, states had an average of 36 and 15 issuers with less than a 5 percent share of the individual and small-group markets, respectively. Most of these smaller issuers did not participate in the 2014 exchanges. However, in most states, at least 1 of these smaller issuers did participate in the 2014 individual exchanges; in 9 states, 5 or more smaller issuers participated. The smaller issuers that participated in the 2014 individual exchanges had, on average, a 0.6 percent market share in 2012.
In addition, some issuers that participated in the 2014 individual or small-business exchanges had not participated in that respective 2012 market. For example, with regard to the individual market, there were 39 states in which at least 1 issuer that offered coverage through the exchange had not provided coverage in that respective 2012 market. The number of such issuers in each individual exchange ranged from 1 to 11, for a nationwide total of 99 (out of the 291 total issuers that participated in the individual exchanges). Of these, 23 were newly established through the federally funded CO-OP program. Some of the other issuers, however, had previously provided coverage in other markets. For example, in Iowa, we identified 3 issuers that were new to the 2014 individual market. While one of these was a newly established CO-OP, the other two had previously participated in the small-group market in that state. There were also several new issuers that had previously issued coverage in the Medicaid managed care market. For example, one insurance company that issued coverage in multiple Medicaid managed care markets in 2012 was a new entrant to the individual markets through 6 state exchanges. In most states, for 2014 the issuers participating in the exchanges represented a mix of larger issuers, smaller issuers, and issuers new to that market. Overall, the issuers in each 2014 exchange that participated in the 2012 markets represented from 3 to 98 percent of that state’s 2012 individual market and from 1 to 100 percent of that state’s 2012 small-group market, averaging about 56 percent in each market. An average of 4 larger issuers in the individual market and 3 issuers in the small-group market accounted for the majority of these market-share totals in each state and, on average, more smaller issuers—those that had less than 5 percent of the 2012 market—than larger issuers participated in the individual exchanges in 2014.
(See fig. 3 for the number of participating issuers, by category of 2012 market share. Also, see app. I for state-by-state information on issuer participation in the exchanges and their 2012 market share.) In nearly all states, multiple issuers participated in the individual and small-business exchanges in 2014: an average of 6 issuers and 4 issuers, respectively. Overall, the number of participating issuers varied widely between states, from 1 to 17 issuers through the individual exchange and from 1 to 13 issuers through the small-business exchanges. However, almost all exchanges had more than 1 issuer participating—49 individual exchanges and 45 small-business exchanges. Further, in 31 states, both the individual and small-business exchanges had 3 or more participating issuers. (See fig. 4 and app. II.) More than half of participating issuers offered coverage through both the individual and small-business exchanges, although more issuers participated in the individual exchanges than in the small-business exchanges. Of the 323 issuers that participated in the exchanges, 175 participated in both the individual and small-business exchanges in their state. Of the remaining 148 issuers, 116 participated in the individual exchange only and 32 participated in the small-business exchange only. In 30 states, there were more issuers participating in the individual exchange than the small-business exchange. Most issuers offering coverage through the 2014 exchanges were for-profit entities, although the portion that was not-for-profit was greater than in the 2012 markets. In 2014, about 40 percent of issuers participating in the exchanges were not-for-profit, with the remaining 60 percent for-profit. In comparison, in 2012, 9 percent of all issuers were not-for-profit and 91 percent were for-profit.
In part, the difference in the percentage of not-for-profit issuers between the 2012 individual market and the 2014 individual exchanges could be because issuers with the largest share of the 2012 market were more likely to be not-for-profit and more likely to participate in the exchanges. In addition, both the multi-state plan and CO-OP programs established under PPACA require inclusion of not-for-profit issuers. For example, of the 116 not-for-profit issuers offering coverage through the exchanges, 20 percent were the 22 newly created CO-OPs. In 9 individual and 9 small-business exchanges, a CO-OP represented the only not-for-profit issuer on the exchange. Issuers varied substantially in the number of plans they offered in each rating area through the 2014 exchanges. In the individual exchanges, each issuer offered an average of 10 plans in each rating area. Of the 291 issuers, 257 offered more than the minimum number of plans required by PPACA—one gold and one silver plan—in all the rating areas in which they offered coverage. Of the 207 issuers in the small-business exchanges, each issuer offered an average of 12 plans in each rating area, and 183 offered more than the minimum number of plans required in all the rating areas in which they offered coverage. (See table 2 for the range of plans offered. Also see app. III for a state-by-state listing of the number of rating areas and number of plans offered through the exchanges.) Similarly, the total number of plans available to consumers in a given rating area through the exchanges varied greatly. For example, in the individual exchanges, the total number of plans available in a given rating area ranged from 7 to 178, averaging about 41 plans. (See fig. 5 and app. IV.) In all rating areas, consumers could select from at least one bronze, one silver, and two gold level plans, and about 58 percent of rating areas had plans available at all five metal levels.
However, in the remaining 42 percent of rating areas, consumers did not have access to a platinum plan. The trends for the small-business exchanges were similar, and all rating areas had at least one silver and one gold plan available. While these represent the number of plans available to consumers through the exchanges, the total number of plans available to a consumer in each market would include the additional plans that were available outside the exchanges. These plans were required to meet many of the same requirements as the exchange plans, but individuals enrolling in plans outside the exchanges were not eligible for premium tax credits and cost-sharing reductions to make the coverage more affordable. Small businesses enrolling in coverage outside of the exchanges were similarly ineligible for tax credits. We received technical comments on a draft of this report from the Department of Health and Human Services and incorporated them as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in Appendix V. In this appendix, we present the 2012 market share of issuers that participated in the 2014 individual and small-business exchanges. Table 3 provides this information for those issuers in the 2014 individual exchanges; table 4 provides this information for those issuers in the 2014 small-business exchanges.
Table 5 presents information from interactive figure 4 on the number of issuers participating in the individual and small-business exchanges in 2014, by state. Table 6 presents information on the number of plans issuers offered in each state in the individual and small-business exchanges across rating areas in 2014. Table 7 presents information from interactive figure 5 on the number of plans issuers offered in each state for the individual and small-business exchanges in 2014. John E. Dicken, (202) 512-7114 or [email protected]. In addition to the contact named above, William Hadley (Assistant Director), Sandra George, Laurie Pachter, Ann Tynan, and Stephen Ulrich made key contributions to this report.
PPACA required the establishment, by January 1, 2014, of health insurance exchanges in each state—marketplaces where eligible individuals and small businesses can compare and select among insurance plans. Issuer participation, including the number of plans these issuers offer, is a key factor in the extent of consumer choice offered by the exchanges. GAO was asked to examine the number and types of issuers participating in both the individual and small-business exchanges beginning in 2014, as well as how this compared with issuer participation in the individual and small-group markets prior to the exchanges. In this report, GAO describes (1) the extent to which issuers that previously offered health plans in the individual and small-group markets participated in the exchanges in 2014, and (2) the issuers that participated in 2014 exchanges and the health plans they offered. GAO analyzed data obtained from CMS and states on the health plans offered by issuers that participated in states' exchanges in 2014. GAO also analyzed CMS data on issuers' participation and market share in the 2012 individual and small-group markets, the most recently available national market-wide data. GAO reviewed relevant laws and regulations and interviewed CMS officials to identify federal requirements related to exchange participation. GAO obtained information on state participation requirements from applicable states. Most of the largest issuers of health coverage from 2012 participated in the exchanges that the Patient Protection and Affordable Care Act (PPACA) required be established in all states in 2014. Previously, in 2012, while a large number of issuers participated in state individual and small-group markets, a small number of these participating issuers held a majority of the market share in terms of enrollment.
In 2014, for both those exchanges serving the individual market and those serving the small-group market, in more than two-thirds of states the issuer with the largest share of the 2012 market participated in the 2014 exchange. In addition, in most states, other larger issuers with a 5 percent or more share of the 2012 market participated in the 2014 exchanges. Most smaller issuers with less than 5 percent of the 2012 market did not participate in the 2014 exchanges, although in many states more than one of these smaller issuers did participate. In addition, some issuers that participated in a 2014 individual or small-business exchange had not offered coverage in the respective 2012 market, although they may have offered coverage in other markets within the same state. In most states, for 2014, the issuers participating in the exchanges represented a mix of larger issuers, smaller issuers, and issuers new to that market. Multiple issuers participated in nearly all 2014 exchanges and generally offered more health plans than the minimum of two required by PPACA. Overall, the number of participating issuers varied widely between states, from 1 to 17 issuers in the individual exchanges and from 1 to 13 issuers in the small-business exchanges. However, almost all exchanges—49 individual and 45 small-business—had more than one issuer participating. More than half of participating issuers offered coverage through both the individual and small-business exchange in that state, although more issuers participated in the individual exchanges than in the small-business exchanges. Issuers varied substantially in the number of plans they offered; 257 of the 291 issuers in the individual exchanges, and 183 of the 207 issuers in the small-business exchanges, offered more than the minimum number of plans required by PPACA in all the rating areas in which they offered coverage. 
For both the individual and small-business exchanges, collectively, issuers in the 25 most populous states tended to offer a higher than average number of plans, while those in less populous states were less likely to do so. GAO received technical comments on a draft of this report from the Department of Health and Human Services and incorporated them as appropriate.
HUD’s requirements for HOPE VI revitalization grants are laid out in each fiscal year’s notice of funding availability (NOFA) and grant agreement. NOFAs announce the availability of funds and set forth application requirements and the selection process. Grant agreements are executed between each grantee and HUD and specify the activities, key deadlines, and documentation that grantees must meet or complete. Both NOFAs and grant agreements also contain guidance on resident involvement in the HOPE VI process. For example, the fiscal year 2002 NOFA stated that residents and the broader community should be involved in the planning, proposed implementation, and management of revitalization plans. In additional guidance on resident involvement, HUD encourages grantees to communicate, consult, and collaborate with affected residents and the broader community through resident councils, consultative groups, newsletters, and resident surveys. HUD’s guidance states that residents should be included in all phases of HOPE VI development, but also states that grantees have the final decision-making authority. The majority of HOPE VI grants involve the relocation of residents from a public housing site prior to demolition or rehabilitation. Grantees must conduct the relocation process in accordance with laws such as the Uniform Relocation Act and HUD guidance. Before the relocation process can begin, the grantee must develop a HOPE VI relocation plan that includes the number of families to be relocated, a description of the counseling and advisory services to be offered to families, a description of housing resources that will be used to relocate families, an estimate of relocation costs, and an example of the notice the grantee will provide to residents concerning relocation. 
Residents are generally given three basic relocation options: (1) using a housing choice voucher (formerly Section 8) to move into the private market, (2) moving to a different public housing site, or (3) leaving federally assisted housing. Revitalized HOPE VI sites often contain fewer public housing units and have more stringent screening criteria. HUD guidance states that grantees must collaborate with residents and other stakeholders to establish criteria that residents must meet in order to return to the site. Residents are not guaranteed that they will automatically return to the site. Typically, grantees offer original residents who remain in good standing the first priority right to return to the revitalized site. Grantees must offer community and supportive services—such as child care, transportation, job training, job placement and retention services, and parenting classes—to all original residents, regardless of their intention to return to the revitalized site. HUD guidance states that services for original residents should begin as soon as possible following the grant award and help residents make progress toward self-sufficiency. Additionally, HUD guidance suggests that grantees offer residents community and supportive services that are specifically designed to help them meet the criteria for their return to the revitalized site. New households that move to the revitalized site also are eligible to receive services. HUD guidance emphasizes that HOPE VI grantees should use case managers to assess the needs and circumstances of residents and then make appropriate referrals to a range of service providers. 
Grantees must submit to HUD a community and supportive services plan that contains a description of the supportive services that will be provided to residents, proposed steps and schedules for establishing arrangements with service providers, plans for actively involving residents in planning and implementing supportive services, and a system for monitoring and tracking the performance of the supportive services programs, as well as resident progress. According to HUD data, the largest percentage of residents living at HOPE VI sites were relocated to other public housing. Because HUD has not always required grantees to track original residents during the development process, housing authorities lost track of some original residents. Overall, grantees estimated that 46 percent of the original residents would return to the revitalized sites. However, the percentage of original residents expected to return varied greatly from site to site. Several factors may affect planned and actual reoccupancy rates, including the planned mix of units and the criteria used to screen the occupants of the new units. As shown in figure 1, a majority of the almost 49,000 residents that had been relocated from HOPE VI sites, as of June 30, 2003, moved to other public housing (about 50 percent) or received vouchers (about 31 percent). Additionally, approximately 6 percent were evicted, and about 14 percent were classified as “other,” which includes either residents who moved without giving notice or who moved out of public housing. Grantees lost track of some original residents for a number of reasons. HUD did not emphasize the need to track original residents until 1998 and did not require grantees to report the location of residents until 2000. Also, four of the 1996 grantees we interviewed stated that it was difficult to track residents who had left federally-assisted housing (i.e., were no longer in public housing or using a voucher).
In a June 2002 report to Congress, HUD acknowledged that efforts to track original residents during the development process had been uneven and stated that the agency and grantees were working to improve resident tracking. All but one of the 1996 grantees developed some means of tracking original residents, although three stated that they tracked only a subset of original residents, such as those still in public housing or using a voucher. The Housing Authority of Louisville created a database to track residents and used it to determine the status of the 1,304 families that resided at Cotter and Lang Homes prior to relocation. The housing authority concluded that 65 percent had been relocated to other public housing or given vouchers, and 33 percent had vacated Cotter or Lang prior to being relocated. It could not determine whether the remaining 2 percent had been relocated or vacated prior to relocation. In addition, two 1996 grantees took steps to locate original residents with whom they had lost contact. The Chicago Housing Authority hired a consultant to help it find relocated residents. To locate original residents who did not remain in public housing or take a voucher, the Spartanburg Housing Authority posted public notices stating that it was trying to find them and held meetings to obtain their addresses. Overall, grantees estimated that 46 percent of all the original residents of HOPE VI sites would return to the revitalized sites. However, as shown in figure 2, the percentage of original residents that were expected to return varied greatly from site to site. For example, at the 113 sites where reoccupancy was not yet complete, the planned reoccupancy rate was less than 25 percent at 23 sites; in contrast, the planned rate was 75 percent or greater at 24 sites. At the 39 sites where reoccupancy was complete, the actual reoccupancy rate was less than 25 percent at 17 sites and 75 percent or greater at 7 sites. 
Also, the percentage of residents that were expected to return decreased over time. As of September 30, 1999 (the earliest date for which we could obtain data), fiscal year 1993–1998 grantees estimated that 61 percent of the original residents would return to the revitalized sites. By June 30, 2003, the same grantees estimated that 44 percent of the original residents would return. Several factors may affect planned and actual reoccupancy rates, including the mix of units. To reduce the concentration of poverty at HOPE VI sites, HUD recommends a mix of public housing, affordable housing (low-income housing tax credit or other subsidized housing), and market-rate housing. As a result, grantees, as of June 30, 2003, had demolished or planned to demolish 76,393 public housing units and rebuild or renovate 44,781 replacement public housing units. At the 1996 sites, the percentage of public housing units being replaced ranged from 10 percent to 102 percent (see fig. 3). Resident and low-income housing advocates have criticized the HOPE VI program for reducing the number of public housing units. However, HUD, in its June 2002 report to Congress, pointed to the number of affordable units and vouchers that the program would provide. HUD also noted that over 20,000 of the units to be demolished were long-standing vacancies when the housing authorities applied for a HOPE VI grant, and that a majority of the vacant units were uninhabitable. As shown in figure 4, the percentage of revitalized units that are public housing units varied from site to site. Among the 143 sites where construction was not yet complete, as of June 30, 2003, public housing units constituted less than 50 percent of total units at 69 sites. At all but three of the 22 sites where construction was complete, 50 percent or more of the units were public housing units. Additionally, the number of planned public housing units decreased over time. 
As of September 30, 1999 (the earliest date for which we could obtain data), fiscal year 1993–1998 grantees estimated that they would construct 34,199 public housing units. By June 30, 2003, the same grantees estimated that they would construct 30,772 public housing units—about a 10-percent decrease. (This decrease in the number of planned public housing units may help explain why the percentage of residents that the grantees expected to return decreased over time, as discussed previously in this report.) Another factor that may affect reoccupancy is the screening criteria that original residents must meet to return to the revitalized sites. HUD allows grantees to determine the screening criteria for each site. Consequently, the screening criteria varied at the 1996 sites we visited. For example, the Charlotte Housing Authority required returning Dalton Village residents to participate in the family self-sufficiency program. Residents that do not successfully complete the program within 5 years but are not in violation of their lease will be transferred to another public housing site. In addition to participation in the family self-sufficiency program, the Spartanburg Housing Authority required returning Tobe Hartwell residents to agree to random drug testing. In contrast, there were no special criteria at some sites. In Tucson, there were no new screening criteria for the original residents of the Connie Chambers site. Under a settlement agreement, all of the residents of Henry Horner Homes in Chicago, Illinois, were eligible to return. Other factors that may affect reoccupancy include resident preferences and the time between relocation and completion of construction of the new units. According to three of the 1996 grantees, some relocated residents did not want to return to the revitalized sites because they preferred a voucher or were satisfied at their new location. 
Another 1996 grantee observed that, because of the length of time between relocation and construction, some residents did not want to move again. For the 1996 grantees, the average time between the completion of relocation and the projected or actual completion of construction was 86 months (times ranged from 26 months to 129 months). The extent to which grantees involved residents in the HOPE VI process has varied at the 1996 sites. HUD has provided guidance on resident involvement in its notices of funding availability (NOFAs) and grant agreements and on its Web site. The 1996 grantees have taken a variety of steps to involve residents in the HOPE VI process, ranging from holding informational meetings and soliciting input to involving residents in major decisions. HUD’s guidance on resident involvement in the HOPE VI process consists of annual NOFAs and grant agreements, as well as information located on its Web site. For example, the fiscal year 2002 NOFA stated that residents should be involved in the planning, proposed implementation, and management of revitalization plans. The NOFA required that, prior to applying for a HOPE VI revitalization grant, housing authorities conduct at least one training session for residents on the HOPE VI development process and at least three public meetings with residents and the broader community to involve them in developing revitalization plans and preparing the application. The fiscal year 2002 grant agreement (between HUD and the winning applicants) stated that grantees were required to foster the involvement of, and gather input and recommendations from, affected residents throughout the entire development process. Specifically, grantees were responsible for, among other things, holding regular meetings to provide the status of revitalization efforts, providing substantial opportunities for affected residents to provide input, and providing reasonable resources to prepare affected residents for meaningful participation in planning and implementation. 
HUD’s published guidance on resident involvement provides general guidelines that grantees must follow. For example, it states that full resident involvement is a crucial element of the HOPE VI program. HUD requires grantees to give all affected residents reasonable notice of meetings about HOPE VI planning and implementation and provide them with opportunities to give input. The guidance states that, at a minimum, grantees are required to involve residents throughout the entire HOPE VI planning, development, and implementation process and to provide information and training so that residents may participate fully and meaningfully throughout the entire development process. Although grantees are required to solicit and consider input from residents, the guidance makes it clear that the grantees have final decision-making authority. The amount and type of resident participation varied at the 1996 sites. All of the 1996 grantees held meetings to inform residents about revitalization plans and solicit their input. For example, residents of Dalton Village in Charlotte and Bedford Additions in Pittsburgh were asked to provide input on the design plans for the new sites. As the following examples illustrate, some of the grantees we visited took additional steps to seek a greater level of resident involvement in the HOPE VI process: In Tucson, the housing authority first asked residents to vote on the revitalization plan for the Connie Chambers site. Only after the residents expressed their support for the plan did the mayor and city council vote to submit the plan to HUD. The Chicago Housing Authority formed working groups at each of its HOPE VI sites to solicit input on plans and the selection of developers. These groups include representatives from the resident council, the housing authority, and city agencies. 
The Cuyahoga Metropolitan Housing Authority’s plans for its Riverview/Lakeview grant involved acquiring 54 off-site public housing units and, in many cases, the residents to be relocated selected the single-family homes that the housing authority then purchased for them. The Jacksonville and Chester Housing Authorities worked with residents to develop screening criteria used to select the occupants of the new development. The Holyoke Housing Authority asked residents to be part of its HOPE VI Implementation Team and the mayor’s HOPE VI Advisory Task Force. At one site we visited, the resident leader stated that residents were not adequately involved early in the HOPE VI process. Not until the residents at Robert Jervay Place in Wilmington, North Carolina, sent a letter to HUD describing the lack of progress at the site did the housing authority start moving forward with the project and involving residents in design meetings. In some cases, litigation or the threat of litigation has led to increased resident involvement. Due to a settlement agreement, any decisions regarding the revitalization of Henry Horner Homes in Chicago are subject to the approval of the Horner Resident Committee. In response to petitions that the attorney for St. Thomas residents filed with HUD’s Office of Fair Housing and Equal Opportunity, the Housing Authority of New Orleans agreed, according to the president of the St. Thomas resident council, to provide an additional 100 off-site public housing-eligible rental units and to change the screening criteria so that most of the original residents would be able to return. Grantees have provided a variety of community and supportive services, including case management and direct services such as job training programs. HUD data and information obtained during our site visits suggest that the supportive services yielded at least some positive outcomes. 
However, the data are limited and do not capture outcomes for all programs or reflect all services provided. Also, we could not determine the extent to which the HOPE VI program was responsible for these outcomes. Grantees are using HOPE VI and other funds to provide a variety of community and supportive services, including case management and direct services such as job training programs. In our November 2002 report on HOPE VI financing, we reported that the housing authorities that had been awarded grants in fiscal years 1993–2001 had budgeted a total of about $714 million for community and supportive services. In addition to their HOPE VI funds, grantees are encouraged to obtain in-kind, financial, and other types of resources necessary to carry out and sustain supportive service activities from organizations such as local boards of education, public libraries, private foundations, nonprofit organizations, faith-based organizations, and economic development agencies. Of the $714 million budgeted for community and supportive services, $418 million (59 percent) were HOPE VI funds, and $295 million (41 percent) were leveraged funds. Although the majority of funds budgeted overall for supportive services were HOPE VI funds, we noted that the amount of non-HOPE VI funds budgeted for supportive services had increased since the program’s inception. In recent years, HUD has stressed the importance of grantees using community and supportive services funding to provide case management services to residents. In fact, all of the 1996 grantees have used the case management approach. For example, the Housing Authority of New Orleans hired a social service provider located near the St. Thomas site to perform assessments and provide case management plans for residents. 
The Holyoke Housing Authority has three case managers, who help residents of its 1996 grant site find employment, acquire General Educational Development (GED) certificates, take English as a Second Language courses, and receive homeownership counseling. The Chester Housing Authority established a “one-stop shop” at a local hospital, which serves as the coordinating point for all programs and partners servicing the authority’s residents. Grantees have also used funds set aside for community and supportive services to construct facilities where services are provided by other entities. For example, the Charlotte Housing Authority spent $1.5 million in HOPE VI funds to construct an 11,000-square-foot community and recreational center consisting of a gymnasium, four classrooms, and a computer lab near its 1996 grant site. In exchange, the residents will receive $60,000 in services annually from the center, which is run by the Mecklenburg County Parks and Recreation Department. The Tucson Community Services Department, which serves as Tucson’s public housing authority, used some of its 1996 HOPE VI funds to fund the construction of a child development center and learning center. Two day care programs—one operated by Head Start and the other by a local nonprofit organization—are operating in the child development center, and a computer library run by the Tucson-Pima Public Library is operating in the learning center. The Spartanburg Housing Authority used a portion of its HOPE VI funds to build a community center containing a computer center, health clinic, and gymnasium. The Spartanburg Technical College provides adult and student computer training, and the University of South Carolina Spartanburg School of Nursing performs health assessments and tracking at the center. Grantees also provided direct services such as computer and job training. 
For instance, the New York City Housing Authority instituted a computer incentive program that provides a personal computer system to Arverne and Edgemere residents who either work 96 hours volunteering on HOPE VI recruiting and other HOPE VI activities or participate in a HOPE VI training program. HOPE VI residents enrolled in the San Francisco Housing Authority’s family self-sufficiency program can receive up to $1,200 per household to participate in training for various trades. The Detroit Housing Commission formed a number of partnerships to provide training in retail sales, computers, manufacturing, and child care to Herman Gardens residents. For example, 18 different unions formed a partnership that offers a preapprenticeship program. Limited HUD data on all 165 grants awarded through fiscal year 2001 and information collected during our visits to the 1996 sites indicated that HOPE VI community and supportive services have achieved or contributed to positive outcomes. We recommended in July 1998 that HUD develop consistent national, outcome-based measures for community and support services at HOPE VI sites. Since June 2000, HUD has used its HOPE VI reporting system to collect data from grantees on the major types of community and supportive services they provide and the outcomes achieved by some of these services. HUD collects data on services provided to both original and new residents. According to the data, as of June 30, 2003, for the 165 sites awarded grants through fiscal year 2001, about 45,000 of the approximately 70,000 original residents potentially eligible for community and supportive services made up the grantees’ caseload. The remaining original residents were not part of the caseload because, among other things, they declined or no longer needed services, or the grantee could not locate them. Additionally, about 8,000 new residents were included in the grantees’ caseload, bringing the total to approximately 53,000. 
As shown in table 1, the community and supportive services programs in which the most residents enrolled, as of June 30, 2003, were employment and counseling programs. HUD also collects data on the number of residents that have completed some of these programs. For example, about 55 percent of the residents that enrolled in job skills training programs, as of June 30, 2003, completed the program. About 35 percent of the residents that signed up for high school or equivalent education classes completed them. HUD also collects data on selected outcomes such as employment and homeownership, although the outcomes cannot always be attributed to participation in or completion of HOPE VI programs or services. Other factors such as welfare-to-work requirements may have contributed to these outcomes. The data collected, as of the quarter ending June 30, 2003, showed that over 1,000 residents obtained jobs in that quarter. Overall, 22 percent of the grantees’ caseload was employed, and 16 percent had been employed 6 months or more. In addition, 344 resident-owned businesses had been started, as of June 30, 2003, and 967 residents had purchased a home. HUD has made modifications to the community and supportive services data that it collects and worked with grantees to help them better understand their reporting responsibilities. Seven of the 1996 grantees stated that they were not always certain about what to report, and 11 stated that the system did not reflect some of the services, such as those for youth and seniors, that they provided. To improve reporting, HUD hired the Urban Institute to help identify reporting problems and make refinements to the system. Also, HUD staff and one of two outside technical assistance providers review the data provided each quarter for consistency. As a result, the data are more reliable now than they were initially, according to the HOPE VI official who oversees community and supportive services. 
The same official stated that, while HUD encourages grantees to provide services to youth and seniors, it does not collect data on these services in order to limit the reporting burden on grantees. Limited data collected during our site visits also suggest that community and supportive services have helped achieve some positive outcomes. For example, the Housing Authority of the City of Pittsburgh offered in-home health worker training courses, in which 49 Bedford Additions residents have participated since October 2000. Thirty-one of the 49 participants obtained employment, and 12 were still employed, as of September 2003. In Louisville, 114 former Cotter and Lang residents had enrolled in homeownership counseling, as of June 2003, 41 had completed the counseling, and 34 had purchased a home. Between January and June 2003, 76 St. Thomas residents in New Orleans obtained jobs, 12 earned GEDs, and 5 became homeowners. Finally, residents of Arverne and Edgemere Houses in New York City had earned 242 computers, as of July 2003, as part of the computer incentive program described in the previous section. According to our analysis of census and other data, the 20 neighborhoods in which the 1996 HOPE VI sites are located have experienced improvements in a number of indicators used by researchers to measure neighborhood change, such as educational attainment levels, average household income, and average housing values. However, for a number of reasons, we could not determine the extent to which the HOPE VI program was responsible for these changes. For example, we relied primarily on decennial census data (adjusted for inflation), comparing measures from 1990 with those of 2000. However, the HOPE VI sites were at varying stages of completion in 2000. We also used data available under the Home Mortgage Disclosure Act (HMDA) for numbers of home mortgage originations in 1996 and 2001. 
Further, a number of factors—such as changes in national or regional economic conditions—can influence the indicators we compared. In an attempt to more directly gauge the influence of the HOPE VI program, we compared each of four selected HOPE VI neighborhoods with a comparable non-HOPE VI public housing neighborhood in the same city. Some variables indicated greater improvements in the HOPE VI neighborhoods than their comparable neighborhoods, such as in mortgage lending activity, but other variables indicated inconsistent results among the sites. We also found that the demolition of old public housing alone may influence changes in neighborhoods. Analysis of six HOPE VI neighborhoods where the original public housing units have been demolished, but no on-site units have been completed, also shows improvements in educational attainment, unemployment rates, income, and housing. Finally, other studies have shown similar findings in HOPE VI neighborhoods. The 20 neighborhoods in which the 1996 HOPE VI sites are located have experienced positive changes in education, income, and housing indicators as measured by comparing 1990 and 2000 Census and 1996 and 2001 HMDA data. When using census data, we defined the neighborhood as consisting of the census block group or groups in which a public housing site is located and the immediately adjacent census block groups. When using HMDA data, which are not available at the census block group level, we defined the neighborhood as the census tract in which a public housing site is located (see app. II). Because 2000 data are the most recent census data available, they reflect the neighborhood conditions at the 1996 HOPE VI sites, which were at various stages of completion at that time. Finally, not all of the changes in census data from 1990 to 2000 were statistically significant (see app. III). Moreover, at five sites, revitalization work had begun prior to receipt of HOPE VI funds with various non-HOPE VI funding sources. 
As a part of its fiscal year 2001 and 2002 performance goals, HUD specified that neighborhoods with substantial levels of HOPE VI investment would show improvements in such dimensions as household income, employment, homeownership, and housing investment. As a result, we used similar indicators, as well as other indicators generally used by researchers, to analyze neighborhood changes. However, it was not possible to determine the extent to which the HOPE VI program was responsible for the changes in these neighborhoods. Many factors, such as national and regional economic trends, can also affect neighborhood conditions. According to experts, it is extremely rare for any one program or actor to be able to change a neighborhood single-handedly. Our analysis of census and HMDA data for the 20 1996 HOPE VI neighborhoods showed the following positive changes:

- In 18 of the 1996 HOPE VI neighborhoods, the percentage of the population with a high school diploma or equivalent increased, from a minimum of 4 percentage points in Detroit to a maximum of 21 percentage points in Baltimore.
- In 11 of the HOPE VI neighborhoods, the percentage of the population with an associate’s degree or better increased, from a minimum of 3 percentage points in Tucson to a maximum of 14 percentage points in San Francisco.
- Average household income increased in 15 of the 1996 HOPE VI neighborhoods, from a minimum of 18 percent in Detroit to a maximum of 115 percent in Chicago (Henry Horner).
- The percentage of the population in poverty decreased in 14 of the HOPE VI neighborhoods, from a minimum of 4 percentage points in Atlanta and Detroit to a maximum of 20 percentage points in Baltimore. Despite these decreases, 9 of the HOPE VI neighborhoods remained “high-poverty neighborhoods” (having poverty rates of 30 percent or more), and 5 remained “extremely high-poverty neighborhoods” (having poverty rates of 40 percent or more). According to the Urban Institute, areas where 30–40 percent of the population lives in poverty represent significantly more deteriorated and threatening living environments than those with poverty rates below those thresholds.
- Average housing values increased in 13 of the 20 HOPE VI neighborhoods, ranging from a minimum of 11 percent in Tucson to a maximum of 215 percent in Chicago (Henry Horner). It is generally accepted among researchers that housing values represent the best available index of expectations regarding future economic activity in an area.
- Rental housing costs increased in 15 of the HOPE VI neighborhoods, from a minimum of 9 percent in Tucson to a maximum of 61 percent in Louisville. Increasing rental-housing costs are an indication that there is a greater demand for housing in that area.
- The number of mortgage loans originated in 10 of the HOPE VI neighborhoods increased between 1996 and 2001. These increases ranged from a minimum of 21 percent in Holyoke to a maximum of 728 percent in Charlotte, where the number of loans originated increased from 7 to 58.

However, some of the HOPE VI neighborhoods showed negative changes for certain indicators. For example:

- The unemployment rate rose at 4 of the 20 sites, from a minimum of 2 percentage points in Charlotte to a maximum of 8 percentage points in Kansas City.
- In the Holyoke HOPE VI neighborhood, average housing values declined by 26 percent.
- The number of mortgage loans originated in seven of the HOPE VI neighborhoods decreased between 1996 and 2001. These decreases ranged from a minimum of 5 percent in Atlanta to a maximum of 58 percent in Wilmington.

Appendix III shows the census and HMDA data for all of the indicators we analyzed for each of the 1996 HOPE VI sites. 
Comparison of four HOPE VI neighborhoods with neighborhoods in which comparable public housing sites are located (comparable neighborhoods) showed that HOPE VI neighborhoods experienced greater positive changes in some, but not all, of the variables that we evaluated. We conducted this comparative analysis to attempt to better isolate the effects of the HOPE VI program, although it was not possible to directly link changes to the HOPE VI program (see app. II). In addition, 2000 census data may not reflect some of the changes that could occur over time in these neighborhoods because the demolition and new construction at these sites did not begin until the late 1990s. Moreover, in these four HOPE VI neighborhoods, the units put back on-site were all public housing and, thus, not representative of the majority of HOPE VI projects, which are mixed-income. Three of the four HOPE VI neighborhoods experienced greater increases in mortgage loan originations than their comparable neighborhoods, according to HMDA data. From 1996 to 2001, the number of loans originated for home purchases increased 25 percent in Kansas City, 50 percent in Jacksonville, and 166 percent in Chester, while the number decreased in the comparable neighborhoods. In Spartanburg, the number of mortgage loans originated in the HOPE VI neighborhood decreased 33 percent, in contrast to a 46-percent decrease in the comparable neighborhood. While crime data summaries were not available at the neighborhood level, we were able to obtain summaries for each of the sites being compared. These summaries show that three of the four HOPE VI sites experienced greater decreases in crime than their comparable sites (see app. III). Although incidents of crime generally decreased at both the HOPE VI and comparable site in Spartanburg, South Carolina, they decreased to a greater extent at the HOPE VI site. 
In both Chester, Pennsylvania, and Jacksonville, Florida, crime decreased at the HOPE VI sites while it increased at the comparable sites. In contrast, crime incidents at Kansas City’s HOPE VI site have generally increased. According to officials from the Housing Authority of Kansas City, Missouri, in 1996 the HOPE VI site had a 24-percent occupancy rate because most of the residents had already been relocated, and a 98-percent occupancy rate in 2002. They attribute the increase in crime during this time period to this increased occupancy rate. These officials also reported that crimes per household decreased from 0.51 to 0.31 at the HOPE VI site from 1996 to 2002. Comparison of census data for the HOPE VI and comparable neighborhoods between 1990 and 2000 showed some positive and some negative changes. However, we were able to compare only a small number of variables because the differences in the changes for others were not statistically significant (see fig. 5). Kansas City, Missouri, had the largest number of statistically significant differences between the HOPE VI and comparable neighborhoods: the differences were statistically significant for four variables. It should be noted that Kansas City’s HOPE VI neighborhood has been changing for a longer period of time than the other HOPE VI neighborhoods in our analysis. The first phase of revitalization in this neighborhood began in May 1995 with non-HOPE VI funds, whereas the other three sites in our comparative analysis did not begin revitalization activities until after being awarded HOPE VI revitalization grant funds in 1996. 
While the Kansas City HOPE VI neighborhood experienced greater positive changes in new construction, it also experienced a greater increase in unemployment, and a greater decrease in the percentage of the population with both a high school diploma and an associate’s degree or better. In Spartanburg, South Carolina, the differences between the HOPE VI and its comparable neighborhood were statistically significant for two variables. The percentage of the population with a high school diploma increased to a greater extent in the HOPE VI neighborhood than the comparable neighborhood, while new construction decreased to a greater extent in the HOPE VI neighborhood relative to the comparable neighborhood. In addition, in Jacksonville, Florida, the HOPE VI neighborhood experienced a greater increase in new construction relative to its comparable neighborhood. Even in the six 1996 HOPE VI neighborhoods where the original public housing units had been demolished, but no units had been completed on- site as of December 2002, a comparison of 1990 and 2000 Census data showed that positive changes occurred in educational attainment, unemployment, income, and housing. For example: The percentage of the population with a high school diploma or equivalent increased in all six neighborhoods, from a minimum of 4 percentage points in Detroit to a maximum of 21 percentage points in Baltimore. Similarly, the percentage of population with an associate’s degree or better increased in five neighborhoods, from a minimum of 4 percentage points in Atlanta and Baltimore to a maximum of 14 percentage points in San Francisco. The unemployment rate decreased in four of the six HOPE VI neighborhoods, from a minimum of 4 percentage points in Cleveland to a maximum of 6 percentage points in Baltimore and Detroit. Average household income increased in all of the neighborhoods, from a minimum of 18 percent in Detroit to a maximum of 46 percent in Cleveland. 
The poverty rate decreased in five of the neighborhoods, from a minimum of 4 percentage points in Atlanta and Detroit to a maximum of 20 percentage points in Baltimore. Average housing values increased in four of the neighborhoods, from a minimum of 26 percent in Atlanta to a maximum of 116 percent in Detroit. The percentage of occupied housing units increased in four neighborhoods, from a minimum of 4 percentage points in Atlanta to a maximum of 8 percentage points in Cleveland and New Orleans. Rental-housing costs increased in all six neighborhoods, from a minimum of 11 percent in New Orleans to a maximum of 38 percent in Baltimore. In contrast, the level of mortgage lending activity decreased in four of these neighborhoods between 1996 and 2001, ranging from 5 percent in Atlanta to 57 percent in Baltimore. Moreover, new construction decreased by 2 percentage points in Cleveland, New Orleans, and San Francisco between 1990 and 2000. We cannot attribute these changes solely to the HOPE VI program. To the extent that they do reflect the program’s influence, however, they suggest that demolition of old, deteriorated public housing alone may influence surrounding neighborhoods. For example, average housing value and average household income increased even though no new units had been constructed. It is possible that the HOPE VI program influenced these indicators by removing blight from the neighborhoods and temporarily relocating large numbers of low-income households during demolition. Studies by housing and community development researchers have shown positive changes in HOPE VI neighborhoods as reflected in income, employment, community investment, and crime indicators. In reviewing the literature, we identified one report that discussed changes in the neighborhoods surrounding eight HOPE VI sites and two reports that evaluated changes at two of the sites we visited. 
While each of these studies covered only a small number of HOPE VI neighborhoods, they showed positive changes: Per capita incomes in eight selected HOPE VI neighborhoods increased an average of 71 percent between 1989 and 1999, compared with 14.5 percent for the cities in which these sites are located. The percentage of low-income households living in eight selected HOPE VI neighborhoods decreased from 82 to 69 percent from 1989 to 1999. Median income increased 80 percent from 1990 to 2000 in Chester’s 1996 HOPE VI neighborhood. Unemployment rates decreased from 24 to 15 percent between 1989 and 1999 in eight selected HOPE VI neighborhoods. The number of small business loans closed in seven selected HOPE VI neighborhoods grew by an average of 248 percent from 1998 to 2001, compared with 153 percent in the neighborhoods’ respective counties. The number of new business licenses issued in Tucson’s 1996 HOPE VI neighborhood increased from 6 in 1996 to 28 in 2002. In addition, the number of business closures in the neighborhood decreased from 23 in 1996 to 15 in 2001. The vacancy rate decreased from 9 to 6 percent between 1990 and 2000 in Chester’s 1996 HOPE VI neighborhood, while the vacancy rate for the city increased from 12 to 14 percent during the same period. Overall and violent crime rates decreased by 48 and 68 percent, respectively, between 1993 and 2001 in four of the HOPE VI neighborhoods where crime data were available. Overall and violent crimes decreased by 25 and 38 percent, respectively, during the same time period for the cities in which these sites are located. We provided a draft of this report to HUD for its review and comment. We received comments from the Assistant Secretary for Public and Indian Housing (see app. IV), who thanked GAO for its thorough review of the HOPE VI program and stated that HUD regards our study as an important tool in its continuing efforts to improve the program. 
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Chairman, Subcommittee on Housing and Transportation, Senate Committee on Banking, Housing, and Urban Affairs; the Chairman and Ranking Minority Member, Senate Committee on Banking, Housing, and Urban Affairs; the Chairman and Ranking Minority Member, Subcommittee on Housing and Community Opportunity, House Committee on Financial Services; and the Chairman and Ranking Minority Member, House Committee on Financial Services. We also will send copies to the Secretary of Housing and Urban Development and the Director of the Office of Management and Budget. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please call me at (202) 512-8678 if you or your staff have any questions about this report. Key contributors to this report are listed in Appendix V. Our objectives were to examine (1) the types of housing to which the original residents of HOPE VI sites were relocated and the number of original residents that grantees expect to return to the revitalized sites, (2) how the fiscal year 1996 grantees have involved residents in the HOPE VI process, (3) the types of community and supportive services that have been provided to residents and the results achieved, and (4) how the neighborhoods surrounding the sites that received HOPE VI grants in fiscal year 1996 have changed. To accomplish these objectives, we analyzed the data contained in HUD’s HOPE VI reporting system on the 165 sites that received revitalization grants in fiscal years 1993-2001. 
To assess the reliability of the data in HUD’s HOPE VI reporting system, we interviewed the officials that manage the system; reviewed information about the system, including the user guide, data dictionary, and steps taken to ensure the quality of these data; performed electronic testing to detect obvious errors in completeness and reasonableness; and interviewed grantees regarding the data they reported. We determined that these data were sufficiently reliable for the purposes of this report. For the second and fourth objectives, we then focused on and visited the 20 sites in 18 cities that received HOPE VI revitalization grants in fiscal year 1996. We selected the 1996 grants because they were the first awarded after HUD issued a rule allowing revitalization to be funded with a combination of public and private funds, which has become the HOPE VI model. We also analyzed Census and Home Mortgage Disclosure Act data and reviewed crime data summaries. In addition, we interviewed the HUD headquarters officials responsible for administering the HOPE VI program. To determine the types of housing to which the original residents of HOPE VI sites were relocated and the number of original residents that grantees expect to return to the revitalized sites, we analyzed the relocation and reoccupancy data in HUD’s HOPE VI reporting system. Specifically, we determined what percentage of the original residents that had been relocated as of June 30, 2003, were relocated to other public housing, given vouchers, or evicted. We also determined the percentage of original residents overall and at each site that was expected, as of June 30, 2003, to return to the revitalized sites. At the 113 sites where reoccupancy was not yet complete, we divided the number of original residents the grantee estimated would return by the total number of residents the grantee estimated would be relocated. 
At the 39 sites where reoccupancy was complete, we divided the number of original residents who actually returned by the total number of residents relocated. We excluded 10 of the 165 sites from our analysis because they did not involve relocation, and an additional three sites because the reoccupancy data reported as of June 30, 2003, were incorrect. To determine how reoccupancy estimates changed over time, we compared the data in HUD’s HOPE VI reporting system as of September 30, 1999 (the earliest date for which we could obtain data) with data as of June 30, 2003. Finally, to determine what factors affected whether residents returned to the revitalized sites, we interviewed HUD officials, public housing authority (PHA) officials responsible for managing the fiscal year 1996 grants, and resident representatives at 19 of the 20 fiscal year 1996 sites. For all 165 sites, we used data from HUD’s HOPE VI reporting system to calculate the percentage of revitalized units that would be units under an annual contributions contract (that is, public housing units). At the 143 sites where construction was not yet complete, we divided the number of planned public housing units by the total number of planned units. At the 22 sites where construction was complete, we divided the actual number of public housing units by the total number of units. To determine how the number of planned public housing units has changed over time, we compared the number of public housing units planned as of September 30, 1999 (the earliest date for which we could obtain data) with data as of June 30, 2003. For 19 of the 20 1996 HOPE VI sites, we used data from HUD’s HOPE VI reporting system and data collected during our site visits to calculate the percentage of public housing units being replaced. 
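The reoccupancy and public housing unit percentages described above are simple ratios; a minimal sketch of the calculation, using hypothetical site counts rather than actual HOPE VI reporting system data, is:

```python
def share(numerator, denominator):
    """Return a ratio as a percentage, rounded to the nearest whole percent."""
    return round(100 * numerator / denominator)

# Site where reoccupancy is not yet complete: estimated returnees over
# estimated relocations (hypothetical figures).
print(share(230, 500))   # 46 percent expected to return

# Site where construction is complete: actual public housing units over
# total units built (hypothetical figures).
print(share(120, 400))   # 30 percent public housing
```

The same ratio serves both calculations; only the inputs change, from grantee estimates at incomplete sites to actual counts at completed sites.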
To determine how the fiscal year 1996 grantees have involved residents in the HOPE VI process, we obtained and reviewed HUD’s guidance on resident involvement, including the portions of the fiscal year 2002 notice of funding availability and fiscal year 2002 grant agreement that address resident involvement. For the 1996 grants, we interviewed PHA officials and resident representatives to determine the extent to which residents had been involved in the HOPE VI process. Finally, we interviewed two resident advocate groups—Everywhere and Now Public Housing Residents Organizing Nationally Together and the National Low Income Housing Coalition—regarding resident involvement and other resident issues. To identify the types of community and supportive services that have been provided to residents of HOPE VI sites, we obtained and reviewed HUD’s draft guidance on community and supportive services. To obtain specific examples of community and supportive services provided at the fiscal year 1996 sites, we interviewed PHA officials and obtained and reviewed community and supportive services plans. We also obtained data from HUD’s HOPE VI reporting system to determine the number of residents that have participated in different types of community and supportive services. To determine the results that have been achieved, we obtained data from HUD’s HOPE VI reporting system on selected outcomes, including the number of new job placements and the number of residents that have purchased homes. We also obtained information during our site visits that documents the results achieved by various community and supportive services programs. To determine how the neighborhoods surrounding the sites that received a 1996 HOPE VI revitalization grant have changed, we analyzed nine key variables from 1990 and 2000 census data for each neighborhood, including the average household income, the percentage of the population in poverty, and the percentage of occupied housing units. 
When using census data, we defined a “HOPE VI neighborhood” as consisting of the census block group in which the original public housing site was located, as well as all of the immediately adjacent census block groups (see app. II). We also analyzed changes in mortgage lending activity using 1996 and 2001 Home Mortgage Disclosure Act (HMDA) data. These data are only available by census tract, which encompasses a larger area than a census block group. As a result, for this analysis we used only the census tract in which the original public housing site was located as the proxy for the neighborhood. In addition to using the census and HMDA data to analyze changes in the neighborhoods around all 20 1996 HOPE VI sites, we performed additional analysis for those sites where demolition, but no on-site construction, was complete and those that were the closest to completion. To explore the change in neighborhoods where only demolition had occurred, we used the HOPE VI reporting system to identify the six sites that had demolished all of the original public housing units, but not completed any on-site construction as of December 31, 2002. We also used the HOPE VI reporting system to identify the five sites where on-site construction was 75 percent or more complete as of December 31, 2002. We then compared changes in census, HMDA, and summary crime data for four of the five HOPE VI neighborhoods with changes in comparable public housing neighborhoods—neighborhoods containing public housing sites that PHAs identified as similar in condition to the HOPE VI sites prior to revitalization. However, these comparisons are not perfect. For example, in two cases the HOPE VI sites are about 10 years older than the comparable sites. In Chester, Pennsylvania, the comparable site was rehabilitated in 1997, and units were enlarged. 
We obtained crime data summaries for the Lamokin Village (HOPE VI) and Matopos Hills (comparable) sites from the Chester Housing Authority; for the Durkeeville (HOPE VI) and Brentwood Park (comparable) sites from the Jacksonville Housing Authority; for the Theron B. Watkins (HOPE VI) and West Bluff (comparable) sites from the Housing Authority of Kansas City, Missouri; and for the Tobe Hartwell/Extension (HOPE VI) and Woodworth Homes (comparable) sites from the Spartanburg Housing Authority. Each of these PHAs obtained site-specific crime data summaries from its local police department. We reviewed the crime data summaries for reasonableness and followed up on anomalies. Because we did not have disaggregated crime data directly from each city’s police department, we were unable to perform tests of statistical significance on the summary crime trends. Although we did not do extensive testing of the summary crime data, we believe these data are sufficiently reliable for the informational purposes of this report. Finally, we obtained and reviewed reports by various universities and private institutions that discussed the social and economic impacts of the HOPE VI program. We focused on one report that discussed changes in the neighborhoods surrounding eight sites and two reports that evaluated changes at sites we visited. See appendix II for more detailed information on the methodology we used to determine how the 1996 HOPE VI neighborhoods have changed. We performed our work from November 2001 through October 2003 in accordance with generally accepted government auditing standards. This appendix provides detailed information on the methodologies we used to analyze neighborhood changes observed in the HOPE VI neighborhoods (for recipients of 1996 HOPE VI revitalization grants) and, where applicable, four comparison neighborhoods. 
To analyze changes observed in HOPE VI neighborhoods, we first defined HOPE VI neighborhoods as the census block group in which the public housing site was located and the adjacent census block groups. This definition allowed us to examine changes observed in HOPE VI neighborhoods and the extent to which some of the goals of the HOPE VI program may have been addressed, such as improvements in household income, employment, and housing investment. Census block groups were used, as this geographic area was likely to better represent the area of the housing site and its adjacent neighborhood than a larger census entity, such as a census tract, would have. That is, use of block groups lessened the likelihood that both community residents and characteristics that are not influenced by the housing development were included in the analyses. The block groups in which HOPE VI sites were located, and the adjacent block groups, were identified by electronically mapping the addresses using MapInfo. Next, we obtained 1990 and 2000 census data on nine population and housing characteristics for the census block groups in which the HOPE VI sites were located and the adjacent block groups. In order to make valid and reliable comparisons between decennial censuses, we had to ensure that the geographic regions in 1990 and in 2000 shared the same, or nearly the same, physical boundaries and land area. We visually inspected a map of the 1990 boundaries for each of the block groups contained in a HOPE VI neighborhood and compared them with the 2000 boundaries. In some cases, we had to reclassify block groups in order to maintain comparability between the two census years. For example, in 2000, Wilmington's Jervay Place site was located in block group 11002. One of its adjacent block groups, part of the Jervay Place “neighborhood” (as defined by our study), was block group 10001. 
However, in 1990, the same area constituting block group 10001 had been partitioned into two block groups, 10001 and 10003, of which only 10003 was adjacent to the site block group, 11002. In order to have consistent and comparable geographic areas, we added the respondents of 1990 block group 10001 and their characteristics into the calculations for the 1990 descriptive statistics. We also obtained 1996 and 2001 Home Mortgage Disclosure Act (HMDA) data from the Federal Financial Institutions Examination Council. Specifically, for each of the 20 HOPE VI neighborhoods and the four comparable neighborhoods, we compared the number of loans originated in 1996 with the number originated in 2001. However, the smallest geographic unit for which HMDA data are available is the census tract. Therefore, analyses of these data were conducted at the census tract level, and each neighborhood was defined as consisting of the census tract in which the site was located. We reviewed information related to the census data variables and performed electronic data testing to identify obvious gaps in data completeness or accuracy. We determined that the data were sufficiently reliable for use in this report. We conducted a similar review of information related to the HMDA variables. Finally, since our electronic data testing found no issues affecting the use of these data, we concluded that the data elements being used were sufficiently reliable for the purposes of the report. In evaluating community development initiatives such as HOPE VI, we note that it is difficult to determine the impact of a program or to conclude definitively that a program caused a specified outcome to occur. 
For example, several factors—such as other community initiatives, re-emphasis on the Community Reinvestment Act (Zielenbach, 2002), or national trends in the economy and unemployment (Zielenbach, 2002)—in conjunction with HOPE VI efforts may have contributed to observed changes in the geographic region surrounding a HOPE VI site. To attempt to isolate the influence of HOPE VI activities, an ideal evaluation research design would include the identification of a neighborhood identical to the HOPE VI community based on key characteristics (such as size, ethnic distribution, income distribution, number of social institutions, crime rates) and would then use an in-depth, longitudinal case study to track changes in the HOPE VI and comparison communities from the inception of HOPE VI work until its completion. While we recognized that such a method would permit the greatest understanding of community changes and their relationship to HOPE VI, we could not use it for two reasons. First, in-depth, longitudinal case studies of multiple HOPE VI sites would be very resource-intensive and were outside the scope of this study. Second, it is unlikely that we or other analysts could identify a series of identical neighborhoods given the natural variation in population, business, and housing development characteristics that occurs within a city over the length of a longitudinal study. Therefore, in an attempt to limit the potential factors that could explain changes observed in HOPE VI communities, we worked with the public housing authorities that managed the four 1996 HOPE VI sites that had completed 75 percent or more of their on-site construction, as of December 31, 2002, to identify comparison neighborhoods. These comparison neighborhoods contained public housing sites that were comparable to the original HOPE VI sites in terms of age, size, or condition. 
Each of these comparison sites was located in the same city as the HOPE VI site, but had not received any HOPE VI revitalization funding. The decennial nature of the Census also constrained our analysis. The HOPE VI sites were awarded their grants in 1996; however, relocation did not begin at most sites until 1997 or later. Demolition did not begin at over half of the sites until 1999 or later, and construction did not begin at the majority of sites until 2000 or later. In addition, as of June 30, 2003, the majority of sites had not completed construction. Thus, data collected during the 2000 Census may not have detected neighborhood changes. However, the 2000 data are the most current available. Similarly, at the time of our analysis, 2001 HMDA data were the most current available. Despite these limitations, we believe that we have constructed a design that is as methodologically sound as possible given resource and data constraints and the varying stages of implementation of HOPE VI plans. To analyze census data, we selected nine population and housing characteristics. Those characteristics were average household income, percent of the population living in poverty, percent unemployed, percent of the population with a high school degree, average housing value, percent of housing units constructed within the last 10 years, percent of occupied housing units, average gross rent, and population total. For the six percentage characteristics, we calculated the change, in percentage points, as the difference between the 1990 sample estimate and the 2000 sample estimate. For the three average characteristics, we calculated the percent change by dividing the difference between the 1990 and 2000 sample estimates by the 1990 sample estimate. In our comparison of four HOPE VI sites with comparable public housing sites, we also analyzed census data on population totals for each neighborhood. 
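The two kinds of change calculations described above can be sketched as follows; the figures in the example are hypothetical illustrations, not actual census estimates:

```python
def percentage_point_change(pct_1990, pct_2000):
    """For the six percentage characteristics (e.g., percent unemployed):
    the simple difference between the two estimates, in percentage points."""
    return pct_2000 - pct_1990

def percent_change(avg_1990, avg_2000):
    """For the three average characteristics (e.g., average household income):
    the difference relative to the 1990 estimate, in percent."""
    return 100 * (avg_2000 - avg_1990) / avg_1990

# Hypothetical neighborhood: unemployment fell from 24 to 15 percent,
# and average household income rose from $21,000 to $28,000.
print(percentage_point_change(24.0, 15.0))     # -9.0 percentage points
print(round(percent_change(21000, 28000), 1))  # 33.3 percent
```

The distinction matters because a percentage characteristic is already on a 0-to-100 scale, so a difference is directly interpretable, while a dollar-valued average must be divided by its base-year value to yield a comparable percent change.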
Populations were based on a 100 percent count of the individuals living in the block groups. With the exception of the population total, each of the census variables is based on sample data. Since this sample is only one of a large number of samples that might have been drawn and each sample could have provided different estimates, we express our confidence in the precision of this particular sample’s results as a 95 percent confidence interval (for example, plus or minus 7 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples that could have been drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values for the population. We used the methodology described in the Census Bureau’s documentation for the 1990 and 2000 Censuses (Appendix C: Accuracy of the Data, 1990, and Chapter 8: Accuracy of the Data, 2000) to calculate standard errors and confidence intervals. Essentially, we used the Census Bureau’s formulas to compute the standard error for the sample estimate under the assumption of simple random sampling. We then multiplied this result by a design effect factor to adjust for the survey’s sample design to give the appropriate standard error. In order to determine whether the 1990 and 2000 percent change estimates were statistically significant, we interpreted the confidence interval. For example, if the confidence interval includes zero, then the difference between the 1990 and 2000 estimates is not considered a statistically significant difference. If the confidence interval does not include zero, then the percent change between the 1990 and 2000 estimates is considered statistically significant (see app. III). 
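In simplified form, the significance test described above works as sketched below. The standard error formula here assumes simple random sampling inflated by a design factor, and the design factor, sample sizes, and estimates in the example are hypothetical; the actual formulas and factors are those given in the Census Bureau's documentation.

```python
import math

def ci_of_change(p1990, n1990, p2000, n2000, design_factor, z=1.96):
    """95 percent confidence interval for the change between two sample
    percentages. Each standard error is computed under simple random
    sampling and multiplied by a design factor to adjust for the
    survey's sample design."""
    se1 = design_factor * math.sqrt(p1990 * (100 - p1990) / n1990)
    se2 = design_factor * math.sqrt(p2000 * (100 - p2000) / n2000)
    change = p2000 - p1990
    se_change = math.sqrt(se1 ** 2 + se2 ** 2)
    return change - z * se_change, change + z * se_change

def is_significant(lower, upper):
    """A change is statistically significant when its interval excludes zero."""
    return not (lower <= 0.0 <= upper)

# Hypothetical: unemployment fell from 24 to 15 percent between samples
# of 800 and 900 persons, with an assumed design factor of 1.5.
lower, upper = ci_of_change(24.0, 800, 15.0, 900, design_factor=1.5)
print(is_significant(lower, upper))  # True: the interval excludes zero
```

If the resulting interval had instead straddled zero, the 1990-to-2000 change would not be considered statistically significant.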
We also calculated 95 percent confidence intervals to determine whether there were statistically significant differences between the HOPE VI and non-HOPE VI (comparable) neighborhoods in their percent changes from 1990 to 2000. In addition to sampling errors, sample data (and 100 percent data) are subject to nonsampling errors, which may occur during the operations used to collect and process census data. Examples of nonsampling errors are not enumerating every housing unit or person in the sample, failing to obtain all required information from a respondent, obtaining incorrect information, and recording information incorrectly. Operations such as field review of enumerators’ work, clerical handling of questionnaires, and electronic processing of questionnaires also may introduce nonsampling errors in the data. The Census Bureau discusses sources of nonsampling errors, and its efforts to control them, in detail in its documentation. To analyze the HMDA data for each of the 20 HOPE VI sites and the four comparable sites, we compared the number of loans originated in 1996 with the number originated in 2001 (see app. III). The HMDA data contain all of the loans originated in these time periods; therefore, they are not a sample, and confidence intervals did not need to be computed for these data. We obtained 1990 and 2000 census data for each neighborhood in which a 1996 HOPE VI site is located, as well as for four neighborhoods in which public housing that is comparable to selected 1996 HOPE VI sites is located. When using census data, we defined a neighborhood as consisting of the census block group in which a site is located, as well as the adjacent census block groups. We selected nine census data variables, which are generally used by researchers to measure neighborhood change, and analyzed the changes in these variables from 1990 to 2000 (see tables 2 through 4). We also determined whether these changes were statistically significant. 
For these same 20 HOPE VI and four comparable neighborhoods, we obtained 1996 and 2001 Home Mortgage Disclosure Act (HMDA) data. With these data, we compared the number of loans originated for the purchase of a home in 1996 with the number originated in 2001 (see table 5). When using HMDA data, we defined each neighborhood as consisting of the census tract in which each site is located. We also obtained 1996 and 2002 crime data summaries for each of the four HOPE VI sites that had completed 75 percent or more of their on-site construction (as of December 2002), as well as for four comparable public housing sites. These data were obtained from public housing authority officials and consisted of the total number of crimes that occurred in selected categories. We then calculated the percent change in the total number of crimes over time (see fig. 6). In addition to those individuals named above, Kristine Braaten, Jackie Garza, Catherine Hurley, Grant Mallie, Alison Martin, John McGrail, Sara Moessbauer, Marc Molino, Lisa Moore, Barbara Roesmann, Sidney Schwartz, Paige Smith, Ginger Tierney, and Carrie Watkins made key contributions to this report. The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. 
The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to e-mail alerts” under the “Order GAO Products” heading.
Congress established the HOPE VI program in 1992 to revitalize severely distressed public housing by demolition, rehabilitation, or replacement of sites. In fiscal years 1993-2001, the Department of Housing and Urban Development (HUD) awarded approximately $4.5 billion for 165 HOPE VI revitalization grants to public housing authorities (grantees). GAO was asked to examine (1) the types of housing to which the original residents of HOPE VI sites were relocated and the number of original residents that grantees expect to return to the revitalized sites, (2) how the fiscal year 1996 grantees have involved residents in the HOPE VI process, and (3) how the neighborhoods surrounding the 20 sites that received HOPE VI grants in fiscal year 1996 have changed. The largest percentage of the approximately 49,000 residents that had been relocated from HOPE VI sites, as of June 30, 2003, were relocated to other public housing, and about half were expected to return to the revitalized sites. Although grantees, overall, expected 46 percent of relocated residents to return, the percentage of original residents that were expected to return (or the reoccupancy rate) varied greatly from site to site. The level of resident involvement in the HOPE VI process varied at the 1996 sites. While all of the 1996 grantees held meetings to inform residents about revitalization plans and solicit their input, some took additional steps to involve residents. For example, in Tucson, the housing authority submitted the revitalization plan for the Connie Chambers site to the city council for approval only after the residents had voted to approve it. The neighborhoods in which 1996 HOPE VI sites are located generally have experienced improvements in indicators such as education, income, and housing, although GAO could not determine the extent to which the HOPE VI program contributed to these changes. 
In a comparison of four 1996 HOPE VI neighborhoods to four comparable neighborhoods, mortgage lending activity increased to a greater extent in three of the HOPE VI neighborhoods. But, a comparison of other variables (such as education and new construction) produced inconsistent results, with HOPE VI neighborhoods experiencing both greater positive and negative changes than comparable neighborhoods.
In 1997, Medicare’s fee-for-service program covered about 87 percent, or 33 million, of Medicare’s beneficiaries. Physicians, hospitals, and other providers submit claims to Medicare to receive payment for services they have provided to beneficiaries. HCFA administers Medicare’s fee-for-service program through a network of more than 60 claims processing contractors, that is, insurance companies—like Blue Cross and Blue Shield plans, Mutual of Omaha, and CIGNA—that process and pay Medicare claims. In fiscal year 1997, contractors processed about 900 million Medicare claims. Medicare contractors use federal funds to pay health care providers and beneficiaries and are reimbursed for the costs they incur in performing the work. They are also responsible for payment safeguard activities intended to protect Medicare from paying inappropriately. The contractors have broad discretion in conducting these activities, resulting in significant variation among contractors in implementing payment safeguards. HCFA budgets funding for five main types of program safeguard activities carried out by Medicare contractors: (1) medical review, (2) Medicare secondary payer review, (3) audit of provider cost reports, (4) fraud unit investigations, and (5) provider education. Medical review includes automated and manual, prepayment and postpayment reviews of Medicare claims; it is intended to identify claims for noncovered or medically unnecessary services. Medicare secondary payer review focuses on identifying other primary sources of payment, such as employer-sponsored health insurance or third-party liability settlements, for claims submitted to Medicare. The audit process involves auditing cost reports submitted by providers, such as skilled nursing facilities and home health agencies. Contractor fraud units investigate potential cases of fraud or abuse identified through beneficiary complaints, other contractor safeguard units, or other sources. 
Provider education can include mailings to providers, briefings, and workshops to increase provider awareness of coverage and billing policies as well as coding and documentation requirements. Beginning with fiscal year 1997, HIPAA stipulates that annual funding levels be appropriated from the Medicare Trust Fund to carry out HCFA’s program safeguard activities. This process ensures that HCFA has funding for these important functions. Starting with the $440 million that was available for program safeguard activities in 1997 and the $500 million expected to be used for fiscal year 1998, HIPAA increases funding annually up to a maximum of $720 million in 2003 and following fiscal years. Funding levels provided by HIPAA in the Medicare Integrity Program for fiscal years 1997 through 2003 are summarized in table 1. Before HIPAA was enacted, program safeguard activities were funded out of Medicare’s general contractor program management budget, and the level of funding available for program safeguard activities could be constrained by the need to fund ongoing Medicare program functions—such as processing claims. In fact, while the number of Medicare claims grew by 70 percent between 1989 and 1996, funding for claims review grew less than 11 percent. In 1994, HHS proposed a program safeguard funding arrangement similar to that in HIPAA, saying that it would improve program safeguards by creating “a stable level of funding from year to year so that HCFA and its contractors could plan and manage the function on a multi-year basis.” HHS went on to say that “[p]ast fluctuations in funding have made it difficult to retain experienced staff who understand the complexities of the program.” Appendix II summarizes program safeguard funding for fiscal years 1994 through 1998, by type of program safeguard activity. 
Although HIPAA provides significant new resources and authorities, the timing of the act—the passage of which occurred only 6 weeks before the start of fiscal year 1997—limited the opportunity for change in the first year. HCFA then failed to take advantage of its advance knowledge of fiscal year 1998 program safeguard funding: it did not provide safeguard budgets to its contractors at the beginning of fiscal year 1998. That delay has hindered contractors’ ability to expand their program safeguard activities. However, HCFA has taken steps to direct program safeguard funding to identified weaknesses and program safeguard activities where it is most needed. Notification of fiscal year 1998 program safeguard funding was not given to contractors until January 1998—nearly one-third of the way into the fiscal year. HCFA officials told us that they waited so that HCFA could notify contractors of their program safeguard funding at the same time as claims processing funding. As a result of this late start, it may be difficult for the contractors to complete all of the program safeguard work that HCFA expected them to accomplish with this increased funding. Although HIPAA appropriated program safeguard funding for fiscal year 1998, HCFA officials believed that distribution of that funding needed to be delayed until funds for contractors’ other activities were distributed. They stated that contractors might use Medicare Integrity Program funds for other program management purposes if these program safeguard funds were released in advance of program management funds. Despite HCFA’s concerns, its contractors are required to use Medicare Integrity Program funding only for program safeguard activities. After reviewing a draft of this report, HCFA told us that it would notify contractors of their fiscal year 1999 base program safeguard funding before the first day of the fiscal year. 
In addition to being late, the January 1998 funding notification to contractors did not include all of the fiscal year 1998 contractors’ funding. As of the end of February, HCFA had yet to release more than $40 million in program safeguard funding for various projects to be carried out by its contractors. Some of these projects were made possible by the supplemental program safeguard funding provided in November by HHS’ fiscal year 1998 appropriation. The contractors told us that funding received later in the fiscal year is more difficult for them to use effectively because HCFA requires them to complete the projects by the end of September or use a subcontractor. While subcontracting allows the contractor to commit all of its fiscal year funds, contractor officials told us that it does not contribute to building valuable expertise within their own staff. Despite fiscal year 1998 budget increases, neither of the contractors we visited had significantly increased their staff available to perform program safeguard activities, such as provider audit and claims review. While it is difficult to make precise comparisons because of reorganizations at both contractors, contractor officials said that there has been little or no hiring of program safeguard staff, other than some replacements to offset attrition. In some areas, contractors have not filled all of their existing vacancies. Furthermore, contractors’ staffing for some important program safeguard activities is now less than it was before HIPAA. For example, both contractors reported that as of the end of March 1998, they had fewer staff on board to audit provider cost reports than they did in September 1996, before implementation of HIPAA. One contractor currently employs 77 audit staff, down from 88 in September 1996. The other contractor currently employs 151 audit and reimbursement staff, down from 158 in 1996. 
The latter contractor’s medical review staff has also declined, from 86 in 1996 to 83 at the time of our visit. Because they were uncertain about their level of safeguard funding until well into the year, the contractors also indicated they were not hiring staff to carry out other HCFA-directed projects. In particular, contractors expressed reluctance to hire permanent staff to carry out special projects that are funded only for the current fiscal year. As a result, these projects can affect other program safeguard work. For example, both contractors we visited indicated that the HCFA-directed project to review claims for physician evaluation and management services will require a complex level of review that needs to be done by experienced full-time staff in their medical review units—rather than being carried out by temporary employees or subcontractors—thereby reducing the time that trained and experienced staff are available for the contractors’ ongoing claims review workload. In fiscal year 1998, HCFA has begun to direct program safeguard funding to address weaknesses identified by the HHS OIG’s financial audit of HCFA for fiscal year 1996, and to expand its analysis of how funding can best be allocated among Medicare contractors and program safeguard activities. Although this does not address our concerns about the timeliness of contractor safeguard budgets, HCFA is attempting to better target the safeguard funds. To address the findings of the HHS OIG’s audit of HCFA’s fiscal year 1996 financial statement, HCFA is using fiscal year 1998 program safeguard funding to carry out several corrective actions to supplement regular program safeguard activities, such as the following: To increase the level of claims review, 28 Medicare part B contractors have been directed to conduct a special prepayment review of more than 166,000 physician claims for evaluation and management services. 
All contractors are to manually perform prepayment reviews of a sample of claims that cleared their automated screens, and each may decide what types of claims it will sample and choose its sampling method. HCFA and its contractors will carry out numerous other targeted efforts and special projects, such as home health agency reviews, the correct coding initiative for part B claims, and numerous information system upgrades. Beginning in fiscal year 1998, HCFA also required Medicare contractors to provide more information and support for their Medicare program safeguard budget requests than it had in prior years, and a HCFA official told us HCFA will continue this practice in fiscal year 1999. Also in fiscal year 1998, HCFA used a new methodology for allocating the program safeguard budget to Medicare contractors. Under this methodology, HCFA incorporates measures of contractor performance (such as return on investment) and program funds at risk in deciding how to allocate Medicare Integrity Program funding to contractors and to specific activities. In addition to providing an assured and increasing source of funding for HCFA’s program safeguard activities, HIPAA directs HHS to contract for program safeguard activities separately from claims processing and payment activities to better ensure the integrity of the Medicare benefit payments. Historically, Medicare program safeguard activities, including such things as medical review of claims, audit of provider cost reports, and investigation of beneficiary complaints, have been conducted by the same contractors who process Medicare claims. HCFA intends that these new competitively awarded contracts will establish program safeguard contractors who specialize in program integrity and have enhanced data analysis capabilities. Although HIPAA did not set a deadline for awarding these contracts, many of the benefits of HIPAA cannot be achieved until the new safeguard contractors are in place. 
The benefits that can be achieved through these contracts include the following: enabling the review, by a single entity, of all services to a beneficiary by centralizing program safeguard activities now divided among several types of contracts: carriers, intermediaries, durable medical equipment regional carriers, and regional home health intermediaries; eliminating the competing interests of timely payment of claims and achieving better price and contractor performance through competition; reducing the number of program safeguard units from the current level of more than 60 to simplify oversight, achieve more consistent contractor performance, and achieve economies of scale; and allowing HCFA to more aggressively mitigate conflicts of interest arising when contractors enter into new health care lines of business. Despite its new authority to use program safeguard specialists, HCFA does not plan to make any major changes in who conducts program safeguard operations in the foreseeable future. HCFA plans to contract with one program safeguard specialist by January 1999. However, this contract will be very limited in scope and will not provide many of the important benefits envisioned for such a contractor. It will also not reduce HCFA’s reliance on its current contractors for program safeguard activities. While many important decisions must still be made before HCFA can award its first competitive program safeguards contract, the decision has been made to significantly limit its scope. The scope will be limited geographically, possibly to a single state. Initially, the first contract will not cover all of the tasks in HCFA’s statement of work. The contract may also be limited to program safeguard functions on the claims processed by a single part A or part B contractor. This limited-scope contract will not provide the opportunity to review all services billed for a single beneficiary, nor will it reduce the number of safeguard units that HCFA must oversee. 
HCFA officials do not know when the scope of the first contract might be expanded or when additional specialist contracts might be awarded. In preparing for its first new contract, HCFA published proposed rules governing the procurement and bidders’ conflicts of interest as well as a draft statement of work. HCFA officials told us that they hope to award the first program safeguards contract by January 1999. However, as of April 1998, HCFA had not determined the terms of the first safeguard specialist contract, including the type of contract to be awarded, the types of services covered, the geographic jurisdiction, the program safeguard activities to be included, or the method of evaluating and reimbursing the contractor. Many of Medicare’s vulnerabilities are inherent in its size and mission, making it a perpetually attractive target for exploitation. HCFA must effectively use the funding and authorities provided by HIPAA if it is to substantially reduce future losses. Although it requested and received an assured funding level for program safeguards from the Congress, HCFA has not administered such funding provided by HIPAA in a way that provides its contractors with increased funding stability. As a result, contractors have delayed their efforts to recruit and train staff, and the benefits anticipated from HIPAA’s guaranteed program safeguard funding are being delayed. If HCFA notifies contractors of their base program safeguard funding for fiscal year 1999 before the first day of the fiscal year, as it now plans to do, these problems should be avoided in the future. HCFA’s current plans for issuing a contract for a program safeguard specialist may not provide many of the important benefits anticipated when HIPAA gave HCFA this contracting authority. Without a concerted effort to fully implement comprehensive program safeguard specialist contracts, the benefits of the authority provided by HIPAA will be delayed. 
We recommend that the Administrator of HCFA take advantage of the assured program safeguard funding provided through HIPAA by initiating planning efforts to give its contractors more timely notification of the program safeguard activities they are expected to perform and the funding they have available to carry out these activities. We provided a draft of this report to the HCFA Administrator for review and comment. HCFA agreed that it should distribute program safeguard funding to contractors as early in the fiscal year as is possible and said that contractors will be notified of the allocation of base program safeguard funding for fiscal year 1999 before the first day of the fiscal year. Once they are carried out, these actions planned by HCFA should address the concerns we raise in this report. HCFA also stated that its incremental approach to implementing contracts with program safeguard specialists is intended to mitigate risk and is consistent with our past recommendations on the implementation of major HCFA projects. While mitigating the risks of a major project such as this is clearly necessary, it is also important to ensure that the benefits of the project are obtained as expeditiously as possible. In this case, until the first contract is expanded or others are awarded, many of the important benefits anticipated from these contracts will not be realized. HCFA also provided technical comments, which we incorporated where appropriate. HCFA’s comments appear in appendix III. As agreed with your offices, we are sending copies of this report to the Secretary of HHS, the Administrator of HCFA, and other interested parties. We will also make copies available to others upon request. Please call me at (202) 512-7114 or Paul Alcocer at (312) 220-7709 if you or your staff have any questions about this report. Other major contributors include Adrienne S. Friedman, Donald J. Kittler, and Barbara A. Mulliken. 
To determine what additional resources and authorities the Congress provided to the Health Care Financing Administration (HCFA) through the Medicare Integrity Program, we reviewed the Health Insurance Portability and Accountability Act of 1996 (HIPAA). We also obtained and reviewed fiscal year 1997 budget and expenditure data and fiscal year 1998 budget data for the Medicare Integrity Program, as well as expenditure data for Medicare program safeguard activities for fiscal years 1994 through 1996. Because the activities directed by HIPAA in the Medicare Integrity Program relate to the fee-for-service portion of Medicare, we did not review HCFA’s program integrity efforts related to Medicare managed care plans. To determine how HCFA has used these resources and authorities to improve the protection of Medicare funds, we reviewed HCFA data on the distribution of funding. We also visited HCFA headquarters and regional offices and two Medicare contractors to discuss how Medicare Integrity Program funding was being used. We reviewed documentation obtained from HCFA and the two contractors, including HCFA’s fiscal year budget and performance requirements; the contractors’ budget requests; and documentation addressing contractor program safeguard staffing, efforts, and results. We also obtained information on the current status of HCFA’s efforts to use its new contracting authority. To determine how HCFA plans to use these authorities and resources in the future, we reviewed relevant documentation, including the draft statement of work for program safeguard contracts, HCFA’s Government Performance and Results Act of 1993 performance plan, and HCFA’s annual work plan. We also discussed these issues with HCFA officials. We conducted our work at HCFA headquarters in Baltimore, Maryland; HCFA region V offices; HCFA region VI offices; AdminaStar Federal, Inc.; and Blue Cross and Blue Shield of Texas, Inc. 
We performed our work between February and May 1998 in accordance with generally accepted government auditing standards. The first year of Medicare Integrity Program funding under HIPAA did not result in an increase in funding over the prior year. In fact, the $437.9 million of Medicare Integrity Program funds spent in fiscal year 1997 was actually about 1 percent less than the $441.1 million spent in fiscal year 1996—the last year before HIPAA was passed. This occurred because in 1996, HCFA’s program safeguard spending benefited from transfers of funds from claims processing operations. A breakdown of program safeguard spending in fiscal years 1994 through 1997 and the budget for fiscal year 1998 are shown in table II.1. Provider education was funded entirely with program management funds before HIPAA. It is now supported by both program management and program safeguard funds.
Pursuant to a legislative requirement, GAO reviewed: (1) the Health Care Financing Administration's (HCFA) progress in implementing the Medicare Integrity Program; (2) what additional resources and authorities Congress provided to HCFA through the Medicare Integrity Program; (3) how HCFA has made use of these resources and authorities to improve the protection of Medicare funds; and (4) how HCFA plans to use these authorities and resources in the future. GAO noted that: (1) the Health Insurance Portability and Accountability Act of 1996 (HIPAA) established the Medicare Integrity Program to subsume the program safeguard activities of HCFA and its current claims processing contractors; (2) rather than fund safeguard activities as part of HCFA's annual administrative budget appropriation, HIPAA appropriates safeguard funding for each year beginning in fiscal year (FY) 1997; (3) the Department of Health and Human Services (HHS) proposed this type of funding arrangement in 1994 so that HCFA and its contractors could better plan and manage program safeguard efforts; (4) the Medicare Integrity Program also provides HCFA the authority to contract with specialists in program safeguards, to separate these functions from current claims processing and payment contracts; (5) the new contracts with program safeguard specialists are intended to make important improvements in HCFA's program safeguard efforts; (6) these improvements will make it possible to review all of the claims for a single beneficiary in one place, reduce the number of contractor safeguard units to increase consistency and simplify HCFA's oversight, and better manage the conflicts of interest that develop when Medicare contractors expand into new health care businesses; (7) for FY 1998, HIPAA significantly increased program safeguard funding over the FY 1997 level; (8) although this funding increase for 1998 was assured when HIPAA became law, HCFA did not notify contractors of their funding until one-third of 
FY 1998 was past; (9) contractors reported that, because of this delayed notification, they delayed plans to increase their program safeguard staff; (10) HCFA is progressing slowly in contracting with safeguard specialists; (11) the first contract, to be awarded by January 1999, will be limited in scope, covering only part of the work envisioned for program safeguard contracts; (12) this first contract will therefore not provide many of the benefits ultimately expected, nor will it reduce HCFA's reliance on its current contractors for program safeguards; and (13) HCFA has no firm plans regarding when it will expand the scope of this contract or award a second safeguard specialist contract.
We conducted this performance audit from February 2008 through November 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Biosurveillance is the process of gathering, analyzing, and interpreting data in order to achieve early detection and warning and overall situational awareness of biological events with the potential to have catastrophic human and economic consequences. In August 2007, the 9/11 Commission Act established NBIC to contribute to the nation’s biosurveillance capability by enhancing the ability of the federal government to rapidly identify, characterize, localize, and track biological events of national concern through integration and analysis of data relating to human health, animal, plant, food, and environmental monitoring systems (both national and international). Once a potential event is detected, NBIC is to disseminate alerts to enable response to a biological event of national concern. To achieve these objectives, NBIC is to coordinate with federal and other stakeholders that have information that can be used to enhance the safety and security of the United States against potential biological events of national significance. This community of federal stakeholders is known as the NBIS. The NBIS community predated the enactment of the 9/11 Commission Act. Beginning in 2004, DHS managed the NBIS and developed an IT system to manage other agencies’ biosurveillance information, an effort that was moved among several DHS Directorates, including DHS’s Science and Technology Directorate and the Preparedness Directorate. 
In 2007, DHS created the Office of Health Affairs, headed by the DHS Chief Medical Officer, to lead DHS’s biodefense activities and provide timely incident-specific management guidance for the medical consequences of disasters. At that time, DHS placed NBIS in the Office of Health Affairs. Shortly after that, the 9/11 Commission Act created NBIC and gave it responsibility for managing the NBIS, which has remained in the Office of Health Affairs. Since fiscal year 2008, NBIC has operated with an annual budget of $8 million. Biosurveillance activities at NBIC are carried out by its Operations Division, which is headed by the Deputy Director and Chief Scientist and supported by 10 contract employees who serve as the analytic core for NBIC’s daily operations. These staff members have various backgrounds related to biodefense, including public health, veterinary, environmental, and intelligence training. As shown in table 1, the 9/11 Commission Act outlines certain requirements for NBIC and NBIS member agencies, and most of these relate to how NBIC is to coordinate NBIS member agency data and information management resources. Generally, there are four elements that are critical for NBIC to achieve its early detection and situational awareness missions established in the 9/11 Commission Act: (1) acquire data from NBIS partners that can be analyzed for indications of new or ongoing biological events, (2) leverage scientific and event-specific expertise from across the NBIS, (3) obtain strategic and operational guidance from NBIS partners, and (4) develop and maintain information technologies to support data collection, analysis, and communication. 
Although the act does not specify any member agency that must participate in the NBIS, it defines a member agency as any agency that signifies agreement to participate by signing a memorandum of understanding (MOU) and establishes for them specific requirements—generally related to sharing information and human assets. For example, as shown in table 1, the act provides that each member agency shall use its best efforts to integrate biosurveillance information into the NBIC, with the goal of promoting information sharing between federal, state, local, and tribal governments to detect biological events of national concern. NBIC has identified 11 NBIS partners at the federal level—the Departments of Health and Human Services (HHS), Agriculture (USDA), Commerce, Defense, Interior, Justice, State, Transportation, and Veterans Affairs, as well as the Environmental Protection Agency and the United States Postal Service. In some departments, more than one component has been identified for participation. Some of these departments, such as HHS and USDA, have major mission responsibilities for collecting health data that may indicate an outbreak of a disease or other biological event. Other departments may collect data or have subject matter expertise that may be used during the course of a biological event. For example, the National Oceanic and Atmospheric Administration within the Department of Commerce collects meteorological data that NBIC may use to help assess the progression of an outbreak based on weather patterns. Around the same time as the enactment of the 9/11 Commission Act, the President issued Homeland Security Presidential Directive-21 (HSPD-21), as a high-level biodefense strategy. HSPD-21 is built on the principles of earlier directives—HSPD-9 and HSPD-10—which collectively describe the role of the federal government in building a national capability to detect a biological event. 
For example, HSPD-21 lays out goals for addressing each of four biodefense elements for human health, one of which is surveillance. In this respect, HSPD-21 calls for the United States to develop a nationwide, robust, and integrated biosurveillance capability to provide early warning and ongoing characterization of disease outbreaks in near real-time. Consistent with this goal, HSPD-21 directs the Secretary of Health and Human Services to establish a national epidemiologic surveillance system for human health, in part, to integrate federal, state, and local data into a national biosurveillance common operating picture. Although HSPD-21 does not specify a role for DHS in biosurveillance, the earlier directives did, and creation and maintenance of an electronic biosurveillance common operating picture has been an NBIS goal since its inception. The data needed to detect an infectious disease outbreak or bioterrorism may come from a variety of sources, and aggregating and integrating data across multiple sources is intended to help recognize the nature of a disease event or understand its scope. Combining and comparing data streams from different sectors to detect or interpret indications of a potential health emergency is called biosurveillance integration. Both HSPD-21 and the 9/11 Commission Act seek enhanced integration of disparate systems and programs that collect data with the aim of providing early warning and ongoing characterization of biological events. HSPD-21 and the 9/11 Commission Act each also seek to enhance the situational awareness for the detection of and response to biological events. Much of the information gathered for these biosurveillance purposes is generated at the state government level. For example, state health departments collect and analyze data on notifiable diseases submitted by health care providers and others. 
In addition, state-run laboratories conduct testing of samples for clinical diagnosis and participate in special clinical or epidemiologic studies. Finally, state public health departments verify cases of notifiable diseases, monitor disease incidence, and identify possible outbreaks within their states. At the federal level, agencies and departments generally collect and analyze surveillance data gathered from the states and from international sources, although some federal agencies and departments also support their own national surveillance systems and laboratory networks. When an issue crosses federal agency lines, as biosurveillance integration does, the agencies involved must collaborate to deliver results more efficiently and effectively. Due to NBIC’s role as an integrator of information across the biosurveillance community, it is important for NBIC to ensure that it effectively collaborates with the NBIS to obtain the cooperation of this interagency community. One reason that it is important that NBIC effectively collaborate with federal partners is that agencies are not required by law to support NBIC or participate in the NBIS community. We have previously reported that for collaborating agencies to enhance and sustain collaboration, they need to, among other things, (1) have a clear and compelling rationale for working together; (2) establish joint strategies, policies, and procedures for aligning core processes and resources; (3) identify resources needed to initiate or sustain their collaborative effort; (4) work together to define and agree on their respective roles and responsibilities; and (5) develop accountability mechanisms to help implement and monitor their efforts to achieve collaborative results. 
NBIC has made some efforts to put mission-critical elements in place, such as requesting data from other federal partners, initiating relationship-building activities among NBIC analysts and subject matter experts at other agencies, and establishing governance bodies to oversee and guide the NBIS. However, NBIC currently relies on publicly available data because it receives limited data from NBIS partners and generally lacks assignments of personnel from other agencies to leverage analytical expertise from across the NBIS partners. NBIC’s ability to acquire and consolidate data from NBIS partners as well as from nonfederal sources is central to achieving its mission. Current and initial drafts of the NBIS Concept of Operations reinforce this notion, noting that the identification of relevant and timely data sources, which act in combination to provide actionable information for decisionmaking, is essential to accomplishing early detection. NBIC has taken some action to acquire these types of data from NBIS partners, for instance, by requesting that NBIS partners identify the types of data they collect or generate that might aid in NBIC’s early detection mission. However, as of October 2009, NBIC was generally not receiving the types of data best suited to early detection of biological events of national concern. NBIC officials acknowledge that they lack key data, and NBIC and other NBIS member officials described numerous challenges to sharing such information, including but not limited to scant availability of such data throughout the federal government and concerns about trust and control over sensitive information before it is vetted and verified. Based on our discussions with NBIS agency officials and review of NBIC documents, we have defined and verified with NBIC officials three categories of electronic data that are critical for NBIC to achieve its mission and might be available from federal agencies or other sources. 
As described in table 2, these data categories are (1) raw structured data, (2) raw unstructured data, and (3) final products, which are typically briefings produced by other agencies in the course of monitoring routine and emerging disease. As of October 2009, NBIC was receiving some final products from NBIS partners, but was not receiving any raw data—particularly data that are generated at the earliest stages of a biological event. According to officials, receipt of all three types of electronic data is important to help NBIC achieve its mission of detecting and warning of a biological event because detection of events that are novel, from multiple sources, or widespread requires analysis of multiple independent data streams. However, the officials told us that they do not receive from NBIS partners the raw structured or unstructured data that best support the early detection goal of biosurveillance. In particular, NBIC identified data that are generated at the earliest stages of a biological event—which can include raw data collected by federal agencies as part of their biosurveillance responsibilities—as being among the highest value for enabling the earlier detection of biological events of national concern. For instance, structured data, such as medical codes corresponding to diagnoses that are entered into databases, as well as some sources of unstructured data, such as written observations noted on medical forms, are generated at the earliest stages of a biological event and have been identified by NBIC as a high priority for early detection. These data can be collected or generated by federal agencies that have biosurveillance responsibilities and participate in NBIS. 
For example, HHS has developed a surveillance system that collects data on symptoms of patients entering emergency departments; when analyzed with statistical tools, these data may indicate the presence of an outbreak in less time than it takes to perform diagnostic lab tests. NBIC seeks to finalize three types of agreements with NBIS partners to articulate and establish protocols and legal authority for resource sharing: (1) MOUs, (2) interagency agreements (IAA), and (3) interagency security agreements (ISA). To date, 7 of the 11 agencies have signed MOUs, but only 1 has a finalized ISA in place for data sharing, according to NBIC officials. As of October 2009, the federal agency that signed an ISA agreed to provide a single data source related to food safety. NBIC officials told us that although the agreement and the technology allowing the electronic data exchange are in place, the agency has not yet begun transferring the data to NBIC, and they did not know when to expect the transfer to begin. NBIC's inability to finalize agreements can be attributed in part to challenges it faces in ensuring effective collaboration, which will be discussed later in this report. Five NBIS partners provide NBIC with written final products, such as briefings produced on a routine basis that provide information on outbreaks of diseases or special alerts of potentially dangerous biological events issued as needed. However, NBIC officials noted that there are limitations on the value of final reports for supporting early detection. These finished products represent the agency's final analysis and interpretation of the raw data that it collects and have been reviewed and approved by the agency leadership for general dissemination to interested parties. 
According to NBIC officials, these products are generally useful for providing context but not for early detection of a biological event because they are not generated in a timely enough fashion to be valuable for detecting new biological events and focus on biological events that have already been detected. In the absence of proprietary information from NBIS partners, NBIC relies on mostly nonfederal sources of data, such as media reports of illness, to attempt to identify biological events. The bulk of the data NBIC currently uses to pursue its mission—more than 98 percent, according to NBIC officials—is unstructured and comes from nonfederal, open sources, including an international information gathering service called Global Argus, a federally funded program in partnership with Georgetown University. The service searches and filters over 13,000 overseas media sources in more than 34 languages. The practice of monitoring and translating local news articles has the potential to provide information about undiagnosed and other suspicious disease activity before it is reported through more official channels. NBIC officials stated that continuous monitoring of global news media sources and publicly available Web sites would be important to round out potential gaps in coverage, even if other data are available from federal agencies. NBIC officials told us that regardless of the quantity and quality of data types shared by collaborating agencies, effective biosurveillance depends on human analysts to interpret events and place them in context. For example, determining whether an outbreak of a new emerging infectious disease has occurred and further assessing whether this event is one of national concern are analytic judgments that require not only data but also the expertise of an experienced, knowledgeable analyst. 
According to these officials, analyst-to-analyst communication in a trusted environment is absolutely essential for rapid vetting, verification, and contextualization of events. The 9/11 Commission Act calls for member agencies to provide personnel to NBIC under an interagency personnel agreement and consider the qualifications of such personnel necessary to provide human, animal, and environmental data analysis and interpretation support. However, for the most part, NBIC has not consistently received this kind of support from NBIS partners. Personnel detailed (that is, personnel employed by a federal agency and temporarily assigned to NBIC for a specified period of time) from other federal agencies enable analysis and interpretation of data by serving as subject matter experts for specific issues that are part of their home agencies’ missions and as conduits of information from their respective home agencies. NBIC has signed MOUs with seven agencies, but only two have provided a personnel detail to the NBIC headquarters in Washington, D.C., and as of October 2009, only one of those personnel details was active, because one of those agencies did not replace personnel after the initial detail ended. NBIC officials told us that daily interaction with officials who had been on detail at NBIC not only enhanced their ability to interpret the information immediately on hand but also contributed to ongoing contextual learning for NBIC’s analytical corps. Although most of the NBIS partners have not detailed their subject matter experts to NBIC, the integration center officials have used other means to obtain expertise and information from other agency analysts. NBIC officials told us that they have co-located the NBIC analysts at other collaborating agencies where they spend up to 2 weeks working with analysts from these other agencies both to learn more about their operations and to help forge ongoing relationships. 
NBIC officials stated they have also established a daily process to engage the NBIS in sharing information and analytic insights with each other. During this process—which NBIC calls the daily production process—NBIC analysts compile information on reports of outbreaks that may be of concern, and then this information is disseminated to the NBIS community for discussion at a daily teleconference. The participants in the teleconference determine whether the events merit further monitoring or evaluation and share any relevant information they may have about the event. NBIC analysts then use the information gathered, as refined by the daily teleconference, to finalize NBIC daily reports and update its electronic Biosurveillance Common Operating Picture, which is a manually updated electronic picture of current worldwide biological events being tracked. For example, NBIC analysts might identify local news reports that suggest food contamination in a region. During the daily conference call, one or more of the agencies with responsibility for monitoring food safety or foodborne illness might contribute more information, such as a history of similar issues in the same geographical region, that gives more context to the reports. Then, collectively, the responsible agencies might decide that the event, first uncovered in open source media, warrants further investigation and monitoring. NBIC analysts would then post all known information to its electronic Biosurveillance Common Operating Picture for all interested parties to follow. Meanwhile, the agencies with missions of jurisdiction would conduct their investigations and report any new findings during the following day's teleconference. NBIC officials told us that this process requires a wide range of expertise from across the agencies. 
These officials said that they may also communicate directly with an agency prior to the daily teleconference if NBIC plans to discuss an item relevant to the agency's mission at the meeting. Another means NBIC uses to obtain expertise and information from other agency analysts is through participation in the Biosurveillance Indications and Warnings Analytic Community (BIWAC). The BIWAC is a self-governing interagency body composed of federal officials who are actively responsible for pursuing a biosurveillance mission. The agencies represented include the Department of Defense, HHS's Centers for Disease Control and Prevention, USDA, DHS, and the intelligence community. The mission of the BIWAC is to provide a secure, interagency forum for the collaborative exchange of critical information regarding biological events that may threaten U.S. national interests. On behalf of the BIWAC, the Department of Defense's National Center for Medical Intelligence hosts an encrypted information sharing portal called Wildfire. According to NBIC's Chief Scientist and Deputy Director, in addition to engaging in the information exchange through Wildfire, she is an active supporter and participant in BIWAC meetings and teleconferences. According to NBIC officials, although these efforts to obtain the analytical insights of subject matter experts from collaborating agencies may be valuable, they do not provide a substitute for personnel details to the integration center itself. For example, with the daily teleconference, NBIC may have limited access to NBIS agency subject matter experts because analysts from only a few of the various agencies may be available for immediate communication on any given day, and not all agencies regularly participate in the daily teleconference. 
In addition, apart from the daily teleconference, NBIC officials said that agencies may limit NBIC's ability to communicate with their subject matter experts, particularly in the early stages of responding to a biological event when the agency is prioritizing its response needs. Finally, NBIC analysts may also communicate through federal agencies' operations centers during the course of an ongoing biological event, but NBIC officials noted that this channel of communication is not always an effective means to get meaningful input from agencies' subject matter experts. The lack of sustained personnel detailed to NBIC from other NBIS partner agencies can be attributed, in part, to challenges it faces with ensuring effective interagency collaboration, which will be discussed later in this report. To enable NBIS partners to engage in overseeing and guiding the NBIS, NBIC has established and administers two governance bodies. NBIC sponsors meetings of the two groups on a regular basis. The NBIS Interagency Oversight Council (NIOC) is composed of representatives at the assistant secretary level from each NBIS agency. The NIOC is to act as the senior oversight body to provide guidance and direction for the operation, implementation, and maintenance of the NBIS, as well as to resolve interagency or intradepartmental issues that cannot be resolved at lower levels. The NBIS Interagency Working Group (NIWG) is a senior, director-level working body created to share information on NBIC activities, such as the status of developing draft documents and standard operating procedures, including procedures undertaken during ongoing biological events of national concern. The NIWG can also establish sub-working groups to conduct specific work as necessary to provide support to the NBIC and the NIOC. For example, NIWG established a sub-working group to propose procedures for resolving conflict during the daily production cycle. 
One of the elements that is critical for NBIC to carry out its mission is development and maintenance of information technologies to support data collection, analysis, and communication of alerts. The 9/11 Commission Act also specifically mentions the need for statistical tools to analyze data to identify and characterize trends of biological events of national concern. NBIC has taken steps to develop an IT system that can manage data from NBIS partners and can help identify open source reports of potential biological events, but NBIC largely lacks data from federal agencies. Given this condition, rather than a system designed to electronically process structured data received directly from NBIS partners, NBIC has configured its IT system—the Biosurveillance Common Operating Network (BCON)—primarily to identify and assemble unstructured data from public sources on the Internet that it will later vet with other NBIS analysts in the daily production process. Therefore, NBIC relies on the NBIS community and member agency subject matter experts for analysis and interpretation of publicly available data rather than providing the NBIS community with an analysis of integrated, raw, structured data from the NBIS partners. According to NBIC officials, they anticipate using BCON to manage any agency data streams that they may eventually acquire. BCON is a system of systems that is built on multiple commercial off-the-shelf software packages. Currently, the central feature of BCON is its use of a set of keywords within a language algorithm to search the Internet for media articles that may contain biosurveillance-relevant information and compile them for NBIC analysts to review. As part of this function, BCON also flags events for immediate analyst attention. Additionally, the information from BCON is the basis for the NBIC Biosurveillance Common Operating Picture, which is a manually updated Google Maps application of current worldwide biological events being tracked. 
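The report describes BCON's central feature only at a high level: a keyword-driven algorithm that searches open-source media and flags items for immediate analyst attention. As one illustration of that kind of triage logic, the sketch below scores articles against a weighted keyword list and splits them into flagged and routine queues; the keyword list, weights, and flagging threshold are hypothetical, not NBIC's actual configuration.

```python
# Illustrative sketch of keyword-based media triage of the kind described
# for BCON. All keywords, weights, and thresholds are hypothetical.

# Keywords suggesting biosurveillance relevance, with illustrative weights.
KEYWORDS = {
    "outbreak": 3, "hemorrhagic": 3, "quarantine": 2,
    "contamination": 2, "hospitalized": 1, "fever": 1,
}
FLAG_THRESHOLD = 4  # articles at or above this score get immediate review


def score_article(text: str) -> int:
    """Sum the weights of keywords appearing in the article text."""
    words = set(text.lower().split())
    return sum(weight for kw, weight in KEYWORDS.items() if kw in words)


def triage(articles: list[str]) -> tuple[list[str], list[str]]:
    """Split articles into flagged (immediate analyst attention) and routine queues."""
    flagged, routine = [], []
    for article in articles:
        target = flagged if score_article(article) >= FLAG_THRESHOLD else routine
        target.append(article)
    return flagged, routine
```

A production system would need multilingual, language-aware matching rather than simple word splitting; this sketch only illustrates the flag-for-review pattern.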
NBIS agency officials can view the Biosurveillance Common Operating Picture on the Homeland Security Information Network. According to NBIC officials, in the future NBIS agency officials will also have the ability to create and update event information. Although NBIC generally lacks direct-feed, raw, structured data from NBIS partners to apply statistical and analytical tools, according to our observations and review of documents supporting the development of the system, BCON is designed to locate and log information associated with the events contained in the open source media that it searches. This information includes the geographic coordinates and the date and time of occurrence for each event. These data are archived and, according to NBIC officials, can be used to conduct cross-domain analysis for trends, historical context, associated events, anomaly detection, and hypothesis generation. Among the applications planned for inclusion in BCON is a tool that is designed to perform historical analysis of this archived data to help monitor and refine the effectiveness of the algorithm. According to NBIC officials, the goal of this analysis is to help ensure that NBIC analysts will be able to identify events that merit attention by refining the algorithm to limit results that are less relevant for monitoring for biological events of national concern. However, these officials told us that this aspect of BCON has been put on hold due to budget constraints. To advance information sharing among federal agencies, NBIC is also pursuing $90 million in supplemental funding for a broader information sharing initiative. This initiative is intended to enable greater information sharing capabilities among federal, state, and local agencies and to have the necessary data security to house classified data. According to NBIC officials, this initiative is being led by the National Security Council. 
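The report does not specify how trend and anomaly detection over BCON's archived event data would work. As one illustration of the general technique, a simple rolling-baseline detector (a much-simplified cousin of aberration-detection methods used in syndromic surveillance, such as CDC's EARS algorithms) can flag days whose event counts are unusually high relative to recent history. The data, window, and threshold below are hypothetical.

```python
# Illustrative rolling-baseline anomaly detector for daily event counts.
# Window length, threshold, and data are hypothetical examples, not NBIC's.
import statistics


def detect_anomalies(daily_counts: list[int], baseline_days: int = 7,
                     z_threshold: float = 2.0) -> list[int]:
    """Return the indices of days whose count exceeds the mean of the
    preceding baseline window by more than z_threshold standard deviations."""
    anomalies = []
    for day in range(baseline_days, len(daily_counts)):
        baseline = daily_counts[day - baseline_days:day]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline) or 1.0  # guard against a flat baseline
        if (daily_counts[day] - mean) / stdev > z_threshold:
            anomalies.append(day)
    return anomalies
```

For example, a week of two or three events per day followed by a day with fifteen would be flagged, while a steady series would not. Real surveillance systems must also account for weekly reporting cycles, holidays, and media coverage spikes, which this sketch ignores.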
To communicate alerts to member agencies and the larger NBIS community regarding any incident that could develop into a biological event of national concern, NBIC has developed an IT system to provide alerts and warnings, based on an existing system that had been developed for another DHS component. However, according to NBIC officials, the system has not yet been fully implemented because they recently acquired it, and NBIC is still testing protocols for using it. According to our observations of the system and review of operational protocols, the system provides NBIC with the capability to tailor alerts and warnings to specific recipients via distribution lists. These officials said that in spring 2008 the protocols were approved by the NIWG and briefed to the NIOC. NBIC officials said they are currently testing the protocols but have not yet needed to employ the system during a biological event. Our analysis and interviews with NBIS partners suggest that NBIC could strengthen its use of collaborative practices. Because participation in the NBIS is voluntary, effective use of collaborative practices is essential to NBIC’s ability to successfully develop and oversee the NBIS in a way that enhances federal biosurveillance capabilities. However, we found (1) widespread uncertainty and skepticism around the value of participating in the NBIS and the purpose of NBIC; (2) incomplete joint strategies, policies, and procedures for operating across agency boundaries; (3) an inability or unwillingness of NBIS members to respond to plans for leveraging resources; (4) confusion and dissatisfaction around the definitions of mission, roles, and responsibilities of NBIC and its NBIS partners; and (5) a lack of mechanisms to monitor and account for collaborative results. Biosurveillance integration is an inherently interagency enterprise, requiring expertise and resources from various federal agencies, such as information on human and zoonotic diseases monitored by HHS and USDA. 
Indeed, NBIC officials acknowledged that NBIC cannot provide national-level capability for cross-domain biosurveillance relying solely on DHS resources. As a result, it is crucial for NBIC to ensure stakeholder buy-in and participation in clearly defining the value of NBIS participation and NBIC’s mission or purpose, as well as establishing the strategies and procedures for how the partners will work together. Our prior work states that effective collaboration requires agencies to have a clear and compelling rationale for working together, which can be achieved by defining and articulating a common federal outcome or purpose. The rationale can be imposed externally through legislation or other directives or can come from the agencies’ own perceptions of the value of working together. In either case, agency staff can accomplish this by working across agency lines to define and articulate the common purpose they are seeking to achieve that is consistent with their respective agency goals and mission. Because there is no legal requirement for agencies to participate in NBIS, agencies must have a clear and compelling rationale to work together as a community of federal partners by joining the NBIS and providing data and personnel to the integration center. In the case of an agency like NBIC, for which collaboration is essential, clearly defining and communicating its purpose and mission can help to ensure that partners share a vision of the desired outcomes. In addition, our work has shown that to enhance and sustain collaboration, it is important to establish joint strategies, policies, and procedures for operating across agency boundaries. 
Establishing joint strategies and compatible policies and procedures helps align collaborating agencies' activities, processes, and resources to, among other things, bring together diverse organizational cultures to enable a cohesive working relationship across agency boundaries and create the mutual trust required to sustain the collaborative effort. However, in interviews with agency officials from 14 components of the 11 NBIS partners, we found widespread uncertainty and skepticism about the value of and rationale for participation in the NBIS, as well as incomplete strategies, policies, and procedures for operating across agency boundaries that lack key stakeholder buy-in. Twelve of the 14 NBIS-partner components expressed uncertainty about the value of participating in the NBIS community or confusion about the purpose of NBIC. For example, officials from one component stated that they were uncertain whether sharing resources with the integration center, something that is required of members of the NBIS community, would further their agency's missions. Officials from another component expressed concerns about the rationale for participating in the NBIS and supporting the integration center, stating they were unsure whether NBIC contributed anything to the federal biosurveillance community that other agencies were not already accomplishing in the course of carrying out their biosurveillance-relevant missions. Officials from five of these components noted that their uncertainty about the value of participation in the NBIS was a factor in not assigning personnel to NBIC. Further, officials from 7 of the 14 components we interviewed indicated that their experience with a recent tabletop exercise and real-life events had contributed to their concerns about the value of participating in NBIS and the purpose of NBIC. 
For example, officials from one component said that the tabletop exercise highlighted agencies' reluctance to share information and underscored that there was no role for NBIC, while officials from another component said that during 2009 H1N1 activities, NBIC was not able to demonstrate that it had unique value to add. Officials from seven of the components indicated that they lacked a concrete understanding of the purpose for which NBIC was requesting their agencies' data, which was, in part, the reason they had not been able to identify appropriate data sources or to work out data sharing agreements with NBIC. NBIC officials told us that they regularly reminded NBIS partners of NBIC's mission as the coordinator of the NBIS and the value of sharing data and personnel to achieve the goal of earlier detection and enhanced situational awareness. However, officials from 8 of the 14 components told us that during negotiations with NBIC, they had raised concerns about the purpose of the data or the value of detailing personnel to NBIC, and NBIC had not followed up in a timely and consistent manner to resolve those concerns. NBIC officials also stated that they have taken actions to demonstrate the value of participating in NBIS and of sharing resources with the integration center. For example, NBIC co-located the integration center's analysts with analysts at other agencies, such as the Centers for Disease Control and Prevention, for brief periods of time to enhance mutual understanding between NBIC and NBIS partner agencies. Further, NBIC officials have attempted to demonstrate the value of participating in NBIS and supporting the integration center by encouraging agencies to participate in NBIC's daily production process. NBIC officials said that through daily engagement in the production process and during recent real-life events like foodborne illness outbreaks they have been able to demonstrate the value of NBIC. 
However, agency officials told us that their experiences with NBIC during real-life events and the tabletop exercise created questions about the value of participating in the NBIS and NBIC's purpose. NBIC officials have drafted but not completed a strategic plan for NBIC that includes a mission statement, which could help clarify NBIC's purpose. The plan is also to provide strategic and operational guidance to NBIC officials for achieving that mission. According to NBIC officials, however, they have not shared the draft strategic plan with NBIS officials or solicited their input, and they do not currently plan to do so because it is an internal document. Officials have not set a deadline for completing the NBIC strategic plan because they are still in the process of vetting the initial draft internally. In addition to uncertainty about the value of participating in the NBIS and the purpose of NBIC, we also found that NBIC has not completed and achieved buy-in for joint strategies, policies, and procedures for operating across agency boundaries. NBIC has drafted a Concept of Operations, which is intended to communicate joint strategies, policies, and procedures for operating across the NBIS. According to NBIC officials, they have solicited and considered comments from NBIS partners as they developed the draft, which is currently on its third version. However, NBIC has not yet achieved agreement around strategies, policies, and procedures that would support effective collaboration across the NBIS. For example, one key partner agency—one for which biosurveillance is a mission-critical function and is thus essential to a strong and effective NBIS—shared with us a memo they had written to NBIC expressing their lack of concurrence with the current Concept of Operations. The memo cited several concerns that related largely to lack of clarity in the document about the desired common federal outcome and the role of the different partners in achieving it. 
NBIC officials told us they plan to finalize the Concept of Operations by the end of 2009. Clearly defining its mission, as well as articulating the value of participation in the NBIS, could help NBIC overcome challenges convincing agencies to work collectively as part of the NBIS. In addition, establishing and clarifying joint strategies, policies, and procedures with buy-in across the NBIS could help address barriers to collaboration. Two of the collaborative practices we recommend speak to how agencies will share human and other assets to achieve the desired outcomes—identifying and addressing needs by leveraging resources and agreeing on roles and responsibilities. According to NBIC officials, the concept of a national center for integrating biosurveillance data from multiple agencies depends on the willingness of the collaborating agencies to detail their experts to the center for a period of time to interpret the data for signs of an outbreak or biological attack; consequently, effectively identifying what resources are available and how to leverage them is important. In our work on practices to enhance and sustain collaboration, we call for agencies to assess relative strengths and weaknesses to identify opportunities to leverage each other's resources, thus obtaining additional benefits that would not be available were the agencies working separately. However, agency officials we met with stated that NBIC did not recognize the different levels of resources and capacities that each agency brought to this effort. Seven of the 14 groups of agency officials we interviewed noted that NBIC made personnel requests that were not compatible with the resources agencies had available. 
For example, one of the comments officials made to us regarding NBIC’s request for personnel details was that they did not have available or could not spare personnel that matched NBIC’s request for senior-level officials with sufficient analytical knowledge and authority to make immediate decisions about sharing information across the NBIS. Officials from one of the components without a direct biosurveillance mission told us that they only have one such person on staff and needed to keep that person in house to be able to carry out their mission-critical activities. Officials at two agencies described methods they had devised for human-resource sharing arrangements that did not involve locating senior staff at NBIC for several months. However, NBIC officials told us that this is no substitute for the value of a member agency personnel detail that is physically located at NBIC. NBIC officials noted that the Secretary of Homeland Security had sent a memo to other NBIS agency leadership requesting help in securing personnel details on May 23, 2008. In addition, they stated that the issue is regularly addressed in NIWG and NIOC meetings. The officials also provided several examples of outreach to NBIS officials at all 11 agencies, such as through discussions with NBIS partner agency representatives at NIWG meetings. Similarly, 5 of 14 groups of officials we interviewed reported that they had experienced confusion about how NBIC planned to use personnel details if they were provided. For example, one such agency expressing this confusion said that NBIC’s guidance on what it is looking for in a personnel detail had changed frequently. NBIC officials told us that initially they requested individuals with strong scientific backgrounds to assist with data analysis and interpretation (analyst model). 
However, they later determined that they could use senior-level agency officials who were knowledgeable about their home organization to act as liaisons by identifying specific subject matter experts to consult with NBIC, as needed (liaison model). According to these NBIC officials, they have communicated to the NBIS partners that if they detail personnel to NBIC, they can follow either the analyst model or the liaison model. Nevertheless, during our interviews a lack of clarity about personnel detail roles and responsibilities was among the reasons cited for not finalizing MOUs or interagency agreements for personnel details. Of the two NBIS partners that placed personnel at NBIC, officials from one agency told us that although they still were not entirely clear on NBIC’s needs, they were committed to the NBIS concept. Therefore, they committed to send two half-time detailees each fitting one of the two types of detailees NBIC had alternately requested. These personnel details were ongoing as of October 2009. According to agency officials, they committed to a shorter detail than NBIC requested because they intend to use the current detail placement to help clarify for themselves what NBIC’s needs are and the extent to which the detail arrangement might be valuable to their agency. However, officials at the only other agency that had detailed personnel to NBIC told us that they had not renewed the detail agreement when it ended, in part because of budgetary challenges, but also because of a general perception at their agency that the detail had not been particularly valuable for the individual or for their agency. According to NBIC officials, the personnel details from this agency assisted NBIC immeasurably in both the analysis work and in thinking through how to grow and shape the personnel detail program. 
We also discuss in our work on practices for enhancing and sustaining collaboration the importance of defining and agreeing on roles and responsibilities, to allow each agency to clarify who will do what, organize their joint and individual efforts, and facilitate decision making. Our analysis of NIOC and NIWG post-meeting reports, NBIS tabletop exercise results, and interviews with NBIS agency officials reveals some ambiguity about NBIC's mission, roles, and responsibilities, particularly during a crisis. Officials from 8 of 14 components we interviewed expressed uncertainty about NBIC's role during a response relative to the biosurveillance capability provided by other agencies in the course of their routine, mission-critical duties. In large part, these officials said that if they had information to share that might involve a biological emergency, they would be more likely to interact with DHS's National Operations Center (NOC), at which NBIC has representation, than directly with NBIC. The after action report, as well as comments from these officials, show that such questions about NBIC's response role arose during a recent tabletop exercise. In our interviews, officials from seven components expressed concerns about NBIC's role in the exercise or real-life events, ranging from lack of clarity about what role NBIC played or should play to statements that the exercise showed clearly that NBIC has no proper role in event response. According to the memo that the moderator prepared after the tabletop exercise, although the NOC did not participate, some participants thought NBIC would have been bypassed in favor of the NOC. They said the NOC would perform the essential biosurveillance integration roles of coordinating and disseminating information across agencies, states, and the private sector. 
In addition, the memo notes that exercise participants were not in agreement about the proper role for NBIC in ongoing collection and dissemination of data specific to an identified event. Among the recommendations in the after-action memo was that NBIC work internally with the appropriate DHS parties, including the NOC, to write protocols defining NBIC’s role inside DHS. According to NBIC officials, they have followed up on this recommendation by, among other things, exploring it through the NIWG. Additionally, NBIC, the NOC, and other stakeholders have initiated discussions about how to develop appropriate protocols. A related issue that came to light during the tabletop exercise and was a theme in interviews with NBIS officials is the extent to which NBIS partners trust NBIC to use their information and resources appropriately. According to the exercise after-action memo, participants repeatedly raised concerns about trusting NBIC with data, and participants also expressed concern that NBIC would reach the wrong conclusions or disseminate erroneous data or reports. Similarly, in our semistructured interviews, officials from 5 of 14 components said they were cautious about sharing data or information with NBIC because they lack confidence that NBIC will either interpret it in the appropriate context or reach back to the agency to clarify before sharing the data across the whole interagency community. These comments generally noted concerns that NBIC’s lack of contextual sophistication could lead to confusion, a greater volume of unnecessary communication in the biosurveillance environment, or even panic. NBIC officials acknowledged that subject matter expertise from the agencies with frontline responsibility for disease surveillance is essential for drawing appropriate conclusions about emerging situations. However, they also noted that analysts at NBIC have experience with public health and have been building their expertise as the program matures.
Clearly identifying how NBIS resources, including personnel details, will be leveraged and establishing institutional roles and responsibilities could strengthen NBIC’s efforts to obtain buy-in for agencies to fully participate in the NBIS, including by committing to personnel detail arrangements. We have previously reported that federal agencies can use their strategic and annual performance plans as tools to drive collaboration with other agencies and partners. Such plans can also reinforce accountability for collaboration by establishing performance measures and aligning agency goals and strategies with those of the collaborative efforts. Using established performance measures to evaluate and report on the effectiveness of collaboration could identify ways to improve it. NBIC’s draft strategic plan outlines milestones, goals, objectives, and key tasks needed for NBIC to meet its mission. These tasks include, among other things, defining an information-sharing strategy among its stakeholders, deploying IT to support its mission, and establishing standard operating procedures. However, although it acknowledges that interagency cooperation and collaboration remain concerns to be resolved, the strategic plan does not address how NBIC will improve collaboration among current and potential NBIS member agencies or how it will measure collaborative results. NBIC’s draft strategic plan includes one proposed performance metric related to collaboration with NBIS partners—to assess current collaboration activities for relevance and contribution to NBIS mission requirements. However, the plan lacks a discussion of strategic objectives to achieve collaboration and, correspondingly, lacks associated measures and targets to monitor efforts to achieve collaborative results.
Strategic objectives for collaboration and associated targets and measures could provide NBIC with a critical tool to help ensure that it appropriately focuses its efforts on enhancing collaboration with NBIS members and that the desired results are achieved. NBIC has the means to engage NBIS partners through the organizations that help organize and manage the NBIS community—the NIOC and the NIWG—but our analysis shows the integration center has not yet fully leveraged these groups to develop effective collaboration strategies. The purpose of the NIOC and NIWG governance bodies is to provide strategy and policy advice on the operation of the NBIS. Information on the status of NBIC’s efforts to achieve its mission has been provided to the NIOC, an oversight council serving the NBIS community, but substantive discussion of strategies for overcoming barriers to collaboration that affect NBIC’s execution of its mission did not occur during meetings with the NIOC. For example, post meeting reports from the NIOC—the higher level strategic governance body for the community of NBIS partners—show that the NBIC director routinely gave a status update of the MOUs and interagency agreements for each agency, during which agencies report the status from their perspective. However, in these segments of the NIOC meetings, the post meeting reports reflect little, if any, discussion of the reasons NBIS agency officials cited in our interviews for not finalizing the agreements. Nor do the reports show any focused effort to discuss barriers to participation or solutions to working across agency boundaries. Post meeting reports from the NIWG—the operational-level working group—between March 2008 and May 2009 reflect only one discussion during which the need to finalize agreements was addressed.
Although the NIWG has formed a sub-working group specifically to address collaboration, our review of the post meeting reports shows that neither the full NIWG nor the sub-working group has been effectively engaged in a focused effort to identify, discuss, and address challenges to working across agency boundaries. According to NBIC officials, they place contentious issues before the NBIS governance structure in a way that may not be clearly captured in post meeting reports. NBIC officials noted that the post meeting reports do not clearly reflect the numerous times they have made proposals for solutions to problems and have been met with silence from the attendees. However, they acknowledge that they have approached the NBIS governance bodies seeking buy-in for their proposals for tactical and operational approaches rather than an open-ended discussion seeking strategic solutions to the broader barriers to information and resource sharing. Leveraging these bodies to get meaningful input from NBIS-partner leadership could help NBIC ensure that it is able to identify commonly accepted solutions to working across agency boundaries. Enhancing the federal government’s ability to detect and warn of biological events of national concern and to provide better situational awareness for response to those events depends on multiple actors inside and outside the federal government working together effectively. The 9/11 Commission Act charged NBIC with early detection and situational awareness, but both the act and the operational guidance NBIC has developed acknowledge that this is to be done, in large part, through the NBIS—a multi-agency collaborative community. Despite the critical role of this collaborative community in achieving the act’s charge, the act does not require any specific agency to participate in the NBIS or to support the integration center.
Therefore, it is imperative that NBIC employ collaborative practices to enhance and sustain collaboration across the NBIS so that this community of federal partners is fully and effectively engaged in pursuit of the overarching missions of early detection and enhanced situational awareness. Although NBIC has made some efforts to strengthen relationships with and solicit participation from NBIS partners, working with the NBIS to develop a strategy for collaboration that includes key collaboration practices identified in our previous work could help the integration center promote more effective collaboration. During the course of our review, officials from the NBIS community recounted a number of constraints on their participation, including concerns about the clarity of NBIC’s mission and the ends to which shared information and resources would be used. We have previously reported that having a mission statement helps to clarify an agency’s focus and purpose. Moreover, our prior work on enhancing and sustaining collaboration in the federal government advises that practices such as articulating common outcomes, identifying appropriate resources to be shared, clarifying roles and responsibilities, and developing mechanisms to monitor performance and accountability could help NBIC address barriers to collaboration. However, NBIC has not formulated goals and objectives for overcoming barriers to collaboration and has no supporting performance and accountability mechanisms—such as performance measures—to help ensure that it is pursuing those goals effectively. In addition, although NBIC has created the NIOC and NIWG to provide strategic and operational advice on how the NBIS should function, NBIC has not effectively engaged them in a focused effort to identify shared solutions for overcoming barriers to collaboration and creating buy-in for joint strategies, policies, procedures, roles, and responsibilities.
A strategy for helping ensure that NBIC applies key collaborative practices effectively and consistently, that draws on the existing intellectual resources of its strategic partners in the NIOC, and that includes mechanisms to monitor performance and accountability for collaborative results, may help NBIC and NBIS partners to identify and overcome challenges to sharing data and personnel for the purposes of earlier detection and enhanced situational awareness of potentially catastrophic biological events. In order to help NBIC ensure that it effectively applies practices to enhance and sustain collaboration, including the provision of data, personnel, and other resources, we are making the following two recommendations to the Director of NBIC: In conjunction with the NIOC, finalize a strategy for more effectively collaborating with current and potential NBIS members, by (1) clearly defining NBIC’s mission and purpose, along with the value of NBIS membership for each agency; (2) addressing challenges to sharing data and personnel, including clearly and properly defining roles and responsibilities in accordance with the unique skills and assets of each agency; and (3) developing and achieving buy-in for joint strategies, procedures, and policies for working across agency boundaries. Establish and use performance measures to monitor and evaluate the effectiveness of collaboration with current and potential NBIS partners. We provided a draft of this report for review and comment to the following agencies: DHS, HHS, USDA, and the Departments of Commerce, Defense, Interior, Justice, State, Transportation, and Veterans Affairs, as well as the Environmental Protection Agency and the United States Postal Service. DHS provided written comments on December 10, 2009, which are summarized below and presented in their entirety in appendix I of this report.
HHS, USDA, and the Departments of Commerce, Defense, Interior, Justice, Transportation, and Veterans Affairs, as well as the Environmental Protection Agency and the United States Postal Service, did not provide written comments. We incorporated technical comments from DHS, USDA, and the United States Postal Service where appropriate. DHS generally concurred with our findings and recommendations and stated that NBIC will work with the NIOC and all NBIS partners to develop a collaboration strategy to clarify both the mission space and roles and responsibilities of all NBIS partners. DHS has taken initial steps to implement our recommendations. For example, DHS noted that at the December 9, 2009, quarterly NIOC meeting, the Assistant Secretary for Health Affairs and Chief Medical Officer for DHS, Dr. Alex Garza, referenced this report’s findings and challenged NIOC members to work to resolve and address confusion regarding NBIS and NBIC. We are encouraged by DHS’s efforts to engage the NIOC to identify and overcome barriers to collaboration; continuing to work with the NIOC to develop and finalize a strategy for collaboration could help NBIC overcome challenges to sharing data and personnel. In addition, monitoring the effectiveness of collaboration through the use of performance metrics could help NBIC ensure it is progressing toward its goal of obtaining the resources necessary to accomplish its mission of early detection and situational awareness of biological events of national concern. While DHS stated that we clearly identify the challenges faced by NBIC in carrying out its mission, the department also commented that the lack of a legal requirement for other federal agencies to participate in the NBIS prevents DHS from compelling the cooperation that is needed to ensure success of the NBIC mission. As we noted in our report, the lack of a legal requirement is what makes the effective use of collaboration best practices crucial for NBIC to be successful.
We are sending copies of this report to the Secretary of Homeland Security, Secretary of Health and Human Services, Secretary of Agriculture, Secretary of Commerce, Secretary of Defense, Secretary of the Interior, Attorney General, Secretary of State, Secretary of Transportation, and the Secretary of Veterans Affairs, as well as the Administrator of the Environmental Protection Agency, the Postmaster General, the Director of NBIC, and interested congressional committees. The report is also available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. William O. Jenkins, Jr., Director, Homeland Security and Justice Issues. In addition to the contact named above, Anne Laffoon, Assistant Director; Michelle Cooper; Clare Dowdle; Kathryn Godfrey; and Andrea Yohe made significant contributions to the work. Keira Dembowski, Susanna Kuebler, Alberto Leff, and Juan Tapiavidela also provided support. Amanda Miller assisted with design, methodology, and analysis. Tracey King provided legal support. Linda Miller provided communications expertise.
Recently, there has been an increased focus on developing the ability to provide early detection of and situational awareness during a disease outbreak. The Implementing Recommendations of the 9/11 Commission Act sought to enhance this capability, in part, by creating the National Biosurveillance Integration Center (NBIC) within the Department of Homeland Security. NBIC is to help provide early detection and situational awareness by integrating information and supporting an interagency biosurveillance community. The act directed the Government Accountability Office (GAO) to report on the state of biosurveillance and resource use in federal, state, local, and tribal governments. This report is one in a series responding to that mandate. This report focuses on the actions taken by NBIC to (1) acquire resources to accomplish its mission and (2) effectively collaborate with its federal partners. To conduct this work, GAO reviewed documents, such as NBIC's Concept of Operations, and interviewed officials at NBIC and 11 federal partners. To carry out its early detection and situational awareness mission, NBIC has made efforts to acquire data from the integration center's community of federal partners, obtain analytical expertise from other agencies, establish governance bodies to develop and oversee the community of federal partners, and provide information technologies to support data collection, analysis, and communication. However, NBIC does not receive the kind of data it has identified as most critical for supporting its early detection mission--particularly, data generated at the earliest stages of an event. In addition, NBIC has faced challenges leveraging the expertise of its federal partners. For example, NBIC officials have emphasized the importance of agencies temporarily assigning personnel to supplement the expertise at NBIC. However, only 2 of 11 partner agencies have assigned personnel to support the integration center. 
NBIC has developed governance bodies that provide oversight for the integration center and the interagency community. Although the integration center has also developed an information technology system, it is primarily used to help identify and collect publicly available Internet data because NBIC lacks data from federal partners that best support the early detection goal of biosurveillance. NBIC is not fully equipped to carry out its mission because it lacks key resources--data and personnel--from its partner agencies, which may be at least partially attributed to collaboration challenges it has faced. Integrating biosurveillance data is an inherently interagency enterprise, as reflected by both law and NBIC's strategy for meeting its mission. NBIC is to help coordinate and support a community of federal partners for early detection and enhanced situational awareness. Consequently, for NBIC to obtain the resources it needs to meet its mission, it must effectively employ collaborative practices. However, in interviews with partner agencies, GAO encountered widespread confusion, uncertainty, and skepticism about the value of participation in the interagency community, as well as the mission and purpose of NBIC within that community. Further, interviews with agency officials demonstrated a lack of clarity about roles, responsibilities, joint strategies, policies, and procedures for operating across agency boundaries. We have previously reported on key practices that can help enhance and sustain collaboration among federal agencies.
For collaborating agencies to overcome barriers to working together, they need to, among other things, (1) develop a clear and compelling rationale for working together by articulating a common federal outcome or purpose; (2) establish joint strategies, policies, and procedures to help align activities, core processes, and resources; (3) identify resources needed to initiate or sustain their collaborative effort; (4) work together to define and agree on their respective roles and responsibilities; and (5) develop accountability mechanisms to guide implementation and monitoring of their efforts to collaborate. Development of a strategy for collaboration and the use of these key collaboration practices could enhance NBIC's ability to foster interagency data and resource sharing.
The Army Corps of Engineers (the “Corps”) and the Department of the Interior’s Bureau of Reclamation (the “Bureau”) operate about 130 hydropower plants at dams throughout the nation. These plants generate electricity from the flow of water that is also used for other purposes, including fish and wildlife enhancement, flood control, irrigation, navigation, recreation, and water supply. Since about the 1930s, electricity that is generated by these hydropower plants has played an important role in electricity markets. These plants were a key element in electrifying rural and sparsely populated areas of the nation. These plants accounted for over 35,000 megawatts (MW) of generating capacity (or about 5 percent of the nation’s total electric supply) in 1998. The Department of Energy’s power marketing administrations (PMA) generally market the electricity generated at these plants to wholesale customers (the “power customers”), such as rural electric cooperatives and municipal utilities, that in turn sell the electricity to retail customers. (Fig. 1.1 shows the service areas of the PMAs.) Revenues earned from the sale of this electricity totaled over $3 billion in fiscal year 1997. These revenues pay for the operation and maintenance of the government’s electricity-related assets and repay a portion of the outstanding federal appropriated and other debt of about $22 billion for the Bureau’s and the Corps’ power plants and related PMA transmission lines, as well as certain related federal investments for irrigation, water supply, and other facilities that are to be repaid over time from electricity revenues. The revenues also pay interest on the outstanding appropriated debt, where applicable. In traditional markets, electric utilities enjoyed relative certainty about the amount of demand they would have to satisfy in the future. A compact existed between utilities and state public utility commissions.
Utilities were obligated to serve all existing and future customers in their pre-established service areas. In return, utilities were granted monopolies within their service areas and approved rate schedules that guaranteed stated earnings on their operating costs and investments. They forecasted the load they would serve by using econometric and end-use analysis models over future periods of time that were as long as 20 years. They collected sufficient funds in their electric rates to pay for needed generating capacity and to operate, maintain, and repair existing power plants and other electricity assets. The funds collected through rates also included profits. However, the nation’s electricity markets are undergoing significant changes. The Energy Policy Act of 1992 significantly increased competition in wholesale electricity markets. In addition, competition at the retail level is now arriving. According to the Department of Energy’s Energy Information Administration, as of March 1999, 18 states had acted—by legislation that had been enacted (14 states) or by regulatory order (4 states)—to restructure electricity markets. Regulators in these states expected that industrial, commercial, and, ultimately, residential consumers would be able to choose their electricity supplier from among several competitors, rather than being tied to one utility. As competition has increased, the rates paid by consumers for electricity have dropped, and they should continue to do so. For example, according to the Energy Information Administration, as a result of such factors as emerging competition and new, more efficient generating technologies, retail electricity rates decreased by about 25 percent from 1982 through 1996, after factoring in the impact of inflation. The administration expects electricity rates to continue to decrease in real terms by 6 percent to 19 percent by 2015.
In recent years, uncertainty about the pace and extent of competitiveness in electric markets has caused utilities to be more flexible. Utilities have relied more on purchasing electricity from other sources or acquiring new power plants, such as smaller natural-gas-fired plants, that are less expensive and more flexible for meeting shifting demand. They have also cut costs by reorganizing and reducing staff, and they have consolidated or merged with other utilities where they believed it was appropriate. For example, after years of virtually no mergers, from October 1992 to January 1998, investor-owned utilities had proposed over 40 mergers and completed 17 of them, according to the Edison Electric Institute. In addition, according to utility officials, some utilities are retiring or divesting some high-cost power plants, while others are buying those same plants to serve a niche in their resource portfolios. According to utility officials, in more stable electricity markets, utilities and federal agencies maintained and repaired their hydroelectric and other power plants according to a schedule that was predetermined by the manufacturer’s specifications and the operating history of the plant. Maintenance and repairs were frequently made at this predetermined time whether or not they were needed. Because maintenance or repairs could have been performed later or less frequently, perhaps with lower costs, some Bureau and utility officials that we contacted characterized these practices as over-maintenance of the hydropower plants. These practices, according to an industry consultant, were seldom questioned partly because of the low costs and resiliency of hydropower plants—especially of those placed into service during the 1950s. However, as markets become more competitive, federal agency, utility, and electric industry officials have increasingly viewed hydropower plants as particularly useful to utilities’ overall operations. 
One of hydropower’s important traits is its flexibility in meeting different levels of demand. This characteristic, according to utility officials, means that hydropower plants will likely continue to play a significant role in meeting demand during peak periods and providing ancillary services, without which electricity systems cannot operate. Currently, utilities provide these services routinely. However, according to Bureau, PMA, and utility officials, depending upon actions taken by federal and state regulators in the near future, a separate market may develop for ancillary services. These services may be priced separately and may allow utilities with hydropower to capture a market niche and earn additional revenues. In response to new markets and perceptions about the role of hydropower in those markets, federal agencies and some utilities have reconsidered how they operate, maintain, and repair their hydropower plants. For example, some utilities have implemented less-expensive, more-flexible maintenance practices, which consider such factors as the generating size of a utility’s hydropower plants, those plants’ roles in the utility’s generation portfolio, and marketing and economic considerations. One such approach, called “Reliability Centered Maintenance,” is defined as a maintenance philosophy that attempts to make use of the most logical, cost-effective mix of breakdown maintenance, preventive maintenance, and predictive testing and proactive maintenance to attain the full life of the equipment, reduce maintenance costs, and encourage reliable operations. For example, according to some utilities we contacted, in determining when to maintain or repair equipment, they are relying increasingly on the use of monitoring equipment to detect changes in the operating conditions of the equipment, instead of performing those actions in a prescheduled manner, as in the past. 
On the basis of these examinations, the utility may decide to repair or replace the component. Alternatively, the utility may decide to stretch out the operation of the component to the point of near-failure. Some components may actually be run until they fail. However, according to Corps and utility officials, in the cases of some smaller hydropower units, installing monitoring equipment at a cost of $200 to $500 per unit may not make economic sense. Other measures may also be used to monitor the operating condition of equipment. For example, the Corps tests the lubricating oil to indicate the condition of its generating equipment. Also, in some cases, when deciding how and when to maintain and repair generating units, management now considers the plant or the unit as an individual cost center that must make a positive contribution to the utility’s bottom line. In such an environment, plant managers will become more aware of the production costs and will exert increased pressures to cut costs at the plant and at the corporate levels. Plant managers may become aware that a utility may actually shut down and sell a generating unit if operating or repairing it does not return a required, positive financial return. As market competition intensifies, utilities will face increasing pressures to operate as efficiently and cost-effectively as possible. Utilities’ management will need to know how well their plants are producing electricity in order to make informed decisions about how to allocate scarce dollars for maintaining and repairing power plants, where to cut costs, or, in more extreme cases, which generating units to sell or shut down. An important concept for defining power plants’ performance is the “reliability” with which plants generate electricity. Within the electric utility industry, power plants are viewed as “reliable” if they are capable of functioning without failure over a specific period of time or amount of usage. 
The availability factor and the related outages factors are widely accepted measures of the reliability of power plants. The time a generating unit is “available” to generate electricity is the time it is mechanically able to generate electricity because it is not malfunctioning unexpectedly or because it is not being maintained or repaired. For instance, if a unit were available to generate electricity 8,000 hours out of the 8,760 hours in a year, then its availability factor would be 8,000 hours divided by 8,760 hours, or about 91.3 percent. When a unit is unable to generate electricity because it is broken, being repaired, or being maintained, it is in outage status. Outages are further classified as “scheduled” outages if the unit is unable to generate electricity because it is undergoing previously scheduled repairs or maintenance. If a unit is unable to generate electricity because of an unexpected breakdown and/or if unanticipated repairs need to be performed, then it is in “forced outage” status. If a plant were in scheduled outage status for 100 hours over the course of one year, then its scheduled outage factor would be 100 hours divided by the 8,760 hours in a year, or 1.1 percent. If a plant were in a forced outage status for 600 hours, then its forced outage factor would be 600 hours divided by the 8,760 hours in the year, or 6.8 percent of the time. For any generating unit, the availability factor, the scheduled outage factor, and the forced outage factor, added together, should equal 100 percent because, taken together, they account for a plant’s entire operating status over a period of time. Assessing the performance of a hydropower plant or unit by examining its availability factor calls for understanding additional variables that would affect its performance. 
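The arithmetic above can be sketched in a short Python function. This is an illustrative sketch: the function name and the example outage hours (100 scheduled, 600 forced, echoing the figures in the text) are ours, not from the agencies' data.

```python
HOURS_PER_YEAR = 8_760  # hours in a non-leap year, the denominator used above

def operating_factors(scheduled_outage_hours, forced_outage_hours,
                      hours_per_year=HOURS_PER_YEAR):
    """Return (availability, scheduled outage, forced outage) factors in percent."""
    available_hours = hours_per_year - scheduled_outage_hours - forced_outage_hours
    return (100 * available_hours / hours_per_year,
            100 * scheduled_outage_hours / hours_per_year,
            100 * forced_outage_hours / hours_per_year)

# A unit down 100 hours for scheduled work and 600 hours for forced outages:
avail, sched, forced = operating_factors(100, 600)
print(f"availability {avail:.1f}%, scheduled {sched:.1f}%, forced {forced:.1f}%")
# availability 92.0%, scheduled 1.1%, forced 6.8%

# The three factors account for the unit's entire operating status,
# so they always sum to 100 percent.
assert abs(avail + sched + forced - 100) < 1e-9
```

Note that the text's 91.3 percent availability example corresponds to a unit with 760 total outage hours (8,760 minus 8,000 available hours), split between the scheduled and forced categories in whatever proportion the year's events dictate.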
Many officials we contacted said that the availability factor needs to be understood in terms of such factors as the role played by the plant in terms of the kind of demand that it meets (for instance, whether it meets peak demand), the availability of water throughout the year, and the purposes satisfied by the dam and reservoir. For example, according to a utility consultant, because water is abundant at the New York Power Authority’s Niagara Power Project, the generating units are used primarily to satisfy nonpeak loads. Therefore, the utility attempts to operate and maintain those units to be on line as much as possible. To do otherwise entails a loss of generating revenues that could be earned almost 24 hours per day. Nevertheless, officials at every utility we contacted said that they achieved an availability of at least 90 percent, and the Bureau and the Corps have formal goals of attaining that availability level. As requested by the Chairman, Subcommittee on Water and Power, House Committee on Resources, we examined the (1) reliability of the Bureau’s and Corps’ hydropower plants in generating electricity compared with the reliability of nonfederal hydropower plants; (2) reasons why the Bureau’s and the Corps’ plants may be less reliable than nonfederal plants and the potential implications of reduced reliability; and (3) actions taken to obtain funding to better maintain and repair the Bureau’s and the Corps’ plants. To compare the generating reliability of the Bureau’s and the Corps’ hydropower plants with nonfederal ones, we obtained, analyzed, and contrasted power plants’ performance data, including availability and outages factors, from the Bureau, the Corps, and the North American Electric Reliability Council. 
We discussed the limitations of these performance indicators with officials from the Bureau, the Corps, the PMAs, the Tennessee Valley Authority, investor-owned utilities, publicly owned utilities, and other experts in the electric utility industry. To explore why federal hydropower plants sometimes performed at lower levels, we obtained and analyzed various reports on the subject and discussed the topic with representatives of the Bureau, the Corps, the PMAs, various PMA electricity customers or their associations, investor-owned utilities, and nonfederal, publicly owned utilities. Moreover, in addressing the implications of any reduced performance by federal plants, we interviewed industry experts, representatives of investor-owned and publicly owned utilities, officials of the PMAs, and the PMAs’ electricity customers. We also examined studies about the changes in electricity markets. In examining steps to secure funding to better maintain and repair the Bureau’s and the Corps’ plants, we studied the efforts of the Corps, the Bureau, and the PMAs to pay for the maintenance and repair of federal hydropower assets more quickly and with greater certainty. In this regard, we contacted the Bureau, the Corps, the PMAs, and the PMAs’ power customers at several different locations, including Denver, Colorado; Boise, Idaho; Portland, Oregon; and Sacramento, California. At these locations, we also examined any funding agreements concluded by these parties and asked detailed questions about the benefits and other implications of these agreements. Our analysis was based on the assumption that the Bureau’s and the Corps’ hydropower plants, the related facilities, and the PMAs would continue to exist under some form of federal ownership. 
In examining other steps to secure enhanced funding, we relied to the greatest extent possible upon previous work that we had performed on federal electricity, especially work performed during two prior reviews—Federal Power: Options for Selected Power Marketing Administrations: Role in a Changing Electricity Industry (GAO/RCED-98-43, Mar. 6, 1998) and Federal Power: Outages Reduce the Reliability of Hydroelectric Power Plants in the Southeast (GAO/T-RCED-96-180, July 25, 1996). Our work was performed at many different locations that included various power plants and offices of the Bureau, the Corps, Bonneville, Southeastern, Southwestern, and Western; investor-owned utilities; and publicly owned utilities. We also contacted national and regional industry trade associations. Our work was performed from July 1998 through February 1999 in accordance with generally accepted government auditing standards. Appendix I contains a more complete description of our objectives, scope, and methodology. Within the electric utility industry, power plants are viewed as “reliable” if they are capable of functioning without failure during a specific period of time or amount of usage. From 1993 through 1997, the reliability of the Bureau’s hydropower plants improved, while the Corps’ remained about the same. However, the Bureau’s and the Corps’ hydropower plants are generally less reliable in generating electricity than nonfederal plants. The Bureau’s and the Corps’ hydropower generating units have been in outage status more of the time for forced and scheduled outages. Importantly, the reliability of the Bureau’s and the Corps’ plants in the Pacific Northwest is generally below that of Bureau and Corps plants elsewhere and also below that of nonfederal plants in the region and elsewhere. 
The Bureau’s and the Corps’ plants in the region account for over half of these agencies’ total generating capacity and almost all of the power marketed by the Bonneville Power Administration (Bonneville)—the largest of the PMAs in terms of power sales. Nationwide, both the Bureau’s and the Corps’ generating units are less available to generate electricity than those of nonfederal utilities and providers; however, the Bureau’s availability factor has been improving, while the Corps’ remained about the same. (See fig. 2.1.) Generating units that have malfunctioned unexpectedly or are undergoing maintenance and repairs are not considered to be available. Generating units that are more available to generate electricity are considered to be more reliable. The availability factor is considered to be a key indicator of reliability, according to the Bureau. From 1993 through 1997, nonfederal hydropower generating units were available to generate electricity an average of 91.3 percent of the time. During that same period, the Bureau’s hydropower units were available an average of 83.3 percent of the time (or 8 percentage points less than the average for nonfederal units) and the Corps’ hydropower units were available an average of 88.8 percent of the time (or 2.5 percentage points less than nonfederal units). The availability factor for nonfederal units from 1993 through 1997 was relatively unchanged. The Bureau’s availability factor improved from 80.9 percent of the time in 1993 to 86.6 percent in 1997. The Bureau believes that one reason for its lower availability factors is that more of its plants are located on pipelines, canals, and water diversion facilities in comparison with most nonfederal plants. The Corps’ availability factor was relatively unchanged—declining slightly from 89.6 percent in 1993 to 89.2 percent in 1997. Corps officials later provided us with data showing an availability factor of 89.5 percent in 1998. 
Also, the Bureau provided us with data showing an availability factor of 88.5 percent in 1998. If generating units are not available to generate electricity, they are said to be in “outage” status. Because the Bureau’s and the Corps’ generating units were less available to generate electricity than the rest of the industry, they also had higher outages factors. The longer or more frequent its outages, the less available a unit is to generate electricity. (See fig. 2.2.) From 1993 through 1997, the hydropower units of the Bureau were in outage status an average of 16.7 percent of the time, and the Corps’ units were in outage status an average of 11.2 percent of the time. In contrast, nonfederal units were in outage status an average of 8.7 percent of the time. From 1993 through 1997, the Corps’ total outage factor was relatively unchanged, whereas the Bureau’s decreased from 19.1 percent in 1993 to 13.4 percent in 1997. Nonfederal units’ total outages factors were relatively unchanged. Examining the types of outages that occur indicates why generating units were not in service. Along with the availability factor, the forced outage factor is a key indicator of decreasing reliability because it shows that unexpected outages occurred, indicating inconsistent operations. According to the Bureau’s 1996 benchmarking study, the lower the forced outage factor, the more reliable the generating unit is considered to be. From 1993 through 1997, the average forced outage factor for the Bureau was 2.3 percent and the Corps’ was 5.1 percent. The average forced outage factor for nonfederal hydropower units was 2.3 percent—the same as the Bureau’s but less than the Corps’. (See fig. 2.3.) However, it should be noted that the Corps’ forced outage factor declined—from almost 6 percent in 1995 to 4.5 percent in 1997. According to the latest data provided by the Corps, the agency’s forced outage factor declined even further to under 3.2 percent in 1998. 
According to a Corps official, this improvement is the result of the agency’s $500 million effort, implemented or identified for implementation from fiscal year 1993 through 2009, to rehabilitate its hydropower plants. Scheduled outages are, by definition, anticipated. Nevertheless, scheduled outages factors also reflect the amount of time that a generating unit was off-line and unable to provide a utility’s customers with electricity. According to the Bureau’s 1996 benchmarking study, the longer a scheduled outage, the less efficient the maintenance program. In our view, a more efficient maintenance program would have placed the generating unit into service faster, thereby enabling the utility to provide its customers with more service and hence possibly earn more revenues. In the case of scheduled outages, from 1993 through 1997, the Corps’ average scheduled outage factor was 6.3 percent and the Bureau’s was 14.4 percent. The average scheduled outage factor for nonfederal utilities was 6.4 percent. However, from 1993 through 1997 the Bureau’s scheduled outage rate showed an improvement—decreasing from 17.1 percent in 1993 to 11.3 percent in 1997—while the Corps’ and the industry’s trends in scheduled outages factors were relatively unchanged. (See fig. 2.4.) Taking longer scheduled outages at opportune times is a management decision that may be considered good business practice, even though such decisions decrease a generating unit’s availability to generate electricity. For example, the Bureau and some electric utilities extend scheduled outages to perform additional repairs during periods when the water is not available for generating electricity or the unit is not needed to meet demand. Also, labor costs are minimized by avoiding the payment of overtime wages. However, according to some Bureau, PMA, and utility officials, these practices may change as markets evolve. 
Hydropower units may need to be available to generate electricity more of the time in order for the utility to take advantage of new market opportunities. For example, supplying an ancillary service, such as providing reserve capacity, may allow a utility to earn added revenues while not actually generating electricity; however, the unit must be in operating condition (“available”) to generate electricity. The reliability of the Bureau’s and the Corps’ hydropower plants in the Pacific Northwest is important to the overall reliability of the Bureau and the Corps. The generating units of those plants account for over half of the Bureau’s and the Corps’ total hydropower capacity. In addition, those plants provide almost all of the generating capacity from which Bonneville, the largest PMA, markets electricity. However, the reliability of the Bureau’s and the Corps’ plants in the Pacific Northwest was below that of nonfederal plants in the region. In addition, the reliability of the Bureau’s and Corps’ plants in the region was also generally below that of the Bureau’s and Corps’ plants elsewhere and below that of nonfederal plants in other regions. As shown in chapter 4, Bonneville, the Bureau, and the Corps are undertaking extensive upgrades and rehabilitations of the federal plants. These actions occurred, in part, as a result of the increased funding flexibility provided by the agreements under which Bonneville would directly pay for the operation, maintenance, and repair of these assets. The availability factor of the Bureau’s units improved over time. The availability of the Corps’ units was slightly below that of nonfederal plants, but it declined slightly from 1993 to 1997. However, the Corps’ units had a forced outage factor over twice as high as that of nonfederal units in the region, indicating inconsistent plant performance, while the Bureau’s units had a scheduled outage factor that was almost three times that of nonfederal units. 
From 1993 through 1997, the Bureau’s units in the Pacific Northwest were available to generate power an average of about 78.7 percent of the time, and the Corps’ units were available an average of 85.4 percent of the time. In contrast, nonfederal hydropower units in the region were available an average of 89.7 percent of the time. The Bureau’s availability factor improved from a level of 74 percent in 1993 to 85 percent in 1997, and the Corps’ availability factor decreased from 87.9 percent in 1993 to 85.7 percent in 1997. In contrast, the availability factors of nonfederal units decreased slightly from 91.8 percent in 1993 to 90.3 percent in 1997. In the Pacific Northwest, from 1993 through 1997, the Bureau’s units were in outage status an average of 21.3 percent of the time, and the Corps’ units were in outage status an average of 15.3 percent of the time, compared with an average of 10.3 percent of the time for nonfederal units in the region. The Bureau’s outage factor decreased from about 26 percent in 1993 to 15 percent in 1997, while the Corps’ increased slightly from 12.1 percent in 1993 to 14.3 percent in 1997. The outage factor for regional nonfederal units increased from 8.2 percent in 1993 to 9.7 percent in 1997. The Corps’ units performed more inconsistently than nonfederal units because from 1993 through 1997, the Corps’ units had higher forced outages factors (an average of 6.4 percent) than the Bureau’s units (an average of 1.9 percent) and nonfederal units (an average of 3.1 percent). The Corps’ forced outage factor in 1994 was about 5 percent and increased to over 7 percent in 1995 and 1996, before declining to about 5.6 percent in 1997. In contrast, the Bureau’s forced outage factor was lower than the nonfederal producers’ but increased from 1.3 percent in 1993 to 1.9 percent in 1997. Nonfederal producers had a forced outage factor that increased from 1.5 percent in 1993 to 3.2 percent in 1997. 
According to the Corps’ Hydropower Coordinator, the higher forced outage factor for the Corps’ units in the region pertained to the operation of fish screens and other equipment designed to facilitate salmon migrations around the Corps’ units. This equipment breaks or needs to be maintained, causing decreases in availability. During fiscal year 1998, at the Corps’ McNary and Ice Harbor plants, forced outages related to fish passage equipment were 30 and 15 percent, respectively, of the total hours in which the plants experienced forced outages. However, from 1993 through 1997, the Bureau’s units had higher scheduled outages factors (an average of 19.4 percent) than both the Corps’ units (an average of 8.9 percent) and nonfederal units (an average of 7.2 percent). The Bureau’s scheduled outages factors were far higher than those of nonfederal parties but decreased from 24.7 percent in 1993 to 13.2 percent in 1997. The Corps’ scheduled outage factor decreased from 9.6 percent in 1994 to 8.8 percent in 1997. Nonfederal parties had a scheduled outage factor that increased from 6.7 percent in 1993 to 8.4 percent in 1994 before falling to 6.5 percent in 1997. The Bureau’s and the Corps’ plants were less reliable than nonfederal plants partly because, under the federal planning and budget cycle, they could not always obtain funding for maintenance and repairs when needed. We found that funding for repairs can take years to obtain and is uncertain. As a result, the agencies delay repairs and maintenance until funds become available. In addition, the Anti-Deficiency Act and other statutes require that federal agencies not enter into any contracts before appropriations become available, unless authorized by law. Such delays can lead to maintenance backlogs and to inconsistent, unreliable performance. The PMAs’ electricity generally is priced less than other electricity. 
However, because markets are becoming more competitive, the PMAs’ customers will have more suppliers from which they can buy electricity. In some power marketing systems—for example, Bonneville’s service area—competition during the mid-1990s allowed some customers to leave or buy some of their electricity from other sources, rather than continuing to buy from Bonneville. Reliability is a key aspect of providing marketable power. For example, according to Bonneville, in large hydropower systems, the PMAs’ ability to earn electricity revenues depends, in part, on the availability of hydropower generating units to generate power. In more competitive markets, the reliability of the federal electricity will have to be maintained or improved to maintain the competitiveness of federal electricity and thus help ensure that the federal government’s $22 billion appropriated and other debt will be repaid. In addition, the Congress, the Office of Management and Budget (OMB), and we have been working to help ensure that the purchase and maintenance of all assets and infrastructure have the highest and most efficient returns to the taxpayer and the government. The federal planning and budgeting process takes at least 2 full years and does not guarantee that funds will be available for a specific project. This affects the ways in which the Bureau and the Corps plan and pay for the maintenance and repair of their hydropower plants. The federal budgeting process is not very responsive in accommodating the maintenance and repair of those facilities—it can take as long as 2 to 3 years before a repair is funded, if it is funded at all. Specifically, the project and field locations of the Bureau and the Corps identify, estimate the costs of, and develop their budget requests, not only for hydropower, but also for their other facilities, including dams, navigation systems, irrigation systems, and recreational facilities. 
The funding needs of these various assets compete with one another, and the repair of hydropower plants may be assigned a lower priority than other items. For example, officials of the Bureau’s office in Billings, Montana, described the budget process they expected to undergo to develop a budget for fiscal year 2000. The process began in August 1997, when the regional office received initial budget proposals from its area offices. During the ensuing months, the area offices; the region; the Bureau’s Denver office; the Bureau’s Washington State office; the Office of the Secretary of the Interior; and OMB reviewed, discussed, and revised the proposed area offices’ and regional office’s budgets, resulting in a consolidated budget for the Bureau and the Department of the Interior. Certainty about expected funding levels will not be obtained until sometime between February 1999, when OMB conveys the President’s budget to the Congress, and the enactment and approval of the Energy and Water Appropriations Act. The time that will elapse from August 1997, when the area offices began their budget processes, to October 1999 (the start of fiscal year 2000) totals 26 months. In addition, funding for the maintenance and repair of the Bureau’s and the Corps’ hydropower plants is uncertain. Agency officials and other policy makers, faced with scarce resources, especially in times of tight budgets, make decisions about where and where not to spend funds. As shown in the examples below, funding is not always provided to maintain and repair hydropower plants, even if the need is demonstrated. According to documentation that the Bureau provided us with, in 1983, detailed inspections of the generating units at the Shasta, California, hydropower plant found that generating components were deteriorating. The Bureau advised one of its federal power customers that it would seek funds in fiscal year 1984 for the repairs. 
However, OMB did not approve the requests because the units were not “approaching a failure mode.” Later, in 1990, the Bureau issued invitations to bid for the repairs; the bids received ranged from $9 million to $12 million. However, the project was dropped because the Bureau had budgeted only $6 million. In 1992, after an inspection to determine how far the deterioration had advanced, one generating unit’s operations were reduced. The inspectors also recommended repairing the other two units because the gains in generating capacity that would be achieved as a result of the repairs would enable Western to sell more electricity. The Bureau requested funds for the repairs in its fiscal year 1993 budget request; however, according to the Bureau’s records, OMB eliminated the request. The Bureau’s Budget Review Committee recommended that the project not be included in the agency’s fiscal year 1994 budget request and that the Bureau’s regional office “make a concerted effort to find non-federal financing.” The Corps’ Northwestern Division in Portland, Oregon, has also experienced difficulties in funding needed repairs. For example, at the Corps’ hydropower plant at The Dalles, Oregon, direct funding by Bonneville allowed the Corps to accomplish maintenance that, according to Corps officials, in all likelihood would not have been funded because of the funding constraints in the federal budget process. Beginning in late 1993, the Corps began preparing an evaluation report that was submitted to headquarters to replace major plant components on 14 units that had exhibited many problems over the years but were kept in service through intensive maintenance. The Congress approved funding for the major rehabilitation as part of the Corps’ fiscal year 1997 appropriations. 
However, after 2 of the units were out of service for an extended time, Bonneville and the Corps entered into an agreement in January 1995 for Bonneville to pay for the rewinding of the generator at unit 9. In February 1996, the rewinding of unit 7 was added to the agreement. In addition, Bonneville, in March 1996, agreed to fund the replacement of the excitation systems for The Dalles’ units 15 through 22, which were not included in the major rehabilitation funded by appropriations. Delayed or uncertain funding leads to delays or postponements of needed maintenance and repairs. These delays or postponements can result in maintenance backlogs that can worsen over time. After funding requests are identified and screened, funding may not be made available until up to 3 years in the future. The Corps has estimated a total maintenance backlog of about $190 million for its power plants in Bonneville’s service territory. However, according to Bonneville and Corps officials, the extent to which critical repair items are part of the backlog is a matter yet to be determined. In addition, according to Bonneville and Corps officials, the role of the approximately $190 million estimate for purposes of planning and budgeting under Bonneville’s and the Corps’ funding agreements is subject to debate. The Corps’ Hydropower Coordinator noted that carrying a maintenance backlog is not a bad management practice in and of itself, as long as it can be managed through planning and budgeting techniques. In contrast with the Corps, Bureau officials maintain that they have a policy of not deferring maintenance and repairs they consider to be critical, although noncritical items may be deferred. They added that the Bureau is free to reprogram funds when needed to fund repairs and maintenance. However, we noted that unfunded maintenance requirements for the Bureau exist. In the Pacific Northwest, the Bureau has been able to address these needs by securing new funding sources. 
Specifically, Bonneville and the Bureau in the Pacific Northwest have signed an agreement under which Bonneville’s power revenues will directly pay for about $200 million of capital repairs at the Bureau’s power plants. According to Bureau officials, some of these repairs would likely not have been made under the existing federal planning and budgeting processes because of limited and declining federal budgetary resources. Therefore, it is doubtful that these maintenance needs could have been addressed in a timely manner without a new funding mechanism. Failure to fund and perform maintenance and repairs in a timely fashion can lead to frequent and/or extended outages. These outages force the PMAs or their customers to purchase more expensive power than the federal agencies provided in order to satisfy their contractual requirements. For example, from 1990 through 1992, two or more units of the Corps’ Carters hydropower plant, in Georgia, were out of service at the same time for periods ranging from about 3 months to almost 1 year. A Southeastern official estimated that its wholesale customers had purchased replacement electricity for about $15 million more than they would have paid for power marketed by Southeastern. In another example, Southeastern officials estimated that customers of its Georgia-Alabama-South Carolina system had paid 22 percent more in 1990 than in the previous year partly as a result of extended, unplanned outages. Other factors that led to the rate increase included a drought and increases in operation and maintenance costs at the Corps’ plants. In addition, as previously noted in our Shasta example, the Bureau restricted the operation of one of the plant’s generators in response to deteriorating operating conditions. 
Although the average nonfederal hydropower generating unit is older (48 years) than the Bureau’s (41 years) and the Corps’ (33 years), the nonfederal units’ availability to generate power is greater than the Bureau’s and the Corps’. This is true because, according to utility officials, utilities ensure that sufficient funds exist to repair and maintain their generating units and thus promote a high level of generating availability. According to officials from three investor-owned utilities or holding companies and four publicly owned utilities with an average of about 2,458 MW of hydropower generating capacity, their hydropower units were available at least 90 percent of the time—sometimes in ranges approximating or exceeding 95 percent. Some officials said they would not tolerate significant reductions in their generating availability because their hydropower units play key roles in meeting demand during peak times. Under the traditional regulatory compact between states’ public utility commissions and utilities, the utilities have an obligation to serve all existing and future loads in their service territories. According to utility officials, to comply with these obligations, utilities implement planning and budgeting systems that ensure that they can pay for all necessary maintenance costs as well as critical repairs and replacements in a timely fashion. According to some utility officials, unlike under the federal budgeting system, utilities typically have the financial capability to quickly obtain funding to pay for unexpected repairs to their power plants. According to these officials, utilities are also able to accumulate funds in reserves to meet future contingencies, such as unexpected breakdowns and repairs of generating units. In addition, issuing bonds while allowing work to begin before the bonds are issued is another tool that utilities use to pay for and make repairs quickly. 
For example, according to officials of the Douglas County Public Utility District, the utility district can respond quickly to an unexpected breakdown because (1) it has access to some reserve funds, (2) its commissioners can approve funding via the issuance of bonds up to 18 months after work was begun on a repair, and (3) its budgeting process is fast and accurate. For example, the utility district in January 1999 was completing work on the budget for the next fiscal year that would begin in only 8 months—namely, August 1999. The budget for the utility district’s hydropower project reflects funding requirements for operations, maintenance, anticipated repairs, and debt service, on the basis of the long-term operational and financial history for the project. According to Bonneville, the agency is achieving a similar effect by being able to quickly provide access to funds and establish reserve funds through agreements whereby its funds directly pay for the operation, maintenance, and repair of the Bureau’s and the Corps’ hydropower plants. In competitive markets, the price being charged for the electricity and the reliability of that electricity will continue to be important factors that consumers will consider when making purchasing decisions. On average, the electricity sold by the PMAs has been priced less than electricity from other sources. However, failing to adequately maintain and repair the federal hydropower plants causes costs to increase and decreases the reliability of the electricity. The PMAs’ rates will have to be maintained at competitive levels, and the reliability of this power will have to be maintained or enhanced to ensure that federal electricity remains marketable. In addition, the Congress, OMB, and we have been working to help ensure that the purchase and maintenance of all assets and infrastructure have the highest and most efficient returns to the taxpayer and the government. 
Delayed and unpredictable federal funding for maintenance and repairs has contributed to the decreased availability (and reliability) of the federal hydropower generating units as well as to higher costs that can cause rates to increase if those costs are included in the rates. However, in competitive markets, increased rates decrease the marketability of federal electricity, as nonfederal electricity rates are expected to decline. Customers are expected to have opportunities to buy electricity from any number of reasonably priced sources. If the PMAs’ rates are higher than prevailing market rates, customers will be less inclined to buy power from the PMAs. According to the Department of Energy’s Energy Information Administration, retail rates nationwide by 2015 may be about 6 to 19 percent (after inflation) below the levels that they would have been if competition had not begun. In certain PMA systems—for example, the Central Valley Project, which, as of fiscal year 1997, had an appropriated and other debt of about $267 million—the PMAs’ electricity (in this case, supplied by Western) is already facing competition from nonfederal generation. If the price of the PMAs’ electricity exceeds the market price, then its marketability would be hampered. “. . . financial viability would also be jeopardized if the gap between rates and the cost of alternative energy sources continues to narrow. Such a scenario could cause some customers to meet their energy needs elsewhere, leaving a dwindling pool of ratepayers to pay off the substantial debt accumulated from previous years.” In Bonneville’s service area, during the mid-1990s, competition decreased nonfederal electric rates, resulting in some customers leaving or buying power from less expensive sources, rather than continuing to buy from Bonneville. 
Similarly, sales to industrial customers by the Tennessee Valley Authority (TVA), a federally owned corporation that supplies electricity in Tennessee and six other Southeastern states, declined from about 25 billion kWh in 1979 to 16 billion in 1993 after double-digit annual rate increases. Various actions have been used to fund the maintenance and repair of federal hydropower facilities. If these actions work as intended, they have the potential to deliver dollars for maintenance and repairs faster and with more certainty than before these actions were implemented. By enabling repairs to be made on time, they have the potential to help improve the reliability of the PMAs’ electricity and to continue its existing rate-competitiveness. Hence, these actions can help to secure the continued marketability of the PMAs’ electricity and promote the repayment of the appropriated and other debt. However, these various actions may reduce opportunities for congressional oversight of the operation, maintenance, and repair of federal plants and related facilities and reduce flexibility to make trade-offs among competing and changing needs. Aware of the problems involved in securing funding through federal appropriations, the Bureau, the Corps, the PMAs, and PMA customers have begun to take actions to secure the funding that is required to maintain and repair the federal hydropower plants and related facilities. An example is the Bureau’s, the Corps’, and Bonneville’s agreements in the Pacific Northwest, concluded from 1993 to 1997 and made pursuant to the Energy Policy Act and other statutes. According to Bureau officials, these funding arrangements were prompted by budget cuts during the 1980s. They added that the need to perform about $200 million in electricity-related maintenance in the near future would strain the agency’s ability to pay for maintenance and repairs in a steady, predictable fashion. 
These officials said that, as a result of these funding shortfalls, maintenance backlogs accumulated and the generating availability of the federal power plants in Bonneville’s service area declined from 92 to 82 percent. In response, in 1988, the Secretary of the Interior requested that the Congress authorize Bonneville to directly fund certain maintenance costs. Such authority was granted in provisions of the Energy Policy Act, which authorized the funding agreements between Bonneville, the Bureau, and the Corps. Under these agreements, Bonneville’s electricity revenues will directly pay for over $1 billion of routine operations and maintenance as well as capital repairs of the Bureau’s and the Corps’ electricity assets in Bonneville’s service territory. The agencies expect to be able to plan and pay for maintenance and repairs in a systematic, predictable manner over several years. The agencies expect that the resulting funding will allow them to respond with greater flexibility and speed to the need to repair hydropower plant equipment. According to Bonneville, the funding agreements will create opportunities for increased availability of hydropower, financial savings, and increased revenues. In addition, Bonneville believes that increased demand for its electricity and the increased financial resources provided by the funding agreements will improve its competitive viability and its ability to recover the full cost of the electricity system from which it markets power. The Bureau and Bonneville signed two agreements for Bonneville’s electricity revenues to pay up front for capital repairs and improvements as well as ordinary operations and maintenance of the Bureau’s electricity assets in Bonneville’s service area.
In January 1993, the Bureau and Bonneville executed an agreement that provided for funding by Bonneville of specific capital items, as provided by subsequent “subagreements.” To date, several subagreements have been signed under which Bonneville will pay, up front, up to about $200 million for major repairs of the Bureau’s hydropower plants in Bonneville’s service territory. For example, Bonneville will spend about $125 million from 1994 through 2007 for upgrades of the turbines of 18 generating units at the Bureau’s Grand Coulee power plant, in Washington State. In addition, in December 1996, the Bureau and Bonneville executed an agreement whereby Bonneville agreed to directly pay for the Bureau’s annual operations and maintenance costs as well as selected “extraordinary maintenance,” replacements, and additions. The parties anticipated that funding under terms of the agreement would total about $243 million—ranging from about $47 million to about $50 million per year from fiscal year 1997 to fiscal 2001. The Corps and Bonneville have also signed two agreements that allow Bonneville’s electricity funds to directly pay for the operation, maintenance, and repair of the Corps’ electricity assets. The first agreement, signed in 1994, was implemented by a series of subagreements, under which about $43 million in capital improvements and emergency repairs are being funded by Bonneville’s electricity revenues. For example, under one subagreement, about $29 million will be spent for reliability improvements at 21 of the Corps’ power plants throughout Bonneville’s service area. Bonneville is also paying for over $5 million in repairs at The Dalles, Oregon, power plant that were requested but not approved under the appropriations process. Other work at The Dalles is currently funded by appropriations. 
In December 1997, Bonneville and the Corps signed a second agreement under which Bonneville will directly pay for annual operations and maintenance expenses, for Bonneville’s share of joint project costs allocated to electricity revenues for repayment, and for some small replacements at the Corps’ projects from which Bonneville markets electricity. The implementation of this agreement will begin in fiscal year 1999 with an established budget of $553 million from fiscal 1999 through fiscal 2003—about $110 million per year. Because the implementation of the Pacific Northwest funding agreements is still relatively new, it is too early to determine if they will result in improvements to the availability factors of the Bureau’s and the Corps’ hydropower plants. At the same time, these efforts include a comprehensive attempt that, in our view, establishes systematic methods for identifying and budgeting for routine operations and maintenance, as well as for capital repairs, rehabilitations, and replacements of the federal hydropower plants in the region. For example, pursuant to the December 1996 funding agreement, the Bureau prepares an annual operations and maintenance budget by identifying major line items for each project during the next fiscal year. The Bureau also prepares 5-year budgets on the basis of estimated budgets for each of the years that are included. The funding totals for the 5-year period cannot be exceeded, although any expenditures in a year that are less than the targeted amount are carried over to future years and accounted for in a “savings account.” The Bureau and Bonneville formed a “Joint Operating Committee” to vote on and approve the annual and 5-year budgets as well as any modifications to the budgets. Similarly, the December 1997 operations and maintenance funding agreement between the Corps and Bonneville features annual and 5-year budgets that are voted upon and approved by the Joint Operating Committee.
Five-year budget totals cannot be exceeded without the Committee’s approval, but the reallocation of funds is possible. In addition, if “savings” occur in any year, they are shared between Bonneville and the Corps and/or carried over to future years. In addition, annual budgets are proposed and approved less than 1 year in advance instead of 2 to 3 years in advance—as under the traditional federal appropriations process. These budget practices reflect more immediate considerations and, in the views of agency officials, are more realistic than budgets that have to be compiled 2 to 3 years ahead of time. The potential advantages of the funding agreements in the Pacific Northwest include enhancing the agencies’ ability to accumulate funds in the “savings accounts” to pay for emergency repairs, as provided by the agreement. According to Bureau officials, the savings can be reallocated between projects on the basis of a telephone call between the Bureau and Bonneville. The ability of nonfederal utilities to quickly access reserve funds to meet emergency needs was mentioned by some nonfederal utilities when they discussed their planning and budgeting processes with us. In addition to the funds in savings, a variety of funding sources can be used to pay for maintenance and repairs, including emergency actions. For instance, according to Bureau officials, if unexpected repairs need to be performed, moneys to pay for them may be obtained via a subagreement between the Bureau and Bonneville. Work on the repairs could begin prior to Bonneville’s and the Bureau’s signing of the subagreement. According to Corps officials, some ongoing rehabilitations of the Corps’ Bonneville and The Dalles projects will continue to be funded with appropriations; however, maintenance or repairs to be supported under the funding agreements will no longer be included in the Corps’ budget requests for appropriations. 
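The carryover mechanics described above can be illustrated with a simple sketch. Only the $553 million 5-year ceiling comes from the December 1997 agreement; the annual targets and actual spending figures below are hypothetical and serve only to show how underspending accumulates in a “savings account.”

```python
# Illustrative sketch (not the agencies' actual accounting) of the
# "savings account" carryover described in the funding agreements:
# annual spending below the approved target is carried forward,
# while the 5-year total acts as a ceiling.

def carryover_balances(annual_targets, actual_spending):
    """Return the running savings balance after each year."""
    balance = 0.0
    balances = []
    for target, spent in zip(annual_targets, actual_spending):
        balance += target - spent  # underspending adds to savings
        balances.append(balance)
    return balances

# Hypothetical figures in millions of dollars (for illustration only);
# the targets sum to the $553 million 5-year ceiling.
targets = [110, 110, 110, 110, 113]
actuals = [100, 115, 105, 110, 113]

print(carryover_balances(targets, actuals))  # [10.0, 5.0, 10.0, 10.0, 10.0]
print(sum(targets))                          # 553
```

Note that overspending in one year (year 2 above) simply draws down the accumulated balance, which is consistent with the agencies’ described ability to reallocate savings between projects.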
To pay for the maintenance and repair of the Bureau’s and the Corps’ hydropower plants, Bonneville can use its cash reserves or its bonding authority. Because the agreements provide more secure and predictable funding, the Bureau and the Corps have begun to exercise greater flexibility in how they maintain and repair their hydropower plants. Consistent with evolving market competition and with the actions of nonfederal utilities, Bureau and Corps officials said their personnel will rely less on traditional, prescheduled maintenance and rely more on newer, more flexible maintenance philosophies, such as reliability-centered maintenance. For example, according to Bureau officials at the agency’s Pacific Northwest region, staff at the region’s electricity projects schedule maintenance and repairs, in part, by using a database that shows when maintenance and repairs were last performed and when a part may need maintenance or repairs in the future. Repairs or upgrades will be increasingly made “just-in-time” on the basis of test results. Bureau officials characterized their maintenance philosophy as evolving to be more responsive to Bonneville’s marketing requirements as well as to reduce costs. According to these officials, because they now have funds that can be used to pay for emergency repairs, they can take prudent risks in managing their maintenance requirements by deferring some repairs that perhaps can be made just in time or repairing other items that may have higher priority. For example, according to the managers of the Grand Coulee power plant, the new funding flexibility allowed the Bureau to reschedule the spending of up to about $3 million on repairs at the plant. Direct contributions from customers have been suggested and implemented as one way to improve how the Bureau, the Corps, and the PMAs pay for repairs.
Although the use of nonfederal funds to finance federal agencies’ operations is generally prohibited unless specifically authorized by law, several forms of alternative financing have been statutorily authorized by the Congress. Supporters of alternative financing, among them officials from the Bureau, the Corps, the PMAs, and the PMAs’ electricity customers, note that alternative financing allows repairs and improvements to be made more expeditiously and predictably than through the federal appropriations process. They believe that alternative financing could provide more certainty in funding repairs and help address problems such as deferred maintenance at federal plants. Through one type of authorized arrangement, referred to, among other names, as “advance of funds,” nonfederal entities, such as preference customers, pay up front for repairs and upgrades of the federal hydropower facilities. Under federal statutes, such funding must be ensured before work on a project can be started. Such funding arrangements have been proposed and/or implemented in a variety of PMA systems, most prominently Western’s Pick-Sloan Program in Montana, North Dakota, South Dakota, and several neighboring states; Loveland Area Projects in Colorado and nearby states; Hoover and Parker-Davis projects in Arizona and Nevada; and Central Valley Project in California. For example, under an agreement executed on November 12, 1997, by the Bureau, Western, and Western’s power customers within the Central Valley Project, the customers agreed to pay up front for electricity-related operations and maintenance and certain capital improvements. These activities are specified in a funding plan developed by a Governance Board that represents the Bureau, Western, and the electricity customers. In approving spending proposals, the Bureau and Western have veto power and two-thirds of the customers represented on the Board must approve a proposal for it to pass. 
The customers will be reimbursed for their contributions by credits on their monthly electricity bills. However, advance of funds agreements generally are limited in their ability to free the funding for the maintenance and repair of federal electricity assets from the uncertainties of the federal budget process. They supplement rather than completely replace federal appropriations and, therefore, may enhance the certainty of funding for repairs and maintenance but not necessarily provide more speed in obtaining that funding. For example, in Bonneville’s service territory, Bonneville, the Bureau, and the Corps can budget 1 year in advance; however, under the Central Valley Project agreement, the Governance Board approves electricity-related operations and maintenance budgets 3 years in advance to coincide with the federal budget and appropriation cycles for the Bureau and Western. The dovetailing is necessary because federal appropriations are counted upon to fund the balance of the maintenance and repairs of the federal electricity assets. Depending on how they are implemented, the direct funding of maintenance and repairs by electricity revenues and agreements for funding by customers pose the risk that opportunities for oversight by external decisionmakers, such as the Congress, will be diminished. Diminished oversight, in turn, limits the Congress’s flexibility to make trade-offs among competing needs. As the Congress and other decisionmakers examine the need for new arrangements to fund the maintenance and repair of federal hydropower plants, they may need to consider any reduced opportunities for oversight, along with the potential benefits of these funding arrangements.
At this time, the Bureau, the Corps, and the PMAs provide such information as the history and background of their power plants; the power plants’ generating capacity and electricity produced; annual electricity revenues, costs, and the repayment status; and related environmental and water quality issues, to the Congress, other decisionmakers, and to the public in general. The means of communicating this information include the PMAs’ annual reports; the PMAs’, the Bureau’s, and the Corps’ Internet Websites; and letters to the appropriate congressional committees. As requested by the Chairman, Subcommittee on Water and Power, House Committee on Resources, we examined (1) the reliability of the Bureau’s and the Corps’ hydropower plants in generating electricity compared with the reliability of nonfederal hydropower plants; (2) reasons why the Bureau’s and the Corps’ plants may be less reliable than nonfederal plants and the potential implications of reduced reliability; and (3) actions taken to obtain funding to better maintain and repair the Bureau’s and the Corps’ plants. To compare the generating reliability of the Bureau’s and the Corps’ hydropower plants with that of nonfederal ones, we obtained, analyzed, and contrasted power plant performance data, including availability and outage factors, from the Bureau, the Corps, and the North American Electric Reliability Council (NERC). NERC is a membership organization of investor-owned utilities; federal entities; rural electric cooperatives; state, municipal, and provincial utilities; independent power producers; and power marketers, whose mission is to promote the reliability of the electricity supply for North America. NERC compiles statistics on the performance of classes of generating units, such as fossil, nuclear, and hydro. The statistics are calculated from data that electric utilities report voluntarily to NERC’s Generating Availability Data System.
The data reported to NERC exclude many hydropower units, which, on average, are smaller in generating capacity than those that report to NERC. According to the Department of Energy’s Energy Information Administration, as of January 1998, hydropower in the United States was generated by a total of 3,493 generating units with a capacity of 91,871 megawatts (MW). As shown in table I.1, the federal and nonfederal hydropower generating units included in our report totaled 1,107 generating units and had a total generating capacity of 70,005 MW, or an average generating capacity of 63.2 MW per unit. Therefore, the nonreporting units totaled 2,386, and had a total generating capacity of 21,866 MW, or an average generating capacity of 9.2 MW per unit. To compare the performance of federal hydropower generating units with that of nonfederal units, we used data on hydropower generating units from NERC’s database that excluded federal hydropower generating units. We did not evaluate NERC’s validation of the industry’s data, nor the specific input data used to develop the database. We collected 1998 availability and outage data for the Bureau and the Corps, but we did not present it in our graphs because comparative data for the nonfederal units were not available from NERC at the time we completed our study. We also did not evaluate the specific input data used by the Corps and the Bureau to develop their databases on the performance of federal generating units. Table I.1 depicts some of the characteristics of the hydropower generating units included in our analysis of the performance of the Bureau’s, the Corps’, and the industry’s generating units. Data for nonfederal units are from 32 nonfederal utilities.
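The unit counts and average capacities above follow arithmetically from the totals; a minimal check, using only the figures cited in this appendix:

```python
# Verify the unit counts and average generating capacities cited above.
total_units, total_mw = 3_493, 91_871          # all U.S. hydropower units (EIA, Jan. 1998)
reporting_units, reporting_mw = 1_107, 70_005  # units included in our report

# Units not reporting to NERC are the remainder.
nonreporting_units = total_units - reporting_units
nonreporting_mw = total_mw - reporting_mw

print(nonreporting_units, nonreporting_mw)             # 2386 21866
print(round(reporting_mw / reporting_units, 1))        # 63.2 MW average per reporting unit
print(round(nonreporting_mw / nonreporting_units, 1))  # 9.2 MW average per nonreporting unit
```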
[Table I.1 data omitted; its rows include the average age of generating units (years), the nameplate capacity of generating units (MW), and the average nameplate capacity of generating units (MW).] We discussed the limitations of these performance indicators with officials from the Bureau, the Corps, the Tennessee Valley Authority, investor-owned utilities, publicly owned utilities, and other experts in the electric utility industry. To explore why federal hydropower plants sometimes performed at lower levels, we obtained and analyzed various reports on the subject, and discussed the topic with representatives of Bonneville, the Bureau, the Corps, various power marketing administration (PMA) power customers or their associations, investor-owned utilities, and nonfederal, publicly owned utilities. In our analysis, we included information obtained from the Tennessee Valley Authority, a federally owned utility with high performance indicators and significant hydropower resources. In addressing the implications of any reduced performance by federal plants, we interviewed industry experts, representatives of investor-owned and publicly owned utilities, and officials of PMA power customers. We also examined studies about the changes in electricity markets and contacted national and regional trade associations. Moreover, we addressed alternative ways of ensuring the enhanced funding of maintenance and repairs of the federal hydropower plants and related facilities. In this regard, to the extent possible, we relied upon previous work that we had performed on federal power, especially work performed during two prior reviews: Federal Power: Options for Selected Power Marketing Administrations’ Role in a Changing Electricity Industry (GAO/RCED-98-43, Mar. 6, 1998) and Federal Power: Outages Reduce the Reliability of Hydroelectric Power Plants in the Southeast (GAO/T-RCED-96-180, July 25, 1996).
Moreover, we examined the Corps’, the Bureau’s, and the PMAs’ efforts to make power revenues directly finance the maintenance and repair of federal hydropower assets. In this regard, we contacted the Bureau, the Corps, Bonneville, Western, and the PMAs’ power customers and examined various agreements and arrangements to pay for the maintenance and repair of the federal hydropower plants and related facilities. Our work was performed at various locations, including the offices of federal and nonfederal parties. Regarding the Corps, these locations included the agency’s headquarters, Washington, D.C.; the Northwestern Division, Portland, Oregon; the Portland, Oregon, District; and the Nashville, Tennessee, District. Because the Corps’ power operations have been affected by the need to accommodate the migrations of salmon, we also contacted the Walla Walla and Seattle, Washington, Districts, and the Corps’ Bonneville (Oregon) power plant. We visited the Bureau’s offices at the Department of the Interior in Washington, D.C.; Denver, Colorado; the Central Valley Operations Office, Sacramento, California; the Pacific Northwest Region, Boise, Idaho; and the Grand Coulee, Washington, power plant. To gain a perspective on how another federal electricity-generating entity operated its hydropower program, we interviewed TVA officials in Chattanooga, Tennessee. Moreover, we contacted the PMAs at locations including their Power Marketing Liaison Office, U.S. Department of Energy, Washington, D.C.; Bonneville in Portland, Oregon; Southeastern in Elberton, Georgia; Southwestern in Tulsa, Oklahoma; and Western in Golden and Loveland, Colorado, and Folsom, California.
Our scope included contacting several PMA customers or associations that represent PMA customers, including the City of Roseville, California; Colorado River Energy Distributors Association, Tucson, Arizona; the Midwest Electric Consumers Association, Denver, Colorado; the Northern California Power Agency, Roseville, California; and the Sacramento (California) Municipal Utility District. In addition, we contacted several investor-owned utilities, utility holding companies, and nonfederal publicly owned utilities (other than those previously listed) that operate significant amounts (collectively, over 17,000 MW) of hydropower-generating capacity; they included the Chelan County (Washington) Public Utility District; Idaho Power Company; Grant County (Washington) Public Utility District; Douglas County (Washington) Public Utility District; New York Power Authority in Niagara, New York; Pacific Gas and Electric Company, Sacramento, California; South Carolina Electric and Gas; and the Southern Company in Atlanta, Georgia. Our work was performed from July 1998 through February 1999 in accordance with generally accepted government auditing standards. On March 6, 1999, the Department of Energy provided technical suggestions for the draft report but deferred to the comments of the Bureau and the Corps on more substantive matters. For example, Energy suggested that we clarify the differences between “reliability” and “availability.” The report already discusses that, within the electric utility industry, plants are viewed as reliable if they can function without failure over a specific period of time or amount of usage. The report also demonstrates that there are several ways of measuring reliability, including the availability factor and outage factors. Accordingly, we made no substantive changes to the report. The following are GAO’s comments on the Department of the Interior’s (including the Bureau of Reclamation’s) letter dated March 12, 1999.
Interior provided us with comments that were intended to clarify its position regarding reliability measures, operation and maintenance, and funding mechanisms. 1. In its cover letter and general comments, Interior stated that the report does a good job in recognizing the funding needs for operating and maintaining electrical-generating facilities. However, Interior stated the report does not articulate in the executive summary, as it does in the body, the initiatives undertaken by the Bureau and the Corps to identify alternative funding sources. We believe that the executive summary adequately addresses the issue of the initiatives undertaken by the Bureau, the Corps, and the PMAs, particularly as they relate to efforts in the Pacific Northwest. Therefore, we did not revise our report. 2. In its cover letter and in general comments, Interior stated that the report does not articulate the fact that the Bureau’s facilities are operated to fulfill multiple purposes, such as providing water for irrigation, municipal and industrial uses, fish and wildlife enhancement, and electricity generation. According to the Bureau, if water is frequently not available for generating electricity, the availability factor is not a good indicator for comparing the reliability of the Bureau’s hydropower-generating units with other units that are not operated under multipurpose requirements. Interior also suggested that the nonfederal projects are freer to maximize power and revenues because they are less affected by multiple purposes. We disagree with the Bureau’s position that the report does not recognize that water is used for multiple purposes and affects how electricity is generated. For example, the executive summary recognizes that the Bureau and the Corps generate electricity subject to the use of water for flood control, navigation, irrigation, and other purposes. 
In addition, the report recognizes, in chapter 2, that the Bureau and other utilities utilize periods of low water and low demand to perform scheduled maintenance and repairs. This would tend to decrease the availability factors of these entities. The report also states that this practice may be regarded as good business practice. We further disagree that the availability factor is not a good basis for comparing the reliability of different projects. The availability factor is a widely accepted measure of reliability that has validity, as long as it is understood in terms of other factors that affect how plants are operated. Moreover, we disagree that other utilities necessarily operate hydropower plants that are affected less by multiple purposes. In fact, as we have noted previously, for other utilities, the multiple uses of the water are regulated through conditions in the utilities’ hydropower-plant-operating licenses, which are issued by the Federal Energy Regulatory Commission. The Bureau contends that the availability of its plants is affected by the fact that more of the Bureau’s plants are located on pipelines, canals, and water diversion facilities than most nonfederal plants. We recognized this point in chapter 2. 3. In the cover letter and in its general comments, Interior stated that the forced outages factor is a better indicator of reliability than the availability factor for multiple purpose facilities. In addition, in its cover letter, Interior indicated that the Bureau’s benchmarking studies indicate that its plants compare favorably with other plants in the area of reliability. Regarding forced outages factors, our report recognizes that there are several indicators of reliability and the forced outages factor is one of the most meaningful. More generally, we disagree with Interior’s conclusion that the Bureau’s plants are as reliable as those of other power providers.
As shown in chapter 2 of this report, although the Bureau’s forced outages factors are on par with those of nonfederal utilities, the Bureau’s availability factor is lower, although it has been improving. Moreover, the Bureau’s scheduled outages factors are higher than those of nonfederal utilities. In its general comments, Interior adds that reliability is a measure of whether a plant can operate when it is needed, while availability is a measure of a unit’s ability to operate within a given time period. These factors, stated the Bureau, can be equated only when a plant is required to operate for the full time of the period. The Bureau added that optimum availability is unique to each plant, depending on such factors as design, time, water supply, location, and cost. As stated in our report, reliability is a measure of a plant’s ability to operate over a specific period or amount of usage. We further agree that the significance of an availability factor should be understood within the context of various factors, some of which are mentioned by the Bureau. We revised chapter 1 to recognize that assessing the performance of a hydropower plant or unit by examining its availability factor calls for understanding additional variables. We added language to reflect that the availability factor needs to be understood in terms of such factors as the kind of demand the plant meets (e.g., whether it meets peak demand), the availability of water throughout the year, and the purposes satisfied by the dam and reservoir. 4. According to Interior, the report implies that the Bureau delays repairs and maintenance, pending the availability of funds. The Bureau stated that it performs repairs and maintenance when needed by reprioritizing funds. We revised the report in chapter 3 to recognize the Bureau’s statement that it reprioritizes funding.
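The distinction drawn here between the availability factor and the outage factors can be made concrete with a simplified calculation. The sketch below uses one common convention (outage hours expressed as a share of total period hours); NERC’s actual Generating Availability Data System definitions include additional refinements, and the unit figures are hypothetical.

```python
# Simplified sketch of the reliability measures discussed in this report:
# the availability factor and the forced and scheduled outage factors,
# each expressed as a percentage of total period hours.

def performance_factors(period_hours, forced_outage_hours, scheduled_outage_hours):
    """Return availability and outage factors as percentages of period hours."""
    available_hours = period_hours - forced_outage_hours - scheduled_outage_hours
    return {
        "availability_factor": 100 * available_hours / period_hours,
        "forced_outage_factor": 100 * forced_outage_hours / period_hours,
        "scheduled_outage_factor": 100 * scheduled_outage_hours / period_hours,
    }

# Hypothetical unit: an 8,760-hour year with 88 hours of forced outages
# and 613 hours of scheduled outages.
f = performance_factors(8_760, 88, 613)
print({k: round(v, 1) for k, v in f.items()})
# {'availability_factor': 92.0, 'forced_outage_factor': 1.0, 'scheduled_outage_factor': 7.0}
```

The example illustrates the pattern the report describes for the Bureau: a unit can show a forced outage factor comparable to the industry’s while a high scheduled outage factor still depresses its overall availability factor.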
However, the example of the delayed repairs because of delayed funding at the Bureau’s Shasta, California, project illustrates our point that repairs and maintenance are delayed when funds are not forthcoming. 5. Interior stated that the Bureau has undertaken a program to improve its performance by benchmarking its electricity operations against the rest of the industry and is continually striving to improve, given the legal and financial constraints encountered. Our report does not imply that these agencies are operating in an unbusinesslike manner but shows that the Bureau’s availability has improved in the face of financial and budgeting constraints. We revised chapter 1 to recognize the Bureau’s benchmarking effort. 1. Interior commented that the title of the report implies that the Bureau has reduced its operation and maintenance program. Interior stated the Bureau has always implemented preventive and reliability-centered maintenance and that adequate funding for these activities has been available. Chapter 4 of the report recognizes that the Bureau has practiced preventive maintenance in the past and, particularly in the Pacific Northwest, will increasingly practice reliability-centered maintenance. However, the efforts of the Bureau’s field locations to engage in direct or advance funding arrangements serve as evidence that faster and more predictable funding is needed. 2. We added “transmission system” to the report, as requested by Interior. 3. We revised the report to indicate that the Bureau’s forced outages rate from 1993 through 1997 was the same as the nonfederal sector’s. 4. According to Interior, our comparing plants of different size and type may distort our conclusions about the performance of the federal and nonfederal plants. We disagree.
As shown in appendix I, the federal and nonfederal electrical-generating units in our analysis were about the same size because our analysis of nonfederal units excluded about 2,400 smaller ones that averaged about 9 MW of generating capacity. In addition, our decision to include both conventional generators and pump generators in our analysis was based on the fact that the Corps’ performance data did not separate its conventional and pump units. The Bureau itself, in its 1996 benchmarking study, included seven pump units (about 323 MW) at its Grand Coulee and Flatiron plants as conventional generating units. Moreover, although the Bureau has generating units from 1 MW to 700 MW, it used only two MW-size categories (1 to 29 MW, and 30 MW and larger) in comparing the availability and outages factors of its plants to the industry in its 1996 benchmarking study. In addition, our analysis of the availability factors of the Bureau’s hydropower-generating units from 1993 through 1997 showed that among pump generators as well as the size categories zero to 10 MW, 11 to 50 MW, 51 to 100 MW, and 101 to 200 MW, the Bureau’s hydropower units had lower availability factors than the industry as a whole. 5. According to Interior, although the customers funded up to $22 million in repairs for Shasta, the rewind contract was awarded for $8.8 million, with the total cost of replacing the turbines estimated at $12.2 million. This point is expanded upon under comment 12. 6. Interior disagrees with the statement in the draft report that advance or direct funding arrangements decrease opportunities for congressional oversight.
We revised the report to state that, although these arrangements could diminish opportunities for oversight, the Bureau, the Corps, and the PMAs provide such information as the history and background of their power plants; the plants’ generating capacity and electricity produced; annual electricity revenues and costs; and related environmental and water availability issues to the Congress, other decisionmakers, and to the public. The means of communicating this information include the PMAs’ annual reports; the PMAs’, the Bureau’s, and the Corps’ Internet Websites; and letters to the Congress. 7. According to Interior, our statement that “the longer the scheduled outage, the less efficient the maintenance program,” is out of place as it pertains to federal plants. The statement would apply primarily to run-of-the-river plants, according to Interior. The Department noted that federal plants are not allowed to earn more revenues and outages do not have an impact on revenues if water is not available for generating electricity. We believe our report sufficiently addresses these points. We have already noted that performing scheduled outages during times of low water or low demand may constitute good business practice. In addition, we have noted the need to understand such factors as the kind of demand a plant meets (for instance, whether it meets peak demand) and the availability of water for generating power. Our report also states that, as markets evolve to become more competitive, operating plants at higher availability factors may allow the PMAs and utilities to take advantage of new opportunities to earn revenue by selling ancillary services. In addition, we continue to believe that, all things being equal, having plants on-line for longer periods of time is also good business practice, as stated in the Bureau’s 1996 benchmarking report. 8. As suggested by Interior, we revised the report to read “Western Systems Coordinating Council.” 9.
As suggested by Interior, we revised the report to reflect that three of five units at Shasta were repaired. The other two were not.

10. In response to Interior’s comment, we revised the report to reflect that while the Bureau defers noncritical items, it does not defer items it deems to be critical. Interior also notes that unfunded maintenance requirements do not necessarily indicate a deferred maintenance situation. In our view, any maintenance requirements that are put off until the future are deferred. However, we revised the report to state that deferred maintenance is not problematic as long as it can be managed.

11. As requested by Interior, we added “due to limited and declining federal budgetary sources.”

12. Interior clarified that the costs of rewinding the Shasta units decreased from $10.5 million (low bid) in 1994 to $8.8 million in 1996. The rewind contract was executed in 1996 to increase the rating to 142 MW per unit versus the higher-priced rewind in 1994 to 125 MW per unit. Most importantly, the $21.5 million commitment includes the replacement of turbines in three units that were not included in earlier cost estimates. Because of the new information provided regarding the nature of the additional work at Shasta, we revised our report in chapter 3 to state that the Bureau expanded the scope of work to be performed at the plant.

13. As suggested by Interior, we revised the text to state that the funding arrangements in the Pacific Northwest were necessitated by budget cuts during the 1980s. Also, the need to fund about $200 million in maintenance in the near term would limit the Bureau’s ability to pay for maintenance and repairs in a steady, predictable fashion.

14. As suggested by Interior, we deleted the word “electricity” from the reference to the Bureau’s operation and maintenance budget.

15. As suggested by Interior, we revised the text to eliminate references to “separate” Joint Operating Committees.

16.
As suggested by Interior, we changed “defer spending” to “reschedule.”

On March 16, 1999, the Department of Defense (including the Army Corps of Engineers) provided us with a letter acknowledging that the Corps’ verbal comments, discussed with us at a March 10, 1999, meeting, had been resolved. The primary verbal comment was that we did not reflect changes in the performance of the Corps’ hydropower plants that occurred in fiscal year 1998. The Corps suggested that we include these data in various graphs in our report. As discussed with Corps officials, we addressed the changes in the Corps’ performance in the text of our report, primarily in chapter 2. However, we declined to show changes in the graphs because the 1998 data were not available for the nonfederal hydropower-generating units at the time we completed our review.

The following are GAO’s comments on the Department of Energy’s (including the Bonneville Power Administration’s) letter dated March 11, 1999. On March 11, 1999, Bonneville provided us with general and specific comments regarding our draft report. Bonneville noted that in its view, we “sought to conduct a fair assessment of the U.S. Army Corps of Engineers (Corps) and the Bureau of Reclamation (Reclamation) facilities during the time of the study.”

1. Bonneville understood that we were not requested to evaluate the direct-funding agreements in the Pacific Northwest. However, Bonneville suggested that we add language to the report to reflect that the funding agreements between itself, the Bureau, and the Corps contain a systematic approach to maintenance planning and investment that creates opportunities for increased hydropower availability, financial savings, and increased revenues. We believe that our report addresses these points. However, we added language that Bonneville believes these enhancements will be attained as a result of the funding agreements.

2.
As noted by Bonneville, our report stated that the availability factors of the Bureau’s and the Corps’ hydropower plants in the Pacific Northwest are lower than in the rest of the nation. Bonneville suggested that we clarify the report, in the executive summary, by stating that Bonneville, the Bureau, and the Corps recognized the lower reliability of the plants in the Pacific Northwest and took action through a series of direct-funding agreements to address the problem. Bonneville further suggested a clarification that during the period 1993 through 1997, the federal agencies undertook extensive upgrades and rehabilitations of the Bureau’s plants partly as a result of the increased funding flexibility provided by the direct-funding agreements. We agreed that these statements would clarify the report and incorporated them.

3. Bonneville noted that the draft report stated that funding maintenance and repair actions through direct customer contributions or through direct payments from the PMAs’ revenues reduced opportunities for congressional oversight. According to Bonneville, the funding arrangement in the Pacific Northwest was specifically supported by the Senate Appropriations Committee in 1997. Bonneville also stated that its annual congressional budget submission includes programmatic information on the operations and maintenance funding that Bonneville plans to provide for the Bureau and the Corps. In response to this and other comments, we revised the executive summary and chapter 4 to show that information is now being made available to the Congress and others about the operation of the federal power program. For instance, the Bureau, the Corps, and the PMAs provide such information as the history and background of their power plants; the plants’ generating capacity and electricity produced; annual electricity revenues and costs; and related environmental and water quality issues to the Congress, other decisionmakers, and to the public.
The means of communicating this information include the PMAs’ annual reports; the PMAs’, the Bureau’s, and the Corps’ Internet Websites; and letters to the appropriate congressional committees.

4. We revised the executive summary as recommended by Bonneville by adding “under the traditional appropriations process.”

5. Bonneville believed that the location of figure 1 in the executive summary was confusing, since it discussed national availability factors but was positioned over the discussion of availability in the Pacific Northwest. We agree and have relocated the figure.

6. The draft’s executive summary stated that some of Bonneville’s power customers are leaving the agency for less-expensive sources. Bonneville stated that some customers left the power administration in an earlier period but the situation today is significantly different, with demand for electricity and other products exceeding the supply. Bonneville stated that increasing demand for its electricity as well as the increased financial resources provided by its funding agreements with the Bureau and the Corps will improve its competitive viability and ability to recover the full cost of the Federal Columbia River Power System. We agreed and revised the report in the executive summary and chapter 4.

7. Bonneville suggested that the final report recognize that, for large hydropower systems, the ability to earn electricity revenues depends on the availability of water and of operable hydropower-generating units. These conditions and other factors must be considered to optimize the maintenance program for the plants from which Bonneville markets electricity. We agreed and revised chapter 3 accordingly.

8.
As suggested by Bonneville, we added language to chapter 3 to the effect that like the Douglas County Public Utility District, Bonneville will be able to quickly provide access to funds and establish reserved funds through agreements whereby its funds directly pay for the operation, maintenance, and repair of the Bureau’s and the Corps’ hydropower plants.

Major contributors to this report: Peg Reese, Philip Amon, Ernie Hazera, and Martha Vawter.
Pursuant to a congressional request, GAO reviewed the: (1) reliability of the Bureau of Reclamation's and Army Corps of Engineers' hydropower plants in generating electricity compared with the reliability of nonfederal hydropower plants; (2) reasons why the Bureau's and the Corps' plants may be less reliable than nonfederal plants and the potential implications of reduced reliability; and (3) actions taken to obtain funding to better maintain and repair the Bureau's and the Corps' plants. GAO noted that: (1) the Bureau's and the Corps' hydropower plants are generally less reliable in generating electricity than nonfederal hydropower plants; (2) the reliability of the Bureau's hydropower plants has improved recently, while the Corps' has remained relatively unchanged; (3) from 1993 through 1997, the Bureau's units were available to generate electricity an average of about 83 percent of the time compared with about 91 percent for nonfederal units; (4) the availability of the Bureau's units to generate electricity improved from about 81 percent of the time in 1993 to about 87 percent in 1997; (5) the Corps' units were available to generate electricity an average of about 89 percent of the time during the period 1993 through 1997; (6) the Bureau's and the Corps' units in the Pacific Northwest were available about 79 percent and 85 percent of the time, respectively; (7) the Bureau's and the Corps' plants were less reliable because they could not always obtain funding for maintenance and repairs when needed; (8) GAO found that because of uncertain funding, the agencies delay repairs and maintenance until funds become available; (9) GAO also found that these delays caused frequent, extended outages and inconsistent plant performance; (10) the power marketing administrations' (PMA) electricity is generally priced less than other electricity; (11) however, as markets become more competitive, PMAs' customers will have more suppliers from whom they can buy electricity; (12) 
as nonfederal electricity rates decline in competitive markets, a portion of the federal government's appropriated and other debt of about $22 billion may be at risk of nonrecovery if the federal electricity does not continue to be marketable; (13) a factor affecting the marketability of this electricity is its reliability; (14) Congress, the Office of Management and Budget, and GAO have been working to help ensure that the purchase and maintenance of all assets and infrastructure have the highest and most efficient returns to the taxpayer and the government; (15) the Bureau, the Corps, and the PMAs have taken actions to obtain funding to maintain and repair their hydropower plants; (16) these actions involve directly funding maintenance and repairs from the PMAs' electricity revenues or from funds contributed by the power customers; and (17) by enabling repairs to be made in a timely manner, these actions have the potential to help to improve the reliability of the PMAs' electricity and to continue their existing rate-competitiveness.
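The availability comparisons summarized above rest on a simple metric: the fraction of hours in a period that a generating unit was available to produce electricity, averaged within capacity size bins. A minimal sketch of that computation follows; the unit data (hour counts and MW ratings) are hypothetical, and the size bins mirror those cited in the analysis:

```python
# Availability factor: hours a unit was available to generate,
# divided by total hours in the period.
def availability_factor(available_hours, period_hours):
    return available_hours / period_hours

# Hypothetical units: (capacity in MW, hours available in one year).
units = [(8, 7100), (45, 7900), (180, 6800), (650, 8200)]
BINS = [(0, 10), (11, 50), (51, 100), (101, 200), (201, float("inf"))]
PERIOD_HOURS = 8760  # hours in a non-leap year

# Group each unit's availability factor into its MW size bin.
by_bin = {}
for mw, avail in units:
    for lo, hi in BINS:
        if lo <= mw <= hi:
            by_bin.setdefault((lo, hi), []).append(
                availability_factor(avail, PERIOD_HOURS))
            break

# Report the average availability factor per size bin, as a percentage.
for (lo, hi), factors in sorted(by_bin.items()):
    label = f"{lo}-{hi} MW" if hi != float("inf") else f"{lo}+ MW"
    print(f"{label}: {100 * sum(factors) / len(factors):.1f}%")
```

Comparing such bin averages for federal units against the corresponding nonfederal averages is the essence of the reliability comparison described in the findings above.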
We at GAO use the term human capital because (unlike traditional terms such as personnel and human resource management) it focuses on two principles that are critical in a modern, results-oriented management environment:

First, people are assets whose value can be enhanced through investment. As the value of people increases, so does the performance capacity of the organization and therefore its value to clients and other stakeholders. As with any investment, the goal is to maximize value while managing risk.

Second, an organization’s human capital approaches must be aligned to support the mission, vision for the future, core values, goals and objectives, and strategies by which the organization has defined its direction and its expectations for itself and its people. An organization’s human capital policies and practices should be designed and implemented to achieve these goals, and assessed accordingly.

Performance suffers when human capital strategies are not appropriate to meet the needs of the nation’s government and its citizens. As agencies wrestle with human capital management, they face a significant challenge in the information management and technology areas. The rapid pace of technological change in these areas is reflected in the investments in information technologies made both in the United States as a whole and by the federal government. By 2004, information technology investments are expected to account for more than 40 percent of all capital investment in the United States. The federal government’s IT investment is conservatively estimated in fiscal year 2002 to be $44 billion—an increase in federal IT spending of 8.6 percent from fiscal year 2000. This substantial investment should provide opportunities for increasing productivity and decreasing costs. For example, the public sector is increasingly turning to the Internet to conduct paperless acquisitions, provide interactive electronic services to the public, and tailor or personalize information.
As we testified in July, there are over 1,300 electronic government initiatives throughout the federal government, covering a wide range of activities involving interaction with citizens, business, other governments, and government employees. In addition, the Government Paperwork Elimination Act (GPEA) of 1998 requires that by October 21, 2003, federal agencies provide the public (when practicable) the option of submitting, maintaining, and disclosing required information electronically. We have found that agencies plan to provide an electronic option for 3,048 eligible activities by the GPEA deadline.

almost double between 1998 and 2008, and the demand for computer programmers will increase by 30 percent during the same time period.

In April 2001, the Information Technology Association of America (ITAA) released a study on the size of the private-sector IT workforce, the demand for qualified workers, and the gap between the supply and demand. Among the study’s findings were the following:

Information technology employment directly accounts for approximately 7 percent of the nation’s total workforce.

Over 10.4 million people in the United States are IT workers, an increase of 4 percent over the 10 million reported for last year.

Overall estimated demand for IT workers is down from last year’s forecast (by 44 percent), partially because of the slowdown in the high-tech sector and the economy in general. However, the demand for IT workers remains high, as employers attempt to fill over 900,000 new IT jobs in 2001.

Hiring managers reported an anticipated shortfall of 425,000 IT workers in 2001 because of a lack of applicants with the requisite technical and nontechnical skills.

The ITAA also reported that despite softening in overall demand, skills in technical support, database development/administration, programming/software engineering, web development/administration, and network design/administration remain most in demand by IT and non-IT companies alike.
These positions represent nearly 86 percent of the demand for IT workers expected in 2001. The study further notes that the demand for enterprise systems professionals and network designers and administrators is expected to increase by 62 and 13 percent, respectively, over the 2000 forecast. For the IT workforce in particular, agencies are beginning to take action by initiating strategies and plans to attract, retain, and/or train skilled workers. Nevertheless, much remains to be done, as agencies generally lack comprehensive strategies for IT human capital management. To date, we have issued several products on IT human capital management, including studies of practices at four agencies: the Small Business Administration, the United States Coast Guard, the Social Security Administration, and the Centers for Medicare and Medicaid Services. These evaluations focused on agency practices needed to maintain and enhance the capabilities of IT staff. These practices fall in four key areas:

Requirements—assessing the knowledge and skills needed to effectively perform IT operations to support agency mission and goals.

Inventory—determining the knowledge and skills of current IT staff so that gaps in needed capabilities can be identified.

Workforce strategies and plans—developing strategies and implementing plans for hiring, training, and professional development to fill the gap between requirements and current staffing.

Progress evaluation—evaluating progress made in improving IT human capital capability, and using the results of these evaluations to continuously improve the organization’s human capital strategies.

In July, we reported that agencies’ progress in addressing IT human capital strategies had been sluggish. Specifically, we stated that although agencies were initiating strategies and plans to attract, retain, and/or train a skilled IT workforce, key issues in each of the four areas mentioned above were not being effectively addressed.
In the area of requirements, several of the agencies we reviewed had begun evaluating their short and longer term IT needs. However, none of them had completed these efforts. For instance, we found that the Small Business Administration did not have any policies or procedures to identify requirements for IT skills. Also, although the U.S. Coast Guard had conducted an assessment of the knowledge and skills needed by its IT officers and enlisted personnel, it had not done so for its civilian workforce. Although an IT inventory identifying the knowledge and skills of current staff is essential to uncovering gaps between current staff and requirements, our work to date has revealed that none of the reviewed agencies had a complete knowledge and skills inventory. For example, the U.S. Coast Guard and the Small Business Administration maintained a limited amount of information on IT knowledge and skills, and the Social Security Administration lacked an IT-specific knowledge and skills inventory. In addition to establishing requirements and creating an inventory, an agency needs to develop a workforce plan that is linked to its strategic and program planning efforts. The workforce plan should identify its current and future human capital needs, including the size of the workforce, its deployment across the organization, and the knowledge, skills, and abilities needed for the agency to pursue its shared vision. The workforce planning strategy should specifically outline the steps and processes that an agency should follow when hiring, training, and professionally developing staff to fill the gap between requirements and current staffing. Among the four agencies we reviewed, none had developed comprehensive IT-specific workforce strategies or plans. 
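The gap analysis these practices describe, requirements compared against a skills inventory, amounts to simple bookkeeping. The sketch below illustrates it with invented skill names and headcounts; no agency's actual data are represented:

```python
# Hypothetical IT skill requirements versus the current-staff inventory.
# A gap exists wherever the required headcount exceeds staff on hand.
requirements = {"network administration": 12,
                "database administration": 8,
                "web development": 6,
                "project management": 5}
inventory = {"network administration": 9,
             "database administration": 8,
             "web development": 2}

# Shortfall per skill: required minus on-hand (missing skills count as 0).
gaps = {skill: need - inventory.get(skill, 0)
        for skill, need in requirements.items()
        if need > inventory.get(skill, 0)}

for skill, shortfall in sorted(gaps.items()):
    print(f"{skill}: short {shortfall}")
```

Without a complete inventory, the `gaps` computation is impossible, which is why the report treats the knowledge and skills inventory as a prerequisite for credible workforce strategies and plans.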
For example, although the Social Security Administration did have a broad workforce transition plan that includes actions to improve its processes (for projecting workforce needs, for recruiting, and for training and developing employees), these actions were not specific to IT staff. Finally, meaningful progress evaluation systems are necessary to determine whether agency human capital efforts are effective and to ensure that the results of these evaluations are used to make improvements. While agencies we reviewed did track various human capital efforts, such as progress in filling IT positions, none of the agencies had fully analyzed or reported on the effectiveness of their workforce strategies and plans. Shortcomings in IT human capital management have serious ramifications. Without complete assessments of requirements, agencies will lack assurance that they have identified the number of staff and the specific knowledge and skills needed or that they have developed strategies to fill these needs. Also, without an inventory of knowledge and skills, agencies will not have assurance that they are optimizing the use of their current IT workforce, nor will they have data on the extent of skill gaps. This information is necessary for developing effective workforce strategies and plans. If they cannot analyze and document the effectiveness of workforce strategies and plans, senior decisionmakers will lack assurance that they are effectively addressing knowledge and skill gaps. Judging from trends, the shortage of qualified IT professionals is likely to lead to greater reliance on contracted workers, so that agencies can supplement their existing workforces with external expertise. Indeed, from fiscal year 1990 to 2000, federal spending on contracted IT services increased from $3.7 billion to about $13.4 billion. Relying on contracting to fill workforce gaps is not a panacea. 
We have previously reported that some procurements of services are not being done efficiently, putting taxpayer dollars at risk. In particular, agencies were not clearly defining their requirements, fully considering alternative solutions, performing vigorous price analyses, or adequately overseeing contractor performance. Also, agencies appear to be at risk of not having enough of the right people with the right skills to manage service procurements. Following a decade of downsizing and curtailed investments in human capital, federal agencies currently face skills, knowledge, and experience imbalances that, without corrective action, will worsen, especially in light of the numbers of federal workers becoming eligible to retire in the coming years. Consequently, a key question we face in the federal government is whether we have today, or will have tomorrow, the ability to acquire and manage the procurement of the increasingly sophisticated services that the government needs.

The Department of Defense (DOD) found that its workforce reductions had led to a serious shortfall in the acquisition workforce. To improve the efficiency of contracting operations—and in part to help offset the effects of this shortfall—the department instituted streamlined acquisition procedures. However, the DOD Inspector General reported that the efficiency gains from the streamlined procedures had not kept pace with acquisition workforce reductions. The Inspector General reported that while the workforce had been reduced by half, DOD’s contracting workload had increased by about 12 percent and that senior personnel at 14 acquisition organizations believed that workforce reductions had led to such problems as less contractor oversight. Unless these reductions in the acquisition workforce are addressed, they could undermine the government’s ability to efficiently acquire contract services, including IT services.
The challenges facing the government in maintaining a high-quality IT workforce are long-standing and widely recognized. As far back as 1994, our study of leading organizations revealed that strengthening the skills of IT professionals is a critical aspect of strategic information management. Specifically, leading organizations identify existing IT skills and needed future skills, as well as determining the right skill mix. Accordingly, we suggested that executives should systematically identify IT skill gaps and targets and integrate skill requirements into performance evaluations. In our more recent study of public and private sector efforts to build effective Chief Information Officer (CIO) organizations, we found that leading organizations develop IT human capital strategies to assess their skill bases and recruit and retain staff who can effectively implement information technology to meet business needs. To help address these human capital challenges and suggest possible solutions, the CIO Council asked the National Academy of Public Administration (NAPA) to study IT compensation strategies and to make recommendations on how the government can best compete for IT talent. NAPA’s resulting study noted a number of problems inherent in the federal government’s human resource management system. These problems included a pay gap with the private sector and a compensation system that is overly focused on internal equity. The study commented that current pay disparities with the private sector, overly narrow pay ranges, and the inadequacy of special pay rates hinder the government’s ability to compete for IT workers. Regarding compensation, the study noted that the current system is closely aligned with internal equity by law, regulation, and practice, with little real attention paid to external equity and contribution equity.
According to the study, private sector organizations typically consider and establish a strategic balance among internal, external, and contribution equity in determining pay rates and reward structures. NAPA’s study also includes an evaluation of two alternative compensation models. The first of these makes limited changes to the current General Schedule (GS) system, such as eliminating steps and combining some grades. The second is a market-based model that introduces more comprehensive reforms, including increased emphasis on performance and competencies. NAPA’s study concluded that the second model embodies the best approach to human capital reform. NAPA’s recommendations are shown in table 1.

Table 1: NAPA Recommendations (from Human Capital: Attracting and Retaining a High-Quality Information Technology Workforce)

Establish a market-based, pay-for-performance compensation system: This compensation approach would establish broad pay ranges, tie base pay to market rates, and link increases in pay to competencies and results to attract and retain IT talent.

Allow for flexibility in the treatment of individuals and occupations: The new compensation system would ensure that managers have the flexibility to pay individual workers for their respective skills and competencies as well as their contributions to the organization.

Improve recruiting and hiring processes: The new compensation system for IT professionals needs to be linked to faster, enhanced recruitment and hiring processes.

Balance the three dimensions of equity: A new federal IT compensation system would provide a better balance among internal equity (that is, equity among government jobs), external equity (between jobs in the government and in other sectors), and contribution equity (among individual employees).

Offer competitive benefits: The new system would offer a more competitive benefits package for senior technical employees as well as executives.

Promote work/life balance programs: Federal managers and human resources specialists must actively market work/life benefits and programs so that potential IT workers are aware of them.

Encourage management ownership: Managers must (1) actively participate in the design and implementation of agency-specific features of the new system, (2) be rewarded for effectively implementing and managing the system, and (3) be held accountable for not carrying out their management responsibilities. Agency budgets and management decisions must support full implementation of the new system.

Support technical currency and continuous learning: Agency management should design and support developmental activities such as formalized training, on-the-job training, computer-assisted learning, self-instructional guides, coaching, and other approaches.

Build in reliability, clarity, and transparency: The new system must be reliable, meaning that it consistently conforms with policies so that the same set of circumstances always leads to a fair decision and result.

Although we have not analyzed all aspects of these recommendations, many of them are consistent with suggestions we have made in prior testimonies, as well as with the practices that we have instituted in our own internal human capital management. For example, we have suggested that government pay systems should be based on performance and contributions rather than on longevity. Similarly, in our own human capital management at GAO, we have implemented pay for performance and are developing a competency-based evaluation system. We have also suggested that government employers use more flexible approaches to setting pay; in our own human capital management system, we have instituted broad pay bands for mission staff. More examples are given in table 2, which compares the NAPA recommendations with related suggestions we have made in previous work and practices we have adopted within GAO.
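To make the first recommendation concrete, a market-based, pay-for-performance adjustment within a broad pay band might look like the sketch below. The band limits, market anchor, and rating-to-increase mapping are entirely invented for illustration; NAPA's report does not prescribe these numbers:

```python
# Hypothetical market-based, pay-for-performance adjustment:
# base pay anchors to a market rate within a broad band, and the
# annual increase is driven by a performance rating, not longevity.
BAND_MIN, BAND_MAX = 60_000, 110_000   # broad pay band (illustrative)
MARKET_RATE = 85_000                   # external-market anchor (illustrative)
INCREASE_BY_RATING = {"outstanding": 0.08, "exceeds": 0.05,
                      "meets": 0.03, "below": 0.0}

def adjusted_pay(current_pay, rating):
    # Pull below-market pay up toward the market anchor, then apply
    # the merit increase tied to the performance rating.
    anchored = max(current_pay, min(MARKET_RATE, BAND_MAX))
    raised = anchored * (1 + INCREASE_BY_RATING[rating])
    # Clamp the result to the broad band.
    return min(max(raised, BAND_MIN), BAND_MAX)
```

The contrast with the GS system's step increases is the point: here the inputs are the external market rate and the individual's rating, not time in grade.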
As noted in table 2, we have identified and made use of a variety of tools and flexibilities to address our human capital challenges; some of these were made available to us through the GAO Personnel Act of 1980 and some through legislation passed by the Congress in 2000, but most are available to all federal agencies. Figure 1 shows those flexibilities that were made available to us through legislation. Regarding our own IT and other technical staff, we have taken a number of steps to address our workforce needs, including the following:

Using a 25-percent pay differential (equal to the OPM pay differential for executive branch IT hires) to bring aboard entry-level technical staff for our IT team.

Offering pay bonuses in attracting and retaining workers for hard-to-fill positions, such as IT positions requiring specific technical skills.

Making wide use of contractor resources in the IT area to supplement both the numbers and skills of government employees. Currently, about 60 percent of the staff supporting GAO internal IT operations and initiatives are contractor staff. Given staffing constraints and market conditions, we have found this arrangement to work very well. We focus our training of in-house staff on project management, contract management, and technical training to ensure sound project management and oversight of the contractors. Using contractor resources has given us the ability to quickly bring on staff with the IT skills needed to carry out new projects/initiatives.

Providing other specialists—such as our Chief Statistician and Chief Accountant—with new titles and SES-equivalent benefits.

We believe that three of the authorities provided in our 2000 legislation may be appropriate to other agencies and are worth congressional consideration at this time.
Authority to offer voluntary early retirement and voluntary separation incentives could give agencies additional flexibilities with which to realign their workforces, correct skills imbalances, and reduce high-grade, managerial, or supervisory positions without reducing their overall number of employees. Further, the authority to establish Senior Level positions could help agencies become more competitive in the job market, particularly in critical scientific, technical, or professional areas, such as IT. Implementing reforms in human capital management will present significant challenges. Among the most difficult will be (1) the sustained commitment demanded from the executive and legislative branch leaders, including agencies, the Office of Management and Budget (OMB), the Office of Personnel Management (OPM), and Congress, and (2) the cultural transformation that will be required by a new approach to human capital management. In its report on IT human capital, NAPA also recognizes the importance of these two factors. The report identifies a number of steps that would be required for implementation of a new system (see figure 2). Among these, NAPA includes the need to promote leadership by identifying champions for the new system within agencies. Further, NAPA acknowledges in its discussion that implementing its recommendations will challenge the existing culture of many agencies, and it recommends change management and training efforts for both managers and employees. We agree that the steps that NAPA describes are essential elements for an effective implementation strategy. We have, for example, identified six elements that our work suggests are particularly important in implementing and sustaining management improvements that actually resolve the problems they address.
These elements are (1) a demonstrated leadership commitment and accountability for change; (2) the integration of management improvement initiatives into programmatic decisionmaking; (3) thoughtful and rigorous planning to guide decisions, particularly to address human capital and information technology issues; (4) employee involvement to elicit ideas and build commitment and accountability; (5) transforming organizational culture and aligning organizations to streamline operations and clarify accountability; and (6) strong and continuing congressional involvement. I would like to particularly highlight leadership and transforming organizational culture, two of the key elements in implementing such reforms. The sustained commitment of leaders within the executive and legislative branches is essential to the success of any implementation. The key players in the human capital area—agency leaders, OPM, OMB, and the Congress—all need to be actively involved in leading and creating change. For agency leaders, this means identifying the knowledge and skills their organizations need, assessing how the current workforce compares with these needs, and developing effective strategies to fill the gaps. A useful tool for assessing overall human capital management is GAO's human capital framework, which identifies a number of human capital elements and underlying values that are common to high-performing organizations. As our framework makes apparent, agencies must address a range of interrelated elements to ensure that their human capital approaches effectively support mission accomplishment. Although no single recipe exists for successful human capital management, high-performing organizations recognize that all human capital policies, practices, and investments must be designed, implemented, and assessed by the standard of how well they support the organization's vision of what it is and where it wants to go. For its part, OPM has taken steps to assist agencies in this area and has launched a Web site on workforce planning issues to facilitate information sharing.
Further, OPM recently revised the SES performance management regulations so that in evaluating executive performance, agencies will use a balanced scorecard of customer satisfaction, employee perspectives, and organizational results. In prior testimony, we have pointed out that OPM could make substantial additional contributions by taking advantage of its ability to facilitate information-sharing on best practices among human capital managers throughout the federal government. In short, OPM should continue to move from “rules to tools”; its most valuable contributions will come less from traditional compliance activities than from its initiatives as a strategic partner to the agencies. Like OPM, OMB has increased its efforts to promote strategic human capital management. OMB’s role in setting governmentwide management priorities and defining resource allocations is critical to encouraging agencies to integrate strategic human capital management into their core business processes. In this role, OMB recently released the FY2002 President’s Management Agenda, which provides the President’s strategy for improving the management and performance of the federal government. The report identifies strategic management of human capital as an area for governmentwide improvement. In line with suggestions we have made, OMB is expecting agencies to take full advantage of existing authorities to better acquire and develop a high-quality IT workforce. OMB also wants agencies to redistribute staff to front-line service delivery and reduce the number of organizational layers as they make better use of IT systems capabilities and knowledge sharing. Also, OMB’s Circular No. A-11 guidance on preparing annual performance plans states that agencies’ fiscal year 2002 annual performance plans should set goals in such areas as recruitment, retention, training, appraisals linked to program performance, workforce diversity, streamlining, and family-friendly programs.
We have also suggested that OMB promote benchmarking and best practices efforts within the executive branch and give greater attention during resource allocation to the links between agency missions and the human capital needed to pursue them. We have previously noted that leadership on the part of Congress will be critical if governmentwide improvements in strategic human capital management are to occur. To raise the visibility of the human capital issue and move toward a consensus on legislative reforms, both parties in both houses of Congress must stress commitment to people as an urgent federal management concern. Among the most encouraging developments in this regard have been the efforts of this Subcommittee to draw attention to human capital issues. Congress has opportunities available through its confirmation, oversight and appropriations, and legislative roles to ensure that agencies recognize their responsibilities and have the needed tools to manage their people for results. For example, Congress can draw wider attention to the critical role of human capital in the confirmation process, during which the Senate can make clear its commitment to sound federal management and explore what prospective nominees plan to do to ensure that their agencies recognize and enhance the value of their people. As part of the oversight and appropriations processes, Congress can examine whether agencies are effectively managing their human capital programs. It can also encourage more agencies to use the flexibilities available to them under current law and to reexamine their approaches to strategic human capital management in the context of their individual missions, goals, and other organizational needs. In seeking new flexibilities, agencies in turn should give Congress a clear understanding of their needs, the constraints under which they presently operate, and the flexibilities available to them.
For example, before we submitted human capital legislative proposals for GAO last year, we made sure not only to identify in our own minds the human capital flexibilities that we needed, but also to give Congress a clear indication of our needs, our rationale, and the steps we were taking to maximize benefits and manage risks. Ultimately, Congress may wish to consider comprehensive legislative reform in the human capital area to give agencies the tools and reasonable flexibilities they need to manage effectively while retaining appropriate safeguards. As part of this effort, Congress may also wish to consider the extent to which traditional “civil service” approaches—structures, oversight mechanisms, rules and regulations, and direction-setting—make sense for a government that is largely a knowledge-based enterprise that has adopted and is now implementing modern performance management principles. Another critical challenge for implementing any reform is addressing needed changes in prevailing organizational cultures. As we have noted in previous testimony, a cultural transformation will be key for a successful transition to a new approach to human capital management. A culture of hierarchical management approaches will need to yield to one of partnerial approaches; process-oriented ways of doing business will need to yield to results-oriented ones; and organization “silos” will need to become integrated. Although government organizations have often proven to be slow to make these kinds of cultural changes, agencies that expect to make the best use of their human capital will need to create a culture that strongly emphasizes performance and supports employees in accomplishing their missions. Such a culture will include appropriate performance measures and rewards and a focus on continuous learning and knowledge management. We have seen how entrenched organizational cultures can contribute to performance shortfalls.
Cultural issues have also been linked to long-standing security problems at Department of Energy weapons laboratories, and to intractable waste, fraud, abuse, and mismanagement problems in the Social Security Administration’s high-risk Supplemental Security Income program. Overcoming such problems requires breaking down the barriers that result from an entrenched organizational culture. Doing so will demand innovation and flexibility in human capital management, as well as continual efforts to capture data not only on employee performance, but also on the effectiveness of agencies’ efforts at human capital management. In summary, Mr. Chairman, designing, implementing, and maintaining effective human capital strategies for all federal workers, but particularly for the IT workforce, will be critical to achieving the goals of maximizing the performance and ensuring the accountability of the federal government. In a performance management environment where federal agencies are held accountable for delivering improvements in program performance, the “people dimension” is of paramount importance. Overcoming human capital management challenges will determine how successfully the federal government can build, prepare, and manage its workforce. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions that you or other members of the Subcommittee may have at this time. For further information regarding this testimony, please contact me at (202) 512-6240 or by email at [email protected]. Individuals making key contributions to this testimony included Barbara Collier and Margaret Davis.
Federal agencies face few tasks more critical than attracting, retaining, and motivating people. As our society has moved from the industrial age to the knowledge age, the success or failure of federal agencies can depend on having the right number of people with the right mix of knowledge and skills. This is especially true in the information technology (IT) area, where widespread shortfalls in human capital have undermined agency and program performance. This report discusses strategic human capital management as a high-risk area, summarizes agencies’ progress in addressing IT human capital needs, compares suggestions GAO made in earlier testimonies with those made in a recent report by the National Academy of Public Administration, and highlights important challenges to implementing IT human capital reform proposals.
VA operates one of the nation’s largest health care systems, spending about $26.5 billion a year to provide care to approximately 5.2 million veterans who receive health care through 158 VA medical centers (VAMC) and almost 900 outpatient clinics nationwide. DOD spends about $26.7 billion on health care for over 8.9 million beneficiaries, including active duty personnel, retirees, and their dependents. Most DOD health care is provided at more than 530 Army, Navy, and Air Force military treatment facilities (MTF) worldwide, supplemented by civilian providers. To encourage sharing of federal health resources between VA and DOD, Congress passed the Sharing Act in 1982. Previously, VA and DOD health care facilities, many of which are collocated or in close geographic proximity, operated virtually independently of each other. The Sharing Act authorizes VAMCs and MTFs to become partners and enter into sharing agreements to buy, sell, and barter medical and support services. The head of each VA and DOD medical facility can enter into local sharing agreements. However, VA and DOD headquarters officials review and approve agreements that involve national commitments, such as joint purchasing of pharmaceuticals. Agreements can be valid for up to 5 years. The intent of the law was not only to remove legal barriers, but also to encourage VA and DOD to engage in health resource sharing to more effectively and efficiently use federal health resources. VA and DOD sharing activities fall into three categories:
- Local sharing agreements allow VA and DOD to take advantage of their capacities to provide health care by acting as a provider of health services, a receiver of health services, or both. Health services shared under these agreements can include inpatient and outpatient care; ancillary services, such as diagnostic and therapeutic radiology; dental care; and specialty care services, such as treatment of spinal cord injury.
Other services shared under these agreements include support services such as administration and management, research, education and training, patient transportation, and laundry. The goals of local sharing agreements are to allow VAMCs and MTFs to exchange health services in order to maximize their use of resources and provide beneficiaries with greater access to care.
- Joint venture sharing agreements, as distinguished from local sharing agreements, aim to avoid costs by pooling resources to build a new facility or jointly use an existing facility. Joint ventures require more cooperation and flexibility than local agreements because two separate health care systems must develop multiple sharing agreements that allow them to operate as one system at one location.
- National sharing initiatives are designed to achieve greater efficiencies, that is, lower cost and better access to goods and services when they are acquired on a national level rather than by individual facilities—for example, VA and DOD’s efforts to jointly purchase pharmaceuticals for nationwide distribution.
VA and DOD are realizing benefits from sharing activities, specifically greater access to care, reduced federal costs, and better facility utilization at the 16 sites we reviewed. While all 16 sites were engaged in health resource sharing activities, some sites share significantly more resources than others. In 1994, VA and DOD opened a joint venture hospital in Las Vegas, Nevada, to provide services to VA and DOD beneficiaries. The joint venture improved access for VA beneficiaries by providing an alternative source of care other than traveling to VA facilities in Southern California. It also improved access to specialized providers for DOD beneficiaries.
Examples of the types of services provided include vascular surgery, plastic surgery, cardiology, pulmonary medicine, psychiatry, ophthalmology, urology, computed tomography scans, magnetic resonance imaging (MRI), nuclear medicine, emergency medicine and emergency room services, and respiratory therapy. The site is currently enlarging the emergency room. In Pensacola, Florida, under a sharing agreement entered into in 2000, VA buys most of its inpatient services from Naval Hospital Pensacola. Through this agreement VA is able to utilize Navy facilities and reduce its reliance on civilian providers, thus lowering its purchased care cost by about $385,000 annually. Further, according to a VA official, the agreement has allowed VA to modify its plans to build a new hospital and instead build a clinic at significantly reduced cost to meet increasing veteran demand for health care services. Using VA’s cost per square foot estimates for hospital and clinic construction, the agency estimates that it will cost $45 million to build a new clinic compared to $100 million for a hospital. In Louisville, Kentucky, since 1996, VA and the Army have been engaged in sharing activities to provide services to beneficiaries that include primary care, audiology, radiology, podiatry, urology, internal medicine, and ophthalmology. For fiscal year 2003, a local VA official estimated that, through its agreements with the Army, VA reduced its cost by $1.7 million as compared to acquiring the same services in the private sector; he also estimated that the Army reduced its cost by about $1.25 million as compared to acquiring the same services in the private sector. As an example of the site’s efforts to improve access to care and reduce costs, in 2003 VA and DOD jointly leased an MRI unit. The unit reduces the need for VA and DOD beneficiaries to travel to more distant sources of care.
A Louisville VA official stated that the jointly leased unit reduced the cost by 20 percent as compared to acquiring the same services in the private sector. In San Antonio, Texas, VA and the Air Force share a blood bank. Under a 1991 sharing agreement, VA provides the staff to operate the blood bank and the Air Force provides the space and equipment. According to VA, the blood bank agreement saves VA and DOD about $400,000 per year. Further, VA entered into a laundry service agreement with Brooke Army Medical Center in 2002 to utilize some of VA’s excess laundry capacity. Under the contract VA processes 1.7 million pounds of laundry each year for the Army at an annual cost of $875,000. Sites such as Las Vegas, Nevada; Pensacola, Florida; Louisville, Kentucky; and San Antonio, Texas, shared significant resources compared to sites at Los Angeles, California, and Charleston, South Carolina. For example, the sharing agreement at Los Angeles provided for the use of a nurse practitioner to assist with primary care and the sharing of a psychiatrist and a psychologist. See appendix II for the VA and DOD partners at each of the 16 sites and examples of the sharing activities taking place. The primary obstacle cited by officials at 14 of the 16 sites we interviewed was the inability of computer systems to communicate and share patient health information between the departments. Furthermore, local VA and DOD officials involved with sharing activities raised a concern that security check-in procedures implemented since September 11, 2001, have increased the time it takes to gain entry to medical facilities located on military installations during periods of heightened security. VA’s and DOD’s patient record systems cannot share patient health information electronically. The inability of VA’s and DOD’s patient record systems to quickly and readily share information on the health care provided at medical facilities is a significant obstacle to sharing activities.
One critical challenge to successfully sharing information will be to standardize the data elements of each department’s health records. While standards for laboratory results were adopted in 2003, VA and DOD face a significant undertaking to standardize the remaining health data. According to the joint strategy that VA and DOD have developed, VA will have to migrate over 150 variations of clinical and demographic data to one standard, and DOD will have to migrate over 100 variations of clinical data to one standard. The inability of VA and DOD computer systems to share information forces the medical facilities involved in treating both agencies’ patient populations to expend staff resources to maintain patient records in both systems. For example, at Travis Air Force Base, both patient record systems have been loaded onto a single workstation in each department so that nurses and physicians can enter patient encounter data into both systems. However, the user must access and enter data into each system separately. In addition to VA and DOD officials’ concerns about the added costs in terms of staff time, this method of sharing medical information raises the potential for errors—including double entry and transcription errors—possibly compromising medical data integrity. VA and DOD have been working since 1998 to modify their computer systems to ensure that patient health information can be shared between the two departments. In May 2004, we reported that they have accomplished a one-way transfer of limited health data from DOD to VA for separated service members. Through the transfer, health care data for separated service members are available to all VA medical facilities. This transfer gives VA clinicians the ability to access and display health care data through VA’s computerized patient record system remote data views about 6 weeks after the service member’s separation.
The health care data include laboratory, pharmacy, and radiology records, and are available for approximately 1.8 million personnel who separated from the military between 1987 and June 2003. A second phase of the one-way transfer, completed in September 2003, added to the base of health information available to VA clinicians by including discharge summaries, allergy information, admissions information, and consultation results. VA and DOD are developing a two-way transfer of health information for patients who obtain care from both systems. Patients involved include those who receive care and maintain health records at multiple VA or DOD medical facilities within and outside the United States. When such a patient seeks care at a VA facility, the attending VA clinician would be provided access to the patient’s clinical information residing in DOD’s computerized health record systems. In the same manner, when a veteran seeks medical care at an MTF, the attending DOD clinician would be provided access to the veteran’s health information existing in VA’s computerized health record systems. In May 2004, we reported that VA’s and DOD’s approach to achieving the two-way transfer of health information lacks a solid foundation and that the departments have made little progress toward defining how they intend to accomplish it. In March 2004 and June 2004, we also reported that VA and DOD have not fully established a project management structure to ensure the necessary day-to-day guidance of and accountability for the undertaking, adding to the challenge and uncertainties of developing two-way information exchange. Further, we reported that the departments were operating without a project management plan that describes their specific development, testing, and deployment responsibilities. These issues cause us to question whether the departments will meet their 2005 target date for two-way patient health information exchange.
During times of heightened security since September 11, 2001, according to VA and DOD officials, screening procedures have slowed entry for VA beneficiaries, and particularly for family members who accompany them, to facilities located on Air Force, Army, and Navy installations. For example, instead of driving onto Nellis Air Force Base in Las Vegas and parking at the medical facility, veterans seeking treatment there must park outside the base perimeter, undergo a security screening, and wait for shuttle services to take them to the hospital for care. Although sharing occurs in North Carolina between the Fayetteville VA Medical Center and the Womack Army Medical Center, Ft. Bragg, the VA hospital administrator expressed concerns regarding any future plans to build a joint VA and DOD clinic at Ft. Bragg due to security precautions—identity checks and automobile searches—that VA beneficiaries encounter when attempting to access care. Consequently, the administrator prefers that any new clinics be located on VA property for ease of access for all beneficiaries. VA provided an example of how it and DOD are working to help resolve these problems. In Pensacola, Florida, VA is building a joint ambulatory care clinic on Navy property through a land-use arrangement. According to VA, veterans’ access to the clinic will be made easier. A security fence will be built around the building site on shared VA and Navy boundaries, and a separate entrance and access road to a public highway will allow direct entry. Special security arrangements will be necessary only for those veterans who are referred for services at the Navy medical treatment facility. Veterans who come to the clinic for routine care will experience the same security measures as at any other VA clinic or medical center. VA believes this arrangement gives it optimal operational control and facilitates veterans’ access while addressing DOD security concerns.
We requested comments on a draft of this report from VA and DOD. Both agencies provided written comments, which are found in appendix III. VA and DOD generally agreed with our findings. They also provided technical comments that we incorporated where appropriate. In commenting on this draft, VA stated that VA and DOD are developing an electronic interface that will support a bidirectional sharing of health data. This approach is set forth in the Joint VA/DOD Electronic Health Records Plan. According to VA, the plan provides a documented strategy for the departments to achieve interoperable health systems in 2005. It includes the development of a health information infrastructure and architecture, supported by common data, communications, security, and software standards, and high-performance health information systems. VA believes these actions will achieve the two-way transfer of health information and communication between VA’s and DOD’s information systems. In its comments, DOD acknowledged the importance of VA and DOD developing computer systems that can share patient record information electronically. According to DOD, VA and DOD are taking steps to improve the electronic exchange of information. For example, VA and DOD have implemented a joint project management structure for information management and information technology initiatives, which includes a single Program Manager and a single Deputy Program Manager with joint accountability and day-to-day responsibility for project implementation. Further, VA and DOD continue to play key roles as lead partners to establish federal health information interoperability standards as the basis for electronic health data transfer. We recognize that VA and DOD are taking actions to implement the Joint VA/DOD Electronic Health Records Plan and the joint project management structure, and that they face significant challenges in doing so.
Accomplishing these tasks is a critical step toward the departments’ goal of achieving interoperable health systems by the end of 2005. DOD also agreed with the GAO findings on issues relating to veterans’ access to military treatment facilities located on Air Force, Army, and Navy installations during periods of heightened security. DOD stated that it is working diligently to solve these problems but is unlikely to achieve an early resolution. It also stated that as VA and DOD plan for the future, they will consider this issue during the development of future sharing agreements and joint ventures. We are sending copies of this report to the Secretary of Veterans Affairs, the Secretary of Defense, interested congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, this report is available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-7101 or Michael T. Blair, Jr., at (404) 679-1944. Aditi Shah Archer and Michael Tropauer contributed to this report. This report describes the benefits that are being realized at 16 Department of Veterans Affairs (VA) and Department of Defense (DOD) sites that are engaged in health resource sharing activities. Nine of the sites were the focus of a February 2002 House Committee on Veterans’ Affairs report that described health resource sharing activities between VA and DOD. We selected seven other sites that actively participated in sharing activities to ensure representation from each service at locations throughout the nation. To obtain information on the resources that are being shared, we analyzed agency documents and interviewed officials at VA and DOD headquarters offices and at VA and DOD field offices who manage sharing activities at the 16 sites.
To gain information on the benefits of sharing and the problems that impede sharing at selected VA and DOD sites, we asked VA and DOD personnel at the 16 sites to provide us with information on shared services provided to beneficiaries, including improvements or enhancements to the delivery of health care and reductions in costs, as well as their opinions on barriers or obstacles that exist either internally (within their own agency) or externally (with their partner service or agency). Ten sites provided information on estimated cost reductions. We reviewed the supporting documentation and obtained clarifying information from agency officials. Based on our review of the documentation and subsequent discussions with agency officials, we accepted the estimates as reasonable. From the 16 sites, we judgmentally selected the following six sites to visit: (1) Fairfield, California; (2) Pensacola, Florida; (3) Louisville, Kentucky; (4) Fayetteville, North Carolina; (5) Las Vegas, Nevada; and (6) Charleston, South Carolina. At the sites we visited, we interviewed local VA and DOD officials to obtain their views on resource-sharing activities and obtained documents from them on the types of services that were being shared. The sites were selected based on the following criteria: (1) representation from each military service; (2) geographic location; and (3) type of sharing agreement—local sharing agreement, joint venture, or participant in a national sharing initiative. We conducted telephone interviews with agency officials at the 10 sites that we did not visit and requested supporting documentation from them to gain an understanding of the sharing activities underway at each site. We obtained and reviewed VA and DOD policies and regulations governing sharing agreements and reviewed our prior work and relevant reports issued by the DOD Inspector General and DOD contractors.
Our work was performed from June 2003 through June 2004 in accordance with generally accepted government auditing standards.
Partners: Alaska VA Healthcare System and 3rd Medical Group, Elmendorf Air Force Base
The Department of Veterans Affairs (VA) and the Air Force have had a resource-sharing arrangement since 1992. Building upon that arrangement, in 1999, VA and the Air Force entered into a joint venture hospital. According to VA and Air Force officials, they have been able to efficiently and effectively provide services to both VA and Department of Defense (DOD) beneficiaries in the Anchorage area that would not have been otherwise possible. The services to VA and DOD beneficiaries include emergency room, outpatient, and inpatient care. Other services the Air Force provides VA include diagnostic radiology, clinical and anatomical pathology, nuclear medicine, and MRI. VA contributes approximately 60 staff toward the joint venture. VA staff are primarily responsible for operating the 10-bed intensive care unit (ICU). For fiscal year 2002, a DOD official estimated that the Air Force avoids costs of about $6.6 million by utilizing the ICU as compared to acquiring the same services in the private sector. Other VA staffing in the hospital lends support to the emergency department, medical and surgical unit, social work services, supply processing and distribution, and administration.
Partners: VA Northern California Health Care System and 60th Medical Group, Travis Air Force Base
In 1994, VA and the Air Force entered into a joint venture at Travis Air Force Base. Under this joint venture, VA contracts for inpatient care, radiation therapy, and other specialty, ancillary, and after-hours teleradiology services it needs from the Air Force. In return, the Air Force contracts for ancillary and pharmacy support from VA.
The most recent expansion of the joint venture, in 2001, included activation of a VA clinic located adjacent to the Air Force hospital—this clinic includes a joint neurosurgery clinic. Each entity currently reimburses the other at 75 percent of the Civilian Health and Medical Program of the Uniformed Services (CHAMPUS) Maximum Allowable Charge (CMAC) rate. In March 2004, a VA official estimated that VA saves about $500,000 per year by participating in the joint venture, and an Air Force official estimated that the Air Force saves about $300,000 per year through the joint venture.
Partners: Veterans Affairs Greater Los Angeles Healthcare System and 61st Medical Squadron, Los Angeles Air Force Base
The Air Force contracts for mental health services from the Veterans Affairs medical center (VAMC). According to Air Force and VA local officials, there are two agreements in place. First, VA provides a psychologist and a psychiatrist who provide on-site services to DOD beneficiaries (one provider comes once a week, the other comes 2 days a month). The total cost of this annual contract is about $200,000. According to the Air Force, it is paying 90 percent of the CMAC rate for these services and is thereby saving about $20,000 to $22,000 a year. Second, the Air Force is using a VA nurse practitioner to assist with primary care. The cost savings were not calculated, but the Air Force stated that VA was able to provide this staffing at a significantly reduced cost as compared to contracting with the private sector.
Partners: VA San Diego Healthcare System and Naval Medical Center San Diego
VA provides graduate medical education, pathology and laboratory testing, and outpatient and ancillary services to the Navy. According to Navy officials, the sharing agreements resulted in a cost reduction of about $100,000 per year for fiscal years 2002 and 2003.
As of June 2004, VA and the Navy were in the process of finalizing agreements for sharing radiation therapy, a blood bank, and mammography services. In fiscal year 2003, San Diego was selected as a pilot location for the VA/DOD Consolidated Mail Outpatient Pharmacy (CMOP) program. A naval official at San Diego considers the pilot a success at this location because participation was about 75 percent and it helped eliminate the traffic, congestion, and parking problems associated with beneficiaries who come on site to the Navy’s medical campus for medication refills—an average of 350 patients per day. According to a DOD official, the CMOP pilot in San Diego will likely continue through fiscal year 2004. Partners: VA Miami Medical Center and Naval Hospital Jacksonville VA and the Navy have shared space and services since 1987. The Key West Clinic became a joint venture location in 2000. VA physically occupies 10 percent of the Navy clinic in Key West. The clinic is a primary care facility; however, it also provides psychiatry, internal medicine, and part-time physical therapy services. According to Navy officials, there are two VA physicians on call at the clinic and seven Navy physicians. The Navy’s physicians examine VA patients when needed, and the Navy bills VA at 90 percent of the CMAC rate. Further, VA reimburses the Navy 10 percent of the total cost for housekeeping and utilities. VA and the Navy share laboratory and pathology, radiology, optometry, and pharmacy services. VA reimburses the Navy $4 for the packaging and dispensing of each prescription. Partners: VA Gulf Coast Veterans Health Care System and Naval Hospital Pensacola Since 2000, the Navy has provided services to VA beneficiaries at its hospital through sharing agreements that include emergency room services, obstetrics, pharmacy services, inpatient care, urology, and diagnostic services. In turn, VA provides mental health and laundry services to Navy beneficiaries. 
In fiscal year 2002, the Naval Hospital Pensacola met about 88 percent of VA’s inpatient needs. The Navy provided 163 emergency room visits, 112 outpatient visits, and 8 orthopedic surgical procedures to VA beneficiaries. Through this agreement VA has reduced its reliance on civilian providers, thus lowering its purchased care cost by about $385,000 annually. Further, according to a VA official, the agreement has allowed VA to modify its plans to build a new hospital to meet increasing veteran demand for health care services. Rather than build a new hospital, VA intends to build a clinic to meet outpatient needs. Using VA’s cost per square foot estimates for hospital and clinic construction, the agency estimates that it will cost $45 million to build a new clinic compared to $100 million for a new hospital. Partners: VA Pacific Islands Healthcare System and Tripler Army Medical Center VA and the Army entered into a joint venture in 1991. According to VA and Army officials, over $50 million was saved in construction costs when VA built a clinic adjacent to the existing Army hospital. According to a VA official, the Army hospital is the primary facility for care for most VA and Army beneficiaries. The Army provides VA beneficiaries with access to the following services: inpatient care, intensive care, emergency room, chemotherapy, radiology, laboratory, dental, and education and training for physicians and nurses. Also, as part of the joint venture agreement, VA physicians are assigned to the Army hospital to provide care to VA patients. VA and the Army provided services to about 18,000 VA beneficiaries in 2003. According to an Army official, the joint venture as a whole provides no savings to the Army. The benefit to the Army is assured access for its providers to clinical cases necessary for maintenance of clinical skills and for Graduate Medical Education through the reimbursed workload. 
Partners: North Chicago VA Medical Center and Naval Hospital Great Lakes VA provides Navy beneficiaries with inpatient psychiatry and intensive care, as well as outpatient clinic visits such as pulmonary care, neurology, gastrointestinal care, diabetic care, occupational and physical therapy, speech therapy, rehabilitation, and diagnostic tests. VA also provides medical training to Naval corpsmen, nursing staff, and dental residents. The Navy provides selected surgical services for VA beneficiaries such as joint replacement and cataract surgeries. In addition, as available, the Navy provides selected outpatient services, mammograms, magnetic resonance imaging (MRI) examinations, and laboratory tests. The 2-year cost under this agreement, from October 2001 through September 2003, was about $295,000 for VA and about $502,000 for the Navy. According to VA officials, VA and DOD pay each other 90 percent of the CMAC rate for these services. As a result, for the 2-year period VA and DOD reduced their costs by about $88,000 through this agreement, as opposed to contracting with the private sector for these services. VA officials also stated that other benefits were derived from these agreements, including sharing of pastoral care, pharmacy support, educational and training opportunities, imaging, and the collaboration of contracting and acquisition opportunities, all resulting in additional services being provided to patients at an overall reduced cost, plus more timely and convenient care. According to VA, in October 2003 the Navy transferred its acute inpatient mental health program to the North Chicago VA medical center, where staff operate a 10-bed acute mental health ward, which has resulted in an estimated cost reduction of $323,000. The transfer also included a 10-bed medical hold unit. Further, VA and the Navy are pursuing a joint venture opportunity planned for award in fiscal year 2004, which would integrate the medical and surgical inpatient programs. 
This would result in the construction of four new operating rooms and the integration of the acute outpatient evaluation units at VA. The Navy would continue to provide surgical procedures and related inpatient follow-up care for Navy patients at the VA facility. The joint venture would eliminate the need for the Navy to construct replacement inpatient beds as part of the Navy’s planned Great Lakes Naval hospital replacement facility. According to VA, this joint venture would result in an estimated cost reduction of about $4 million. Partners: VA Medical Center Louisville and Ireland Army Community Hospital, Ft. Knox Since 1996, in Louisville, Kentucky, VA and the Army have been engaged in sharing activities to provide services to beneficiaries that include primary care, acute care pharmacy, ambulatory care, blood bank, intensive care, pathology and laboratory, audiology, podiatry, urology, internal medicine, and ophthalmology services. For fiscal year 2003, a local VA official estimated that VA reduced its cost by $1.7 million through its agreements with the Army, as compared to acquiring the same services in the private sector; he also estimated that the Army reduced its cost by about $1.25 million as compared to acquiring the same services in the private sector. As an example of the site’s efforts to improve access to care, in 2003 VA and DOD jointly leased an MRI unit. The unit eliminates the need for beneficiaries to travel to more distant sources of care. A Louisville VA official stated that the joint lease reduced the cost by 20 percent as compared to acquiring the same services in the private sector. Partners: VA Southern Nevada Healthcare System and 99th Medical Group, Nellis Air Force Base In this joint venture, VA and the Air Force operate an integrated medical hospital. Prior to 1994, VA had no inpatient capabilities in Las Vegas. This required VA beneficiaries to travel to VA facilities in Southern California for their inpatient care. 
This joint venture also improved access to specialized providers for DOD beneficiaries. The following services are available at the joint venture: anesthesia, facility and acute care pharmacy, blood bank, general surgery, mental health, intensive care, mammography, obstetrics and gynecology, orthopedics, pathology and laboratory, vascular surgery, plastic surgery, cardiology, pulmonary, psychiatry, ophthalmology, urology, podiatry, computed tomography scan, MRI, nuclear medicine, emergency medicine and emergency room, and pulmonary and respiratory therapy. VA and Air Force officials estimate that the joint venture reduces their cost of health care delivery by over $15 million annually. Currently, the site is in the process of enlarging the hospital’s emergency room. According to a VA official, during periods of heightened security, veterans seeking treatment from the hospital at Nellis Air Force Base in Las Vegas must park outside the base perimeter, undergo a security screening, and wait for shuttle services to take them to the hospital for care. Partners: New Mexico VA Health Care System and 377th Medical Group, Kirtland Air Force Base According to VA and Air Force officials, Albuquerque is the only joint venture site where VA provides the majority of health care to Air Force beneficiaries. The Air Force purchases all inpatient clinical care services from VA. The Air Force also operates a facility, including a dental clinic, adjacent to the hospital. According to an Air Force official, for fiscal year 2003 the Air Force avoided costs of about $1,278,000 for inpatient, outpatient, and ambulatory services. It also avoided costs of about $288,000 for emergency room and ancillary services. The Air Force official estimates that under the joint venture it has saved about 25 percent of what it would have paid in the private sector. 
Further, according to the Air Force official, additional benefits that are important to beneficiaries are derived from the joint venture, such as: 1) continuity of care, 2) rapid turnaround through the referral process, 3) easier access to specialty providers, and 4) an overall increase in patient satisfaction. Additionally, both facilities individually provide women’s health services (primary care, surgical, obstetrics and gynecology) to their beneficiaries. The Air Force official reported in March 2004 that the two facilities were evaluating how they could jointly provide these services. In fiscal year 2003, Kirtland Air Force Base was selected as a pilot location for the CMOP program. According to a DOD official, the CMOP pilot at Kirtland Air Force Base will likely continue through fiscal year 2004. Partners: Fayetteville VA Medical Center and Womack Army Medical Center, Fort Bragg According to a VA official, VA and Army shared resources include blood services, general surgery, pathology, urology, the sharing of one nuclear medicine physician and one psychiatrist, a dental residency program, and limited use by VA of an Army MRI unit. Partners: Ralph H. Johnson VA Medical Center and Naval Hospital Charleston According to Navy officials, with the downsizing of the Naval Hospital Charleston and the transfer of its inpatient workload to the Trident Health Care system (a private health care system), VA and the Navy no longer share inpatient services, except in cases where the Navy requires mental health inpatient services. However, in June 2004, VA approved a minor construction joint outpatient project totaling $4.9 million (scheduled for funding in fiscal year 2006, with activation planned for fiscal year 2008). Design meetings are underway. Among the significant sharing opportunities for this new facility are laboratory, radiology, and specialty services. 
Partners: El Paso VA Health Care System and William Beaumont Army Medical Center, Fort Bliss In this joint venture, VA contracts with the Army for emergency department services, specialty consultation services, and inpatient medicine, surgery, psychiatry, and intensive care services. The Army contracts with VA for backup services, including computed tomography and operating suite access. According to VA officials, the Army provides all general and vascular surgery services so that no veteran has to leave El Paso for these services. This eliminates the need for El Paso’s veterans to travel over 500 miles round-trip to obtain these surgical procedures from the Albuquerque VAMC—the veterans’ closest source of VA medical care. The Army provides these services at 90 percent of the CMAC rate or in some cases at an even lower rate. According to a VA official, as of June 2004 VA and the Army have agreed to proceed with a VA lease of the 7th floor of the William Beaumont Army Medical Center. VA would use the space to operate an inpatient psychiatry ward and a medical surgery ward, and VA would staff both wards. In fiscal year 2004, El Paso was approved as a pilot location for testing a system that stores VA and DOD patient laboratory results electronically. Partners: South Texas Veterans Health Care System; Wilford Hall Medical Center, Lackland Air Force Base; and Brooke Army Medical Center, Fort Sam Houston As of March 2004, a VA official stated that VA and DOD have over 20 active agreements in place in San Antonio. Some of the sharing activities between VA and the Air Force include radiology, maternity, laboratory, general surgery, and a blood bank. Since 2001, VA has staffed the blood bank and the Air Force has provided the space and equipment—the blood bank provides services to VA and Air Force beneficiaries. According to VA, the blood bank agreement saves VA and DOD about $400,000 per year. 
Further, according to Air Force officials, as of June 2004 VA and the Air Force were negotiating to jointly operate the Air Force’s ICU. The Air Force would supply the acute beds and VA would provide the staff. This joint unit would provide services to both beneficiary populations. In addition, VA and Army agreements include the following areas of service: gynecology, sleep laboratory, radiology, and laundry. According to VA officials, VA entered into a laundry service agreement with Brooke Army Medical Center in 2002 to utilize some of VA’s excess laundry capacity. Under the contract, VA processes about 1.7 million pounds of laundry each year for the Army at an annual cost of $875,000. Partners: VA Puget Sound Health Care System and Madigan Army Medical Center, Ft. Lewis As of June 2004, VA and the Army have two sharing agreements in place that encompass several shared services. For example, the Army provides VA beneficiaries with emergency room, inpatient, mammography, and cardiac services. VA provides the Army with computer training services, laboratory testing, and radiology and gastrointestinal physician services on-site at Madigan. In addition, VA nursing and midlevel staff provide support to the Army inpatient medicine service. In turn, the Army provides 15 inpatient medicine beds for veterans. During fiscal year 2002, VA paid the Army $900 per ward day per patient for inpatient care and $1,720 per ICU day. During fiscal year 2002, 69 VA patients were discharged, with 117 ward days and 101 ICU days, averaging $1,280 per day. According to VA officials, this agreement resulted in a cost reduction because contracting with private providers would have cost an average of $1,939 per day; the cost reduction to VA was $143,752. VA and the Army also jointly staff clinics for otolaryngology (1/2 day per week) and ophthalmology (3 half-day clinics per month). 
This agreement results in a cost reduction of about $25,000 per year to VA compared to contracting with the private sector. Other services such as mammography do not result in a cost reduction, but according to VA officials they provide their beneficiaries with another source for accessing care. Computer-Based Patient Records: VA and DOD Efforts to Exchange Health Data Could Benefit from Improved Planning and Project Management. GAO-04-687. Washington, D.C.: June 7, 2004. Computer-Based Patient Records: Improved Planning and Project Management Are Critical to Achieving Two-Way VA-DOD Health Data Exchange. GAO-04-811T. Washington, D.C.: May 19, 2004. Computer-Based Patient Records: Sound Planning and Project Management Are Needed to Achieve a Two-Way Exchange of VA and DOD Health Data. GAO-04-402T. Washington, D.C.: March 17, 2004. DOD and VA Health Care: Incentives Program for Sharing Resources. GAO-04-495R. Washington, D.C.: February 27, 2004. Veterans Affairs: Post-hearing Questions Regarding the Departments of Defense and Veterans Affairs Providing Seamless Health Care Coverage to Transitioning Veterans. GAO-04-294R. Washington, D.C.: November 25, 2003. Computer-Based Patient Records: Short-Term Progress Made, but Much Work Remains to Achieve a Two-Way Data Exchange Between VA and DOD Health Systems. GAO-04-271T. Washington, D.C.: November 19, 2003. DOD and VA Health Care: Access for Dual Eligible Beneficiaries. GAO- 03-904R. Washington, D.C.: June 13, 2003. VA and Defense Health Care: Increased Risk of Medication Errors for Shared Patients. GAO-02-1017. Washington, D.C.: September 27, 2002. VA and Defense Health Care: Potential Exists for Savings through Joint Purchasing of Medical and Surgical Supplies. GAO-02-872T. Washington, D.C.: June 26, 2002. DOD and VA Pharmacy: Progress and Remaining Challenges in Jointly Buying and Mailing Out Drugs. GAO-01-588. Washington, D.C.: May 25, 2001. 
Computer-Based Patient Records: Better Planning and Oversight By VA, DOD, and IHS Would Enhance Health Data Sharing. GAO-01-459. Washington, D.C.: April 30, 2001. VA and Defense Health Care: Evolving Health Care Systems Require Rethinking of Resource Sharing Strategies. GAO/HEHS-00-52. Washington, D.C.: May 17, 2000. VA and Defense Health Care: Rethinking of Resource Sharing Strategies is Needed. GAO/T-HEHS-00-117. Washington, D.C.: May 17, 2000. VA/DOD Health Care: Further Opportunities to Increase the Sharing of Medical Resources. GAO/HRD-88-51. Washington, D.C.: March 1, 1988. Legislation Needed to Encourage Better Use of Federal Medical Resources and Remove Obstacles To Interagency Sharing. HRD-78-54. Washington, D.C.: June 14, 1978.
Congress has long encouraged the Department of Veterans Affairs (VA) and the Department of Defense (DOD) to share health resources to promote cost-effective use of health resources and efficient delivery of care. In February 2002, the House Committee on Veterans' Affairs described VA and DOD health care resource sharing activities at nine locations. GAO was asked to describe the health resource sharing activities that are occurring at these sites. GAO also examined seven other sites that actively participate in sharing activities. Specifically, GAO is reporting on (1) the types of benefits that have been realized from health resource sharing activities and (2) VA- and DOD-identified obstacles that impede health resource sharing. GAO analyzed agency documents and interviewed officials at DOD and VA to obtain information on the benefits achieved through sharing activities. The nine sites reviewed by the Committee and reexamined by GAO are: 1) Los Angeles, CA; 2) San Diego, CA; 3) North Chicago, IL; 4) Albuquerque, NM; 5) Las Vegas, NV; 6) Fayetteville, NC; 7) Charleston, SC; 8) El Paso, TX; and 9) San Antonio, TX. The seven additional sites GAO examined are: 1) Anchorage, AK; 2) Fairfield, CA; 3) Key West, FL; 4) Pensacola, FL; 5) Honolulu, HI; 6) Louisville, KY; and 7) Puget Sound, WA. In commenting on a draft of this report, the departments generally agreed with our findings. At the 16 sites GAO reviewed, VA and DOD are realizing benefits from sharing activities, specifically better facility utilization, greater access to care, and reduced federal costs. While all 16 sites are engaged in health resource sharing activities, some sites share significantly more resources than others. For example, at one site VA was able to utilize Navy facilities to provide additional sources of care and reduce its reliance on civilian providers, thus lowering its purchased care cost by about $385,000 annually. 
Also, because of the sharing activity taking place at this site, VA has modified its plans to build a new $100 million hospital and instead plans to build a clinic that will cost about $45 million. However, at another site the sharing activity was limited to the use of a nurse practitioner to assist with primary care and the sharing of a psychiatrist and a psychologist. GAO found that the primary obstacle cited by almost all of the agency officials interviewed was the inability of VA and DOD computer systems to communicate and exchange patient health information between departments. VA and DOD medical facilities involved in treating both agencies' patient populations must expend staff resources to enter information on the health care provided into the patient records in both systems. Local VA officials also expressed a concern that security screening procedures have increased the time it takes for VA beneficiaries and their families to gain entry to facilities located on Air Force, Army, and Navy installations during periods of heightened security.
Consumers can enroll in a PPACA qualified health plan offered through a marketplace, or change their previously selected qualified health plan, after the annual open enrollment period concludes if they qualify for an SEP. Under CMS regulations, a consumer may qualify for an SEP due to a specific triggering event, and generally would have up to 60 days after the event to select and enroll in a qualified health plan. Examples of qualifying events include but are not limited to losing minimum essential health coverage of the individual or his or her dependent; gaining a dependent or becoming a dependent through marriage, birth, adoption, placement for adoption, or placement in foster care, or through a child-support order or other court order; gaining access to new qualified health plans as a result of a permanent move; not enrolling during the annual open enrollment period, or other enrollment period for which the consumer qualified, where the failure to enroll was unintentional, inadvertent, or erroneous and is the result of error, misrepresentation, misconduct, or inaction by the Marketplace or its agents; applying for Medicaid or the Children’s Health Insurance Program during the open enrollment period, or other enrollment period for which the consumer qualified, and being determined ineligible after the enrollment period ended; and demonstrating to the marketplace that the individual meets other exceptional circumstances as the marketplace may provide. While PPACA requires marketplaces to verify application information to determine eligibility for enrollment and income-based subsidies—such as verifying U.S. citizenship, nationality, or lawful presence—there is no specific legal requirement to verify the events that trigger an SEP. In particular, federal and state marketplaces are not specifically required to (1) request documents to support an SEP triggering event or (2) authenticate the documents submitted to support an SEP event to determine whether those documents are fictitious. 
According to CMS officials and state officials, consumers that claim eligibility to enroll during an SEP must attest under penalty of perjury that they meet the conditions of eligibility for an SEP. In February 2016, however, CMS announced plans to begin requesting supporting documentation to verify certain events that would trigger an SEP. Specifically, CMS announced its intention to establish a Special Enrollment Confirmation Process in which consumers who enroll or change plans using an SEP through the federal Marketplace will be directed to provide documentation for any of the following triggering events: (1) loss of minimum essential coverage; (2) permanent move; (3) birth; (4) adoption, placement for adoption, placement for foster care or child support or other court order; or (5) marriage. According to the notice, CMS will provide consumers with lists of qualifying documents, such as a birth or marriage certificate. In June 2016, CMS announced that it would begin requesting some supporting documentation beginning on June 17, 2016. State-based marketplaces are not required to follow CMS’s Special Enrollment Confirmation Process, but states may choose to follow this guidance or establish their own processes, according to CMS officials. States may also choose to accept a consumer’s attestation of the SEP triggering event without further verification. For example, according to state officials from Covered California, the state-based marketplace accepts self-attestation and requests supporting documents for a random sample of eligible consumers for certain SEP triggering events. According to officials from the DC Health Benefit Exchange Authority, the state-based marketplace accepts self-attestation for three of the six SEP triggering events we tested. We have previously testified and reported on various aspects of PPACA enrollment controls as part of our ongoing work in this area. 
For example, in July 2014 we testified on our undercover attempts to obtain health-care coverage offered by the federal Marketplace for coverage-year 2014 using fictitious identities and false documentation. We were successful in 11 out of 12 attempts to do so. In October 2015, we testified on similar undercover testing for coverage-year 2015, where we were successful in 17 of 18 attempts. In February 2016, we issued a report addressing CMS enrollment controls and the agency’s management of enrollment-fraud risk. The February 2016 report included eight recommendations, which are discussed below, to strengthen CMS oversight of the Marketplace. In September 2016, we issued two reports and testified about the potential vulnerabilities to fraud in the application, enrollment, and eligibility-verification controls of the federal Marketplace and selected state marketplaces for PPACA’s second and third open enrollment periods, for 2015 and 2016 coverage, respectively. In our February 2016 report, we recommended that the Secretary of Health and Human Services direct the Acting Administrator of CMS to: (1) conduct a feasibility study and create a written plan on actions that CMS can take to monitor and analyze the extent to which data hub queries provide requested or relevant applicant verification information; (2) track the value of enrollee subsidies that are terminated or adjusted for failure to resolve application inconsistencies, and use this information to inform assessments of program risks; (3) regarding cost-sharing subsidies that are terminated or adjusted for failure to resolve application inconsistencies, consider and document whether it would be feasible to create a mechanism to recapture those costs; (4) identify and implement procedures to resolve Social Security number inconsistencies where the Marketplace is unable to verify Social Security numbers or applicants do not provide them; (5) reevaluate CMS’s use of certain incarceration status data and 
determine to either use these data or accept applicant attestation on status in all cases; (6) create a written plan and schedule for providing Marketplace call center representatives with access to information on the current status of eligibility documents submitted to CMS’s documents processing contractor; (7) conduct a fraud-risk assessment, consistent with best practices described in GAO’s framework for managing fraud risks in federal programs, of the potential for fraud in the process of applying for qualified health plans through the federal Marketplace; and (8) fully document prior to implementation, and have readily available for inspection thereafter, any significant decision on qualified health-plan enrollment and eligibility matters, with such documentation to include details such as policy objectives, supporting analysis, scope, and expected costs and effects. In formal comments on a draft of the report, HHS concurred with our recommendations and outlined a number of steps it plans to take to implement them. In an April 2016 letter, HHS described a number of specific actions it had taken in response to our eight recommendations, such as creating an integrated project team to perform the Marketplace fraud-risk assessment. In May 2016, we requested that CMS provide detailed documentation and other evidence to help corroborate the various actions described in the HHS letter. As of November 2016, CMS’s response to this request was pending. Consequently, we consider all eight recommendations to remain open as of November 2016, pending corroborating information. We will continue to monitor HHS’s progress in implementing them. Implementing these recommendations as intended, such as performing the fraud-risk assessment, could help address some of the control vulnerabilities we identified during our SEP tests as well. 
Concerning our current work, the federal or selected state-based marketplaces approved subsidized coverage for 9 of our 12 fictitious applicants seeking coverage during an SEP for 2016. Three of our 12 fictitious applicants were denied. Figure 1 summarizes the outcome of the 12 fictitious applications, which are discussed in greater detail below. The federal and selected state-based marketplaces requested supporting documentation for 6 of our 12 fictitious applicants who initially applied online or by telephone seeking coverage during an SEP. On the basis of our design for the scenario, we provided the federal and selected state-based marketplaces either no supporting documentation or fictitious documentation related to the SEP triggering event. As described below, in some instances we provided fictitious documents to the federal and selected state-based marketplaces to support the SEP triggering event and were able to obtain and maintain subsidized health coverage. Our applicant experiences are not generalizable to the population of applicants or marketplaces. The federal or selected state-based marketplaces approved coverage and subsidies for 9 of our 12 fictitious applicants who initially applied online or by telephone seeking coverage during an SEP, as of October 2016. For these 9 applications, we were approved for APTC subsidies, which totaled about $1,580 on a monthly basis, or about $18,960 annually. These 9 applicants also each were approved for CSR subsidies, putting them in a position to further benefit if they used medical services. However, in our tests, our fictitious applicants did not seek medical services. The federal or state-based marketplaces denied coverage for 3 of our 12 fictitious applicants. Specifically: One applicant stated that the applicant did not receive any response from the marketplace after attempting to enroll in a health plan in a community center in January 2016. 
This fictitious applicant claimed that the applicant had applied for coverage to the federal Marketplace and that the applicant did not discover the applicant was not enrolled in a health plan until June 2016—about 6 months after the applicant’s claimed initial contact with the marketplace. The marketplace representative stated that the applicant needed to follow up with the marketplace and select a health plan within 60 days of the SEP-triggering event, which in this case was misinformation or misrepresentation by a non-Marketplace entity providing enrollment assistance in January of 2016. Under CMS regulations, for this type of triggering event the marketplace may define the length of the SEP as appropriate based on the circumstances, up to a maximum of 60 days. The second fictitious applicant who applied for coverage to the state-based marketplace and was denied coverage claimed that the applicant initially tried to enroll in January 2016 and was told by a certified enrollment counselor that the applicant qualified for a high deductible plan, but did not qualify for a premium tax credit or CSR. When the applicant followed up with the state-based marketplace to obtain additional information, the marketplace representative requested the name of the enrollment counselor with whom the applicant initially applied for health coverage in January 2016. The applicant provided the representative with a fictitious enrollment counselor name and location. The marketplace representative stated that the marketplace’s application notes show that the applicant applied outside open enrollment and informed the applicant that the applicant could submit an appeal to further review the application. The applicant later received a letter from the selected state-based marketplace stating that, based on what the applicant told the marketplace about the event that occurred in June 2016, the applicant did not qualify for a special enrollment period at that time. 
The third fictitious applicant who was denied claimed an inability to apply for coverage during the open enrollment period because the applicant experienced a serious medical condition, had been hospitalized unexpectedly in January 2016, and needed rehabilitation through May 2016. The federal marketplace representative stated that the representative could not enroll the applicant in a health plan outside of open enrollment because the SEP event was an exceptional circumstance and CMS has to approve enrollments based on these types of SEP triggering events. The representative explained that the representative would have to submit an escalation to CMS for our fictitious applicant to be approved to enroll during the SEP. After the application was escalated, the federal marketplace denied this application, and a federal marketplace representative we spoke with stated that the applicant could have applied in November or December, before the unexpected hospitalization. According to CMS officials, the federal marketplace makes eligibility determinations on a case-by-case basis for those applicants who experience an unexpected hospitalization that prevents them from enrolling during the open enrollment period. The federal and selected state-based marketplaces requested supporting documentation for 6 of our 12 fictitious applicants who initially applied online or by telephone seeking coverage during an SEP. The 6 remaining fictitious applicants were not instructed to provide supporting documentation related to the SEP triggering event. As previously mentioned, the federal or state-based marketplaces approved subsidized health-insurance coverage for 9 of our 12 fictitious applicants and denied coverage for 3 of our 12 fictitious applicants. As mentioned, for all 12 fictitious applicants, we submitted supporting documentation related to proof of identity and income, such as a copy of the Social Security card, driver’s license, and self-employment ledger. 
We designed the 12 fictitious applications to provide either no documentation or fictitious documentation related to the SEP event to note any differences in outcomes. As previously mentioned, we used professional judgment to determine what type of documentation we would submit related to the SEP triggering event. For 9 of the 12 applications, GAO provided no documents or fictitious documents to support the SEP triggering event and was able to obtain and maintain subsidized health coverage. Figure 2 summarizes document submissions and outcomes for the 12 fictitious applications for subsidized qualified health-plan coverage during an SEP. Officials from the marketplaces explained that they do not require applicants to submit documentation to support certain SEP triggering events. For other SEP triggering events, CMS officials explained that the standard operating procedure in the federal marketplace is to enroll applicants first and verify documentation to support the SEP triggering event after enrollment. As previously mentioned, there is no specific legal provision that requires federal and state-based marketplaces to verify the events that trigger an SEP, but in February 2016 CMS announced plans to begin a Special Enrollment Confirmation Process, which involves requesting supporting documentation to verify certain events that would trigger an SEP. CMS announced that it would begin requesting supporting documentation from consumers who enroll or change plans using an SEP for selected triggering events on June 17, 2016. We started our testing after June 17, 2016. State-based marketplaces are not required to participate in the CMS Special Enrollment Confirmation Process and may establish their own processes. According to state and federal officials, all applicants that apply for enrollment during an SEP must attest under penalty of perjury that they meet the conditions of eligibility for the SEP. 
However, relying on self-attestation without verifying documents submitted to support an SEP triggering event could allow actual applicants to obtain subsidized coverage they would otherwise not qualify for. Three of our six fictitious applicants to the federal Marketplace claimed eligibility based on an SEP triggering event covered by the CMS Special Enrollment Confirmation Process and were instructed by the federal marketplace to provide supporting documentation to prove eligibility to enroll through the SEP. As of October 2016, the three fictitious applicants that claimed eligibility based on an SEP triggering event covered by the CMS Special Enrollment Confirmation Process are currently enrolled in a subsidized qualified health plan. Two of these three fictitious applicants were asked to submit documents to support their SEP event, but obtained and maintained subsidized health coverage without providing any documentation to support their SEP event. The third of these three applicants submitted fictitious documents supporting the SEP event in response to the federal Marketplace’s request and subsequently obtained and maintained subsidized health coverage. The remaining three of six applicants to the federal Marketplace claimed eligibility based on SEP events that were not covered by the CMS Special Enrollment Confirmation Process and (as such) were not instructed to provide supporting documentation to prove eligibility to enroll through the SEP. For example, one of these three applicants to the federal marketplace claimed to have applied for Medicaid during the annual open enrollment but was subsequently denied Medicaid after open enrollment had closed—which is not an event covered by the process. This applicant did not provide documentation to support this claim, and the applicant obtained subsidized coverage. The remaining two applicants to the federal marketplace were denied, as described previously in this report. 
Two of the six fictitious applicants to the selected state-based marketplaces were instructed by the state-based marketplace to provide supporting documentation proving eligibility to enroll through the SEP. For one of the two fictitious applicants, we did so, and the fictitious applicant is currently enrolled in a qualified health plan. The other applicant was denied, as described previously in this report. The remaining four of our six fictitious applicants to the selected state-based marketplaces were not instructed by the state marketplace to provide supporting documentation to prove eligibility to enroll through the SEP. Two of these four fictitious applicants were able to obtain and maintain subsidized health insurance through the marketplace without providing supporting documentation related to the SEP and are currently enrolled in a qualified health plan. As mentioned, in some instances, we provided fictitious documents to the federal and selected state-based marketplaces to support the SEP triggering event and were able to obtain and maintain subsidized health coverage. After the conclusion of our undercover testing, when we spoke with federal and selected state-based marketplace officials about the outcomes of our fictitious applicants, the federal and selected state-based marketplace officials told us that unless a document appeared visibly altered, they accepted it. For example, one of our fictitious applicants claimed that the applicant was eligible to enroll during the SEP because the applicant recently lost health coverage. In response to our application, the federal marketplace required us to submit documentation to support this SEP triggering event, such as a letter from an employer stating that coverage was terminated and the date the coverage ended. We submitted a fictitious letter from a fictitious employer with a fictitious telephone number indicating our coverage was terminated on a certain date. 
A marketplace representative later told us that the marketplace had received and verified the fictitious supporting documentation we submitted. The fictitious applicant obtained subsidized health coverage and has continued to maintain subsidized coverage to date. In another example, one of our fictitious applicants claimed that the applicant was eligible to enroll during the SEP because the applicant experienced a serious medical condition that prevented the applicant from enrolling in a plan during open enrollment. In response, the marketplace required us to submit documentation to establish our SEP triggering event. Specifically, the marketplace requested a letter from the doctor explaining the nature of the condition that kept us from enrolling during open enrollment. We submitted a fictitious doctor’s note with a fictitious doctor’s name and address, as well as a fake phone number that we could monitor. We were later notified that we had been approved. On the basis of our records, no one called the fake number we provided before we were approved for coverage. The fictitious applicant obtained subsidized health coverage and has continued to maintain subsidized coverage to date. As mentioned, there is no specific legal requirement that federal or state-based marketplaces authenticate the documents submitted to support an SEP event to determine whether those documents are fictitious. However, according to federal and state officials, all applicants that apply for enrollment during an SEP must attest under penalty of perjury that they meet the conditions of eligibility for the SEP. We provided a draft of this report to HHS, Covered California, and the District of Columbia (DC) Health Benefit Exchange Authority. Written comments from HHS, Covered California, and the DC Health Benefit Exchange Authority are summarized below and reprinted in appendixes II–IV, respectively. 
In their written comments, as overall context, these agencies reiterated that they are not required to verify the events that trigger an SEP and instead rely on self-attestation and the associated penalties, which we acknowledge and state in this report. However, prudent stewardship and good management practices suggest that fraud risks be understood and managed to protect public funds. In addition to their formal written comments, all three agencies provided us with technical comments, which we incorporated, as appropriate, in this report. In its written comments, HHS stated that SEPs are a critical way for qualified consumers to obtain health coverage and that it is important that SEPs are not misused or abused. HHS also described several actions it is taking to better understand SEPs, including its efforts as part of the agency’s Special Enrollment Confirmation Process, which we also describe in this report. For example, according to HHS, beginning June 18, 2016, all consumers who complete a Marketplace application for an SEP included in the Special Enrollment Confirmation Process will see in their Eligibility Determination Notice the next steps they must take to prove their SEP eligibility, along with a list of examples of documents they may submit. HHS noted that consumers who do not respond to requests for documentation or do not provide sufficient documentation could be found ineligible for their SEP and may lose coverage. In addition, HHS stated that it is implementing steps to improve program integrity, such as by conducting a fraud risk assessment of the Marketplace consistent with GAO’s fraud risk framework. As mentioned, we believe that performing a fraud-risk assessment consistent with our prior recommendations to HHS in this area could help address the control vulnerabilities we identified during our SEP tests. 
In its written comments, Covered California also stated that it is important to ensure that consumers who enroll in health coverage during an SEP have, in fact, experienced a qualifying life event. Covered California explained that its controls are in compliance with federal standards. Covered California also described processes it has in place to verify certain SEP events, such as reviewing a random sample of consumers who experience either of two qualifying life events: (1) loss of minimum essential coverage and (2) permanent move to and within California, as we describe in this report. Covered California also suggested that any requirement for marketplaces to authenticate documents provided by applicants for an SEP should also consider the burden that document authentication may impose on marketplaces, consumers, and the sources of such documents, such as doctors and insurers. In this regard, we did not evaluate the cost of authenticating such documents as part of our work because it was outside the scope of our review. In its written comments, the DC Health Benefit Exchange Authority described its policies and processes for verifying SEP eligibility by relying on self-attestation or reviewing information provided by consumers and others. Additionally, in its written comments, the DC Health Benefit Exchange Authority raised concerns about our methodology. First, the agency stated that the DC marketplace is too different from other marketplaces to be an informative part of our review. Specifically, the comments state that the age of individuals enrolled in health insurance coverage in the DC marketplace—and enrolled through SEPs in particular—suggests that there is no evidence that consumers are waiting to get sick before enrolling in coverage. We did not evaluate the DC marketplace’s data on its population of enrollees and did not evaluate the DC marketplace’s conclusion that the age of its enrollee population shows that there is no systemic abuse of SEPs. 
Rather, our work focused on testing whether our fictitious applicants could obtain and maintain health coverage during an SEP by submitting fictitious documents or no documents to support our SEP triggering event. The DC marketplace is similar to the federal Marketplace and the other state-based marketplace we selected in that it relies on self-attestation to verify certain SEP events. The written comments from the DC Health Benefit Exchange Authority also expressed concern that the results of our testing are not useful to help improve the agency’s processes because we did not provide specific details of our undercover testing scenarios to the states included in our review. As we noted in our meetings with HHS and both of the state agencies included in our review, we did not provide certain details on our undercover testing scenarios to maintain the integrity of our undercover tests. Specifically, we did not provide details about our undercover tests that could risk inappropriately revealing the identities of our fictitious applicants and preclude the use of such identities in any future reviews. However, we did provide HHS and both of the state agencies with details of the scenarios, including the type of application submitted; the type of documentation we submitted; and the interaction with the marketplace representatives, among other things. Providing additional, specific details about our fictitious identities would not help the agency address any systemic vulnerabilities stemming from their reliance on self-attestation to verify eligibility for an SEP. Further, the DC Health Benefit Exchange Authority commented that our undercover tests are unrealistic because we produced fictitious documents to support our SEP events; that lying under penalty of perjury is a unique ability of our undercover investigators; and that our work assumes a significant number of individuals perjure themselves to access federal funds. 
These statements represent a misunderstanding of our methodology. First, as stated in our report, we used publicly available hardware, software, and materials to produce the counterfeit documents that we submitted for our testing. Using these same tools, potential fraudsters may realistically produce similarly counterfeit documents to support an SEP triggering event. Second, potential fraudsters have the ability—and very possibly the inclination—to lie under penalty of perjury to perpetrate their illegal schemes. Finally, our report makes no statements or assumptions about the number of individuals who perjure themselves to access federal funds. Rather, our report focuses on the results of our undercover testing of enrollment verification for 12 fictitious applications for subsidized health-insurance coverage during an SEP in 2016 to identify potential vulnerabilities in enrollment-verification controls. As mentioned above, prudent stewardship and good management practices suggest that fraud risks be understood and managed to protect public funds. The DC Health Benefit Exchange Authority’s written comments additionally stated that relying on self-attestation is a well-accepted practice in the federal government. The DC Health Benefit Exchange Authority also suggested that any requirement for marketplaces to authenticate documents provided by applicants for an SEP should consider the burden on both the marketplace and consumers. In this regard, we did not evaluate the cost of authenticating such documents as part of our work because it was outside the scope of our review. Further, while federal agencies may rely on self-attestation for certain aspects of their programs, such as those noted in the DC Health Benefit Exchange Authority’s written comments, federal agencies also take steps to verify information needed to determine eligibility for programs and benefits. 
For example, in compliance with the requirements of PPACA and CMS regulations, the federal and state-based marketplaces (including DC Health Link) verify information on applicant citizenship, nationality, or legal presence status by matching applicant data with data from federal agencies rather than relying on self-attestation for this information. Additionally, our prior work has found that relying on self-reported information can leave agencies vulnerable to fraud in some programs. Thus, it would be misleading to characterize reliance on self-attestation for conducting program integrity activities as a generally well-accepted practice in the federal government. Finally, the DC Health Benefit Exchange Authority’s written comments stated that its approach to verifying SEP events is consistent with the best practices in our fraud risk framework. For example, the DC Health Benefit Exchange Authority stated that it has reviewed the characteristics of the DC marketplace, consistent with the principles of our fraud risk framework, and assessed risk to develop appropriate verification procedures. However, we did not review, and are thus not able to corroborate, the DC Health Benefit Exchange Authority’s claim that its enrollment verification controls are consistent with our fraud risk framework. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Acting Administrator of CMS, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made key contributions to this report are listed in appendix V. We were asked to examine and test health-care marketplace enrollment and verification controls for a special enrollment period (SEP) for the 2016 coverage year. This report describes results of undercover attempts to obtain subsidized qualified health-plan coverage outside the open enrollment period for 2016; that is, during an SEP. To perform our undercover testing and describe the results of our undercover attempts to obtain new coverage during an SEP, we used 12 fictitious identities for the purpose of making applications to obtain subsidized qualified health-plan coverage offered through a marketplace by telephone and online. The Patient Protection and Affordable Care Act (PPACA) requires marketplaces to verify application information to determine eligibility for enrollment and, if applicable, determine eligibility for the income-based subsidies or Medicaid. These verification steps include validating an applicant’s Social Security number, if one is provided; verifying citizenship, U.S. nationality, or lawful presence in the United States; and verifying household income. The 12 identities were designed to pass these verification steps by providing supporting documentation—albeit fictitious—such as a copy of the Social Security card, driver’s license, and proof of income. We selected states within the federal Health Insurance Marketplace (Marketplace) and state-based marketplaces for our undercover applications, based on factors including state population, percentage of the state’s population without health insurance, whether the state was selected for testing as part of our prior work, whether the state participates in the state-based marketplace or federal marketplace, and whether the states make the eligibility determination or assessment for other health-coverage programs, including Medicaid. 
Specifically, we selected two states—Virginia and Florida—that elected to use the federal marketplace rather than operate a marketplace of their own. We also selected two state-based marketplaces—Covered California (California) and DC Health Link (District of Columbia). The results obtained using our limited number of fictional applicants are illustrative and represent our experience with applications in the federal and state marketplaces we selected. The results cannot, however, be generalized to the overall population of applicants, enrollees, or marketplaces. Our undercover testing included fictitious applicants claiming to have experienced an event that would trigger eligibility to enroll in health-insurance coverage during an SEP. Specifically, our 12 fictitious applicants claimed to have experienced one of the following selected events that may indicate eligibility, under certain circumstances, to enroll in health coverage under an SEP: (1) lost minimum essential health coverage; (2) gained access to new qualified health plans as a result of a permanent move to another state; (3) gained a dependent through marriage; (4) experienced an exceptional circumstance, such as a serious medical condition that prevented the consumer from enrolling during the annual open enrollment period; (5) unintentionally failed to enroll during the annual open enrollment period as a result of misinformation or misrepresentation by a non-exchange entity providing enrollment assistance or conducting enrollment activities; and (6) had a Medicaid application that was filed during the annual open enrollment period denied after the open enrollment period had closed. We tested the six selected triggering events in the federal marketplace. We also tested the six selected triggering events in the two selected state-based marketplaces. We submitted three applications in each state and the District of Columbia. 
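The resulting test design pairs the six triggering events with the four selected jurisdictions: six applications through the federal Marketplace (Virginia and Florida) and six through the state-based marketplaces (California and the District of Columbia), three per jurisdiction. A minimal sketch of that matrix in Python, assuming an illustrative event-to-state assignment (the report does not publish which event was tested in which state, or which applications were online versus by phone):

```python
from collections import Counter

# The six selected SEP triggering events, as listed above.
TRIGGERING_EVENTS = [
    "loss of minimum essential coverage",
    "permanent move to another state",
    "gained a dependent through marriage",
    "exceptional circumstance (serious medical condition)",
    "misinformation/misrepresentation by a non-exchange entity",
    "Medicaid application denied after open enrollment closed",
]
FEDERAL_STATES = ["VA", "FL"]  # use the federal Marketplace
STATE_BASED = ["CA", "DC"]     # Covered California, DC Health Link

applications = []
# Six federal-Marketplace applications covering all six events,
# alternating between the two federal-marketplace states.
for i, event in enumerate(TRIGGERING_EVENTS):
    applications.append(
        {"marketplace": "federal", "state": FEDERAL_STATES[i % 2], "event": event})
# Six state-based applications covering the same six events,
# alternating between the two state-based marketplaces.
for i, event in enumerate(TRIGGERING_EVENTS):
    applications.append(
        {"marketplace": "state-based", "state": STATE_BASED[i % 2], "event": event})

# 12 applications total, three per jurisdiction, as described above.
per_state = Counter(a["state"] for a in applications)
print(len(applications), dict(per_state))  # 12 {'VA': 3, 'FL': 3, 'CA': 3, 'DC': 3}
```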
We selected these six SEP triggering events to create a balance between three events that are subject to the Centers for Medicare & Medicaid Services (CMS) Special Enrollment Confirmation Process and three events that are not subject to this process. In February 2016, CMS announced plans to begin requesting supporting documentation to verify certain events that would trigger an SEP. CMS announced that it would begin requesting supporting documentation from consumers who enroll or change plans using an SEP for selected triggering events on June 17, 2016. We started our testing after June 17, 2016. State-based marketplaces are not required to follow CMS’s Special Enrollment Confirmation Process, but states may choose to follow this guidance or establish their own processes, according to CMS officials. We initially made 6 of our applications online and 6 by phone; however, for all our fictitious applicant scenarios, we sought to act as an ordinary consumer would in attempting to make a successful application. For example, if, during online applications, we were directed to make phone calls to complete the process or mail or fax the application, we acted as instructed. We also self-attested, when instructed, that the information provided in the application was true. In these tests, we also stated income at a level eligible to obtain both types of income-based subsidies available under PPACA—a premium tax credit, to be paid in advance, and cost-sharing reduction (CSR). As appropriate, in our applications for coverage and subsidies, we used publicly available information to construct our scenarios. We also used publicly available hardware, software, and materials to produce counterfeit documents, which we submitted, as appropriate for our testing, when instructed to do so. We designed the 12 fictitious applications to provide either no documentation or fictitious documentation related to the SEP triggering event. 
We used professional judgment to determine what type of documentation we would submit related to the SEP triggering event. For example, for a marriage scenario, we did not provide a marriage certificate or any documentation including the fictitious spouse’s Social Security number. We did need to submit documentation related to the fictitious applicant’s income in the form of a self-employment ledger and avoided providing information supporting a fictitious spouse. In another example, we had to submit proof of identity to pass the general eligibility requirements for a scenario in which we claimed the applicant permanently moved to another state within the past 60 days. Thus, to avoid submitting documentation related to the SEP triggering event, we submitted a driver’s license, but we ensured it was from the original state and the issuance date was back-dated several years. We did not provide any documentation to support our permanent move to the new state. We did, however, self-attest to the permanent move. We then observed the outcomes of the document submissions, such as any approvals received or requests received to provide additional supporting documentation. For the applications that we were denied, we did not proceed with the appeals process. We conducted our performance audit from May 2016 to November 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objective. We conducted our related investigative work in accordance with investigative standards prescribed by the Council of the Inspectors General on Integrity and Efficiency. 
In addition to the contact named above, Marcus Corbin; Ranya Elias; Colin Fallon; Suellen Foth; Georgette Hagans; Barbara Lewis; Olivia Lopez; Maria McMullen; James Murphy; Jonathon Oldmixon; Gloria Proa; Christopher Schmitt; Julie Spetz; and Elizabeth Wood made key contributions to this report.
Under PPACA, consumers can enroll in health insurance coverage, or change from one qualified health plan to another, through the federal and state marketplaces either (1) during the annual open enrollment period or (2) outside of the open enrollment period, if they qualify for an SEP. A consumer may qualify for an SEP due to specific triggering events, such as a nonvoluntary loss of health-care coverage. CMS reported that 1.6 million individuals made a plan selection through an SEP in 2015. GAO was asked to test marketplace enrollment and verification controls for applicants attempting to obtain coverage during an SEP. This report describes the results of GAO attempts to obtain subsidized qualified health-plan coverage during the 2016 SEP in the federal marketplace and two selected state-based marketplaces—California and the District of Columbia. To perform the undercover testing of enrollment verification, GAO submitted 12 new fictitious applications for subsidized health-insurance coverage outside of the open enrollment period in 2016. GAO's applications tested verifications related to a variety of SEP triggering events. The results cannot be generalized to all enrollees. GAO provided a draft of this report to CMS and the selected state agencies. In their written comments, CMS and the states reiterated that they are not required to verify an SEP event and instead rely on self-attestation. However, prudent stewardship and good management practices suggest that fraud risks be understood and managed to protect public funds. The Patient Protection and Affordable Care Act (PPACA) requires that federal and state-based marketplaces verify application information—such as citizenship or immigration status—to determine eligibility for enrollment in a health plan, potentially including a subsidy. 
However, there is no specific legal requirement to verify the events that trigger a Special Enrollment Period (SEP), which is an opportunity period to allow an individual to apply for health coverage after events such as losing minimum essential coverage or getting married. Prior to the start of GAO's enrollment tests, the Centers for Medicare & Medicaid Services (CMS), which maintains the federal Health Insurance Marketplace (Marketplace), implemented a policy to request that federal Marketplace applicants provide supporting documentation for certain SEP triggering events. According to CMS, ensuring that only qualified applicants enroll during an SEP is intended to prevent people from misusing the system to enroll in coverage only when they become sick. However, relying on self-attestation without verifying documents submitted to support an SEP triggering event, such as those mentioned above, could allow actual applicants to obtain subsidized coverage they would otherwise not qualify for. The federal and selected state-based marketplaces approved health-insurance coverage and subsidies for 9 of 12 of GAO's fictitious applications made during a 2016 SEP. The remaining 3 fictitious applicants were denied. The marketplaces instructed 6 of 12 applicants to provide supporting documentation, such as a copy of a recent marriage certificate, related to the SEP triggering event; the remaining 6 of 12 were not instructed to do so. For 5 applicants, GAO provided no documents to support the SEP triggering event, but coverage was approved anyway. Officials from the marketplaces explained that they do not require applicants to submit documentation to support certain SEP triggering events. For other SEP triggering events, CMS officials explained that the standard operating procedure in the federal Marketplace is to enroll applicants first, and verify documentation to support the SEP triggering event after enrollment. 
The officials also noted that all applicants must attest to their eligibility for enrollment. GAO is not making any recommendations to the Department of Health and Human Services (HHS) in this report. However, GAO made eight recommendations to strengthen PPACA enrollment controls in a February 2016 report; these recommendations included conducting a fraud-risk assessment of the federal marketplace, consistent with the leading practices described in GAO's framework for managing fraud risks in federal programs. In formal comments on a draft of the February report, HHS concurred with the recommendations and outlined a number of steps it planned to take to implement them. In an April 2016 follow-up letter to GAO, HHS described a number of specific actions it had taken in response to the eight recommendations, such as creating an integrated project team to perform the Marketplace fraud-risk assessment. As of November 2016, GAO considers all eight recommendations to be still open, pending corroborating information, and will continue to monitor CMS's progress in implementing them. Implementing these recommendations by actions such as performing the fraud-risk assessment could help address the control vulnerabilities GAO identified during its most recent SEP tests.
Titles XVIII and XIX of the Social Security Act establish minimum requirements that all nursing homes must meet to participate in the Medicare and Medicaid programs, respectively. The Omnibus Budget Reconciliation Act of 1987 focused the requirements on the quality of care actually provided by a home. To help beneficiaries make informed decisions when selecting or evaluating nursing homes, CMS increased the amount of information publicly available on its Nursing Home Compare Web site in 2008 by rating the quality of each nursing home on a five-level scale. To assess whether nursing homes meet federal quality standards, state survey agencies conduct standard surveys, which occur roughly once per year, and complaint investigations. A standard survey involves a comprehensive assessment of quality standards. In contrast, complaint investigations generally focus on a specific allegation regarding resident care or safety made by a resident, family member, or nursing home staff. Federal quality standards focus on the delivery of care, resident outcomes, and facility conditions. These quality standards, totaling approximately 200, are grouped into 15 categories, such as Resident Rights, Quality of Care, Quality of Life, and Resident Behavior and Facility Practices. Nursing homes that meet these quality standards can be certified to participate in Medicare, Medicaid, or both programs. Homes may occasionally change their participation type, or, according to CMS, states may require nursing homes to change their participation type. We refer to this type of change that results in a new provider identification number as a “technical status change.” Such a change may affect the source of payment—Medicare or Medicaid—that the nursing home is eligible to receive. When a technical status change occurs, CMS’s SFF methodology as applied does not incorporate the nursing home’s complete survey history. 
States classify deficiencies identified during either standard surveys or complaint investigations in 1 of 12 categories according to their scope (i.e., the number of residents potentially or actually affected) and severity (i.e., the potential for or occurrence of harm to residents). (See table 1.) An A-level deficiency is the least serious and is isolated in scope, while an L-level deficiency is the most serious and is widespread throughout the nursing home. Nursing homes with deficiencies at the A, B, or C levels are considered to be in substantial compliance with quality standards, whereas nursing homes with D-level through L-level deficiencies are considered noncompliant. For most deficiencies, a home is required to prepare a plan of correction, and depending on the severity of the deficiency, surveyors may conduct a revisit to ensure that the nursing home has implemented its plan and corrected the deficiency. Revisits are not required for most deficiencies below the actual harm level—A through F. As we reported in May 2008, there can be considerable variation among states in the proportion of nursing homes cited for deficiencies at the G through L levels. We concluded that this interstate variation suggests that surveyors in some states are missing some serious deficiencies or understating their scope and severity. We provided examples of such understatement in our May 2008 report. Specifically, we reported that during fiscal years 2002 through 2007, about 15 percent of federal comparative surveys nationwide found that the state surveys had failed to cite at least one deficiency at the most serious levels of noncompliance (G through L levels) and about 70 percent of them found that the state surveys had failed to cite at least one deficiency at the potential for more than minimal harm level (D through F levels). When deficiencies are cited, federal enforcement actions known as sanctions can be imposed to encourage homes to make corrections. 
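The scope-and-severity classification described above can be expressed as a simple lookup. This is an illustrative sketch only; the severity-band labels are paraphrases of the report's descriptions, and the function names are ours, not CMS's:

```python
# Illustrative sketch of the 12 deficiency categories (A through L):
# severity increases down the bands, scope across the columns, so A is the
# least serious and isolated while L is the most serious and widespread.

SCOPES = ["isolated", "pattern", "widespread"]
SEVERITY_BANDS = [
    "potential for minimal harm",            # A-C: substantial compliance
    "potential for more than minimal harm",  # D-F
    "actual harm",                           # G-I
    "immediate jeopardy",                    # J-L
]

def category(severity_index: int, scope_index: int) -> str:
    """Map a (severity band, scope) pair to its letter category."""
    return "ABCDEFGHIJKL"[severity_index * len(SCOPES) + scope_index]

def substantial_compliance(letter: str) -> bool:
    """Only A-, B-, or C-level deficiencies leave a home in substantial compliance."""
    return letter in "ABC"
```

For example, `category(3, 2)` yields "L", the most serious, widespread category, while any D-level or higher deficiency places a home out of substantial compliance.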
Sanctions are generally reserved for serious deficiencies—those at the G through L levels—that constitute actual harm and immediate jeopardy to residents. Sanctions include fines known as civil money penalties, denial of payment for new Medicare or Medicaid admissions, and termination from the Medicare and Medicaid programs. Such sanctions can affect a home’s revenues and therefore provide financial incentives to return to and maintain compliance. CMS requires states to refer for immediate sanction homes that receive at least one G- through L-level deficiency on successive standard surveys or intervening complaint investigations. In addition, a nursing home with one or more deficiencies at the F through L level—but not G level—in Quality of Care, Quality of Life, or Resident Behavior and Facility Practices must be cited for substandard quality of care (SQC), which generally results in the home’s losing its approval to hold in-house or facility-sponsored nurse aide training. Two of CMS’s efforts that identify poorly performing nursing homes are the SFF Program and the Five-Star System. CMS’s Nursing Home Compare Web site identifies nursing homes that are in the SFF Program, provides a rating of from one to five stars, and also includes data on deficiencies cited during standard surveys and complaint investigations, selected quality of care measures, and nurse staffing hours. Both the SFF Program and the Five-Star System score nursing homes by assigning points to deficiencies and the number of revisits, but the points assigned to certain deficiencies differ. CMS compiles a list of SFF candidates for each state generally on a quarterly basis by using the numeric score generated by its SFF methodology. The SFF candidates are those nursing homes with the 15 highest total scores in each state. 
From the candidate list, state officials select, with CMS concurrence, nursing homes they think should participate in the program based on their knowledge of the candidates’ circumstances. With the exception of Alaska, each state has between one and six SFFs, depending on the number of nursing homes in the state. CMS requires states to survey SFFs twice as frequently as other nursing homes to help motivate SFFs to improve. If an SFF meets CMS’s criteria for improved performance, CMS removes the SFF designation and the nursing home “graduates” from the program. According to CMS guidance to states, SFFs that fail to significantly improve after three standard surveys, or about 18 months, may be involuntarily terminated from Medicare and Medicaid. Nursing homes may also choose to terminate from Medicare and Medicaid voluntarily. (See fig. 1.) The SFF methodology assigns points to deficiencies on standard surveys and complaint investigations, and to revisits associated with deficiencies cited on standard surveys, as follows: Deficiencies. More points are assigned to deficiencies that are higher in scope and severity. Additional points are assigned to deficiencies classified as SQC. For example, a nursing home with one J-level deficiency in the Quality of Care category would be assigned 75 points (50 points plus an additional 25 points because the deficiency was SQC). See table 2 for a comparison of the deficiency points assigned by the SFF methodology and the Five-Star System. Revisits. Multiple revisits are an indicator of more serious problems in achieving or sustaining compliance. The points for revisits are as follows: 0 for the first revisit, 50 for the second revisit, an additional 75 (total 125) for the third revisit, and an additional 100 (total 225) for the fourth revisit. 
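The point schedule just described can be sketched as follows. The J-level base of 50 points, the 25-point SQC addition, and the revisit increments all come from the text; the names and structure are ours:

```python
# Revisit points per the text: 0 for the first revisit, 50 for the second,
# an additional 75 for the third (cumulative 125), and an additional 100
# for the fourth (cumulative 225).
REVISIT_INCREMENTS = [0, 50, 75, 100]

def revisit_points(num_revisits: int) -> int:
    """Cumulative revisit points after num_revisits revisits."""
    return sum(REVISIT_INCREMENTS[:min(num_revisits, len(REVISIT_INCREMENTS))])

# The worked deficiency example from the text: a J-level deficiency in the
# Quality of Care category receives 50 base points plus 25 more because it
# is classified as substandard quality of care (SQC).
J_LEVEL_POINTS = 50
SQC_BONUS = 25
```

For example, `revisit_points(4)` returns 225, and the J-level SQC example totals `J_LEVEL_POINTS + SQC_BONUS`, or 75 points.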
CMS has changed the program's scope and methodology over time. From 1999 to 2004, each state had two SFFs at any one time, which they selected from a list of four candidates, and the SFF methodology assigned a different number of points to deficiencies using only about 1 year of deficiency data. In 2005, CMS expanded the program's scope by changing the number of SFFs to between 1 and 6 per state (excluding Alaska), for a total of 136, and altered the SFF methodology by changing the points assigned to deficiencies and using about 3 years of deficiency data, weighted equally. In 2007, CMS began requiring states to notify a nursing home and its other accountable parties (i.e., the nursing home's administrator, owners, operators, and governing body) when the nursing home was designated as an SFF. In 2008, CMS began designating SFFs on the Nursing Home Compare Web site and also changed the scoring methodology to assign weights to each year, such that the most recent year's standard and complaint surveys are given the greatest weight. During the course of our work, CMS implemented its Five-Star System for nursing homes. Every nursing home in the United States is rated from one (much below average) to five (much above average) stars. The Five-Star System provides an overall quality rating based on individual ratings for three separate components: (1) assessment of federal quality standards from standard surveys and complaint investigations, which CMS refers to in the Five-Star System as health inspections; (2) ratings on nursing home staffing levels; and (3) ratings on quality of care measures. In December 2008, CMS's Nursing Home Compare Web site began reporting the star ratings that nursing homes receive for each component of the Five-Star System as well as an overall quality rating. 
According to CMS officials, as of March 2009 the rating for the health inspections component was based on CMS’s SFF methodology, with one variation: the Five-Star System assigns more points to D- through I-level deficiencies than does the SFF methodology. (See table 2.) CMS explained that it changed some of the points assigned to deficiencies in the Five-Star System because the purpose is different from that of the SFF Program. The SFF Program focuses on facilities in each state whose performance is consistently extremely poor, and so it assigns many points to immediate jeopardy deficiencies relative to other, lower-level deficiencies. In contrast, the purpose of the Five-Star System is to distinguish performance across all nursing homes, rather than focus on the poorest performers, and so CMS modified the points to provide more emphasis on deficiencies at the potential for more than minimal harm and actual harm levels. The rating for the second component, staffing data, is based on two elements— total nursing hours per resident day and registered nurse hours per resident day. The rating for the third component of the Five-Star System is based on nursing home performance on 10 quality of care measures, such as percentage of high-risk residents who have pressure sores. We estimated that almost 4 percent—or 580—of the nation’s roughly 16,000 nursing homes could be considered the most poorly performing. These 580 homes overlap somewhat with the 755 SFF Program candidates and the 136 nursing homes actually selected as SFFs. For example, our estimate of 580 most poorly performing nursing homes includes (1) 302, or 40 percent, of the 755 SFF Program candidates as of December 2008 (see fig. 2) and (2) 65 nursing homes that 31 states selected as SFFs from among the SFF Program candidates, or about half of the active SFFs as of February 2009. 
In addition, our estimate resulted in some states having fewer or more poorly performing homes than CMS currently allocates to states under the SFF Program. For example, 10 states each had over 20 of the most poorly performing nursing homes. Indiana had the greatest number, with 52 such nursing homes, or almost 9 percent of the total of 580 homes. Eight states had no such nursing homes. (See fig. 3.) CMS has structured the SFF Program so that every state (except Alaska) has at least one SFF, and therefore the agency applies the SFF methodology to identify the 15 worst performing nursing homes in each state, which are not necessarily the worst performing homes in the nation. We developed an estimate that identified homes with worse compliance histories—more deficiencies at the potential for more than minimal harm level or higher and more revisits—than SFF Program candidates by applying CMS’s SFF methodology on a nationwide basis and using statistical scoring thresholds. These two changes had the greatest impact on the composition of the list of homes we identified as the most poorly performing compared to CMS’s approach. Our estimate also incorporated several refinements to the SFF methodology that moderately improve its ability to identify the most poorly performing nursing homes. Compliance history. Our estimate of 580 nursing homes identified homes with more deficiencies at the potential for more than minimal harm level or higher and more revisits, on average, compared to the 755 SFF Program candidates. For example, the most poorly performing nursing homes averaged 46.5 percent more actual harm–level deficiencies and 19.5 percent more immediate jeopardy–level deficiencies, compared to the 755 SFF Program candidates. (See table 3.) Nationwide estimate. 
We developed a nationwide estimate because the worst performing nursing homes in some states had high total scores from a combination of numerous deficiencies, serious deficiencies, and revisits, while the worst performing nursing homes in other states did not (see fig. 4). For example, in the preceding three cycles, we found that the worst performing nursing home in South Dakota had a score of about 68. The score was composed of 32 deficiencies at the D level or higher, 2 of which were at the actual harm level (where the highest scope and severity level was H) and none of which were at the immediate jeopardy level. In contrast, during the same three cycles, the worst performing nursing home in Tennessee had a score of about 1,512, with 63 deficiencies at the D level or higher. Of these deficiencies, 3 were at the actual harm level and 22 at the immediate jeopardy level (where the highest scope and severity level was L). Even the 15th highest-scoring home in Tennessee, with a score of about 253, had notably more deficiencies at the D level or higher and more severe deficiencies than the highest-scoring home in South Dakota: specifically, the Tennessee home had 63 deficiencies at the D level or higher, 2 of which were at the actual harm level and 8 of which were at the immediate jeopardy level. If CMS applied its SFF methodology to identify the worst 755 homes in the nation rather than the worst 15 in each state, the home ranked 755 would have a score of about 127; however, 48 percent of the SFF Program candidates had scores below this threshold. As a result, the SFF Program is missing some of the worst performing nursing homes in the nation. Statistical scoring thresholds. Absent a fixed number of homes per state, we developed statistical scoring thresholds because there was no natural break point delineating the most poorly performing nursing homes from all other homes. 
The two statistical scoring thresholds we used were conservative, because they focused on chronic poor performance and nonchronic, very poor performance. About 87 percent of the 580 nursing homes that we identified as the most poorly performing exhibited chronic poor performance; that is, they had high scores in at least two of the three cycles measured, as well as a high total score. The remaining roughly 13 percent of nursing homes had nonchronic but very poor performance; that is, they had very serious poor performance in one cycle only, which resulted in a very high total score. Homes that met our chronic poor performance threshold had total scores above the 93rd percentile for all nursing homes, or total scores ranging from approximately 168 to approximately 1,017. All of the nonchronic but very poor performing homes had total scores at or above the 99th percentile for all nursing homes, or total scores ranging from approximately 330 to approximately 1,577. Table 4 summarizes the compliance history of two of the most poorly performing homes identified by our estimate, and appendix II provides a detailed compliance history for these two homes. Additional homes might have been identified as the most poorly performing had we used different thresholds. For example, one nursing home with a total score of about 324 did not meet our definition for chronic poor performance and was below the threshold of 330 for nonchronic, very poor performance. During the three-cycle period, this nursing home had 41 D- through F-level deficiencies, 5 immediate jeopardy deficiencies, and a second revisit that contributed to the score, but most of the deficiencies and the revisit occurred in one cycle. Refinements made to CMS’s SFF methodology. Our three refinements to CMS’s SFF methodology had a moderate effect on the composition of the list of homes we identified as the most poorly performing. Deficiency points. 
We believe that the deficiency points used in the Five-Star System are more appropriate for identifying the most poorly performing nursing homes nationwide than those used in the SFF methodology because they compensate somewhat for understatement and the interstate variation in the citation of serious deficiencies. First, given the significant disparity between immediate jeopardy deficiencies (50 to 150 points) and lower-level deficiencies (2 to 6 points for D- through F-level deficiencies), our use of SFF deficiency points to identify the most poorly performing nursing homes nationwide might have missed poorly performing nursing homes in states with significant understatement. Second, there is considerable interstate variation in the citation of serious deficiencies, including immediate jeopardy–level deficiencies. For example, in 2008, about 11.3 percent of deficiencies were at the immediate jeopardy level in one state, but less than 1.0 percent of deficiencies were cited at that level in 26 states. The Five-Star System, on average, doubles the points assigned to deficiencies below the immediate jeopardy level, giving a D-level deficiency 4 points and a G-level deficiency 20 points, compared to 2 and 10 points, respectively, using the SFF deficiency methodology. As a result, using the Five-Star System deficiency points, homes with numerous D- through I-level deficiencies are more likely to be identified as the most poorly performing. CMS officials told us that they planned to evaluate the effect of using the Five-Star System deficiency points on identifying SFF candidates; our analysis showed that doing so changed the composition of SFF Program candidates by an average of about 2.5 candidates per state. Substandard quality of care. In comparison with the SFF methodology and the Five-Star System, we assigned 5 more points to G-level deficiencies that occurred in any of the three categories of standards that CMS considers to be SQC. 
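The SQC adjustment just described can be sketched as follows, using the SFF base point for a G-level deficiency cited above (10 points); the helper names are ours:

```python
# GAO's refinement: add 5 points to a G-level deficiency cited in one of the
# three categories that CMS considers substandard quality of care (SQC),
# since CMS itself does not classify any G-level deficiency as SQC.

SQC_CATEGORIES = {
    "Quality of Care",
    "Quality of Life",
    "Resident Behavior and Facility Practices",
}
G_LEVEL_SQC_BONUS = 5

def adjusted_points(base_points, level, category):
    """Apply the G-level SQC bonus; all other deficiencies are unchanged."""
    if level == "G" and category in SQC_CATEGORIES:
        return base_points + G_LEVEL_SQC_BONUS
    return base_points
```

With the adjustment, a G-level deficiency in an SQC category (10 + 5 = 15 points under the SFF base points) now outscores an F-level SQC deficiency (10 points), restoring the intended severity ordering.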
As noted earlier, CMS does not classify any G-level deficiencies as SQC. Without this modification, an F-level deficiency in an SQC area is assigned the same number of points as a G- level deficiency even if the G-level deficiency is in an SQC standard—10 and 20 points, respectively, under the SFF or Five-Star System methodologies. (See table 2.) This adjustment was important because approximately 45 percent of all nursing homes had one or more G-level deficiencies in an SQC category during the three cycles used for calculating SFF scores. Therefore, assigning SQC points to G-level deficiencies had an effect on total scores for the nursing homes, which we used to determine the most poorly performing homes nationwide, and would have an effect on the composition of CMS’s SFF candidate list. For example, about 4 percent of the SFF Program candidates—or less than one candidate per state on average—would change if CMS assigned SQC points to G-level deficiencies. Technical status changes. While the SFF methodology does not consider all deficiencies and revisits identified within the three-cycle period that occurred before the nursing home’s technical status change, we incorporated the full histories of nursing homes that underwent a technical status change. At the time of a technical status change, a new provider identification number is assigned, and the nursing home’s complete history under the old number is not combined with that of the new provider number. For example, a nursing home with a status change on January 1, 2008, might have a compliance history for only 1 year at the time we did our work instead of the three cycles called for in the SFF methodology. The SFF scores for nursing homes that have undergone technical status changes within the last three cycles are almost always lower than would be the case if three cycles of deficiency history were included and, therefore, more favorable than would be justified by the complete history. 
We found that almost 1 percent of all nursing homes (148), including 11 of the 580 we identified as the most poorly performing, had a technical status change during the last three cycles that affected their SFF scores. For most states, this change would not have affected their SFF candidate lists. Compared to all other nursing homes, the most poorly performing nursing homes in the nation averaged notably more deficiencies at the D level or higher, more serious deficiencies, and more revisits. They were also more likely to be for-profit and part of a chain and have more beds and residents. In addition, they had an average of almost 24 percent fewer registered nurse hours per resident per day. Compared to all other nursing homes, deficiencies over the last three cycles at the actual harm (G through I) level occurred over 5 times as often, and deficiencies at the immediate jeopardy (J through L) level occurred 15 times as often for the most poorly performing homes. (See table 5.) Furthermore, we found that revisits were made to the most poorly performing nursing homes 6 times as often as to all other nursing homes. The most poorly performing nursing homes were more frequently cited for deficiencies in important care areas and specific standards related to the delivery of care compared to all other nursing homes. Seven of the 10 most frequently cited deficiencies at the immediate jeopardy level involved standards in the categories of care that CMS considers to be SQC and four of the 10 are related to abuse or neglect. For example, about 42 percent of the most poorly performing nursing homes had at least one immediate jeopardy deficiency related to being free of accident hazards in the last three cycles, compared with about 5 percent for all other nursing homes. (See table 6.) 
A larger proportion of the most poorly performing nursing homes were cited for actual harm in each of the three SQC areas—about 90 percent in Quality of Care, about 31 percent in Resident Behavior and Facility Practices, and about 17 percent in Quality of Life. In comparison, a smaller proportion of all other nursing homes were cited for actual harm in those same categories of care—about 42 percent in Quality of Care, about 6 percent in Resident Behavior and Facility Practices, and about 2 percent in Quality of Life. (See app. III.) We also found that from fiscal years 2006 through 2008 the most poorly performing nursing homes were much more likely to have had deficiencies that could have resulted in the imposition of at least one immediate sanction compared to all other nursing homes. For example, in fiscal year 2008, about 33 percent of the most poorly performing homes may have been at risk of having at least one immediate sanction imposed, compared to about 4 percent for all other nursing homes. Nursing homes that receive at least one G- through L-level deficiency on successive standard surveys or complaint investigations must be referred for immediate sanctions, and about 15 percent of the deficiencies for the most poorly performing nursing homes, on average over the last three cycles, were at the actual harm or immediate jeopardy level. We found that the most poorly performing nursing homes differed from all other nursing homes in terms of the proportion of each group that was chain affiliated, for-profit, or both. They also differed in size and nurse staffing. Type of organization. We found that the most poorly performing nursing homes were less likely to be hospital based compared to all other nursing homes. 
Additionally, compared to all other nursing homes, we found that the most poorly performing nursing homes were more likely to be part of for-profit organizations, more likely to be affiliated with a chain organization, and more likely to be both for-profit and affiliated with a chain organization. About 55 percent of the most poorly performing nursing homes were for-profit and chain affiliated, compared to about 41 percent of all other homes. (See table 7.) Participation in Medicare and Medicaid. We found that a higher percentage of the most poorly performing homes participated in both Medicare and Medicaid, and a smaller percentage of such homes participated only in Medicare or only in Medicaid, compared to all other nursing homes. (See table 8.) Beds and residents. We found that a larger percentage of the most poorly performing homes had more than 100 beds, compared to all other nursing homes. On average, the most poorly performing nursing homes had about 23 percent more beds than all other nursing homes. Additionally, our analysis found that on average, the most poorly performing homes had almost 14 percent more residents, a lower occupancy rate, and a greater share of Medicaid patients. (See table 9.) Nurse staffing levels. Compared to all other nursing homes, the most poorly performing homes had almost 24 percent fewer registered nurse hours per resident per day on average. Registered nurse hours also made up a smaller share of total nurse staffing hours at the most poorly performing homes: about 8 percent, compared to about 10 percent in all other nursing homes. (See table 10.) Our estimate of the most poorly performing nursing homes nationwide is more than four times the 136 homes that receive enhanced scrutiny under CMS's SFF Program. 
We believe that our estimate is conservative, because we focused only on those nursing homes with chronic poor performance over time or with very poor performance in one survey cycle. Because of resource constraints, CMS limits the size of the SFF Program, requiring every state except Alaska to select from 1 to 6 homes—an allocation based on the number of nursing homes in each state—from a list of 15 candidates. The homes selected are not necessarily the most poorly performing homes in the nation but rather are among the poorest performers in each state. In contrast, the 580 homes we identified have more deficiencies at the potential for more than minimal harm level or higher and more revisits on average than the 755 homes identified as potential SFF candidates using CMS's SFF methodology. Our estimate also revealed that the state-by-state distribution of the most poorly performing homes nationwide is uneven, calling into question the approach CMS uses to allocate SFFs across states. Furthermore, we believe that CMS's SFF Program and the Five-Star System could be strengthened by incorporating the three enhancements we made to identify the most poorly performing homes nationwide: First, we adopted the deficiency points that CMS developed for its Five-Star System because they compensate somewhat for understatement and the interstate variation in the citation of serious deficiencies, an important consideration for our nationwide estimate of the most poorly performing nursing homes. Currently, CMS uses a different set of deficiency points for the SFF methodology, but agency officials told us that they planned to study the effect of using a common set of numeric points—the Five-Star System deficiency points—for both methodologies. Second, we added points for G-level deficiencies in the three standard areas that CMS considers to be an indication of SQC. 
We found that about 4 percent of the SFF candidates—or less than 1 candidate per state, on average—would change if CMS assigned SQC points to G-level deficiencies when they were cited in an SQC area. Without such an adjustment, an F-level deficiency in an SQC area would receive the same number of deficiency points as a G-level deficiency in the same standard area. Approximately 45 percent of all nursing homes had one or more G-level deficiencies in an SQC category during the three cycles used for calculating SFF scores. Third, we incorporated the full compliance history of homes that underwent technical status changes. For example, a nursing home with a technical status change on January 1, 2008, might have a compliance history for only 1 year at the time we did our work instead of the three cycles called for by the SFF methodology. The SFF scores of homes that have undergone technical status changes within the last three cycles are almost always lower than if all three cycles were considered. We also found that the Five-Star System does not accurately take into consideration technical status changes because it imputes a total score to account for one missing standard survey rather than using actual survey results. To improve the targeting of scarce survey resources, the Administrator of CMS should consider an alternative approach for allocating the 136 SFFs across states, by placing more emphasis on the relative performance of homes nationally rather than on a state-by-state basis, which could result in some states having only one or not any SFFs and other states having more than they are currently allocated. To improve the SFF methodology’s ability to identify the most poorly performing nursing homes, the Administrator of CMS should make the following three modifications: 1. 
Consider using a common set of numeric points for identifying poorly performing nursing homes by determining the effect of adopting those associated with the Five-Star System for the SFF methodology. 2. Assign points to G-level deficiencies in SQC areas equivalent to those additional points assigned to H- and I-level deficiencies in SQC areas. 3. Account for a nursing home’s full compliance history regardless of technical status changes. To ensure consistency with the SFF methodology, CMS should also consider making two of these modifications—the SQC and full compliance history changes—to its Five-Star System. We obtained written comments on our draft report from CMS, which are reprinted in appendix IV. CMS noted that our report adds value regarding the methods that CMS and the nursing home industry should use to address the issue of homes that consistently demonstrate quality of care problems and indicated that the agency would seriously consider all of our recommendations. CMS generally agreed in principle with our recommendations. In response to our first recommendation, CMS noted that it would evaluate a “hybrid” approach that would assign some SFFs using homes’ performance in each state and other SFFs on their relative national ranking. If implemented, CMS’s proposed hybrid approach would address our recommendation that it consider placing more emphasis on the relative performance of homes nationally, which might result in some states having fewer SFFs and others having more than their current allocation. We did not recommend that CMS allocate SFFs solely on the basis of the relative performance of homes nationally, an approach CMS would disagree with according to its comments. CMS agreed in principle with our remaining recommendations—intended to improve the SFF methodology’s ability to identify the most poorly performing nursing homes and ensure its consistency with the agency’s Five-Star System—and noted that it would evaluate the effects of adopting them. 
The agency explained that there might be technical barriers to fully implementing our recommendation that it account for a nursing home’s full compliance history regardless of technical status changes, but noted that it would implement the recommended adjustment to the maximum extent practicable. CMS agreed that although this change would affect a small number of providers, it would improve the accuracy of ratings for those providers. CMS also provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Administrator of the Centers for Medicare & Medicaid Services and appropriate congressional committees. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. This appendix provides a more detailed description of our scope and methodology. To determine the number of most poorly performing nursing homes in the nation and compare their performance to that of homes identified using the Centers for Medicare & Medicaid Services’ (CMS) approach, we began by interviewing agency officials about the Special Focus Facility (SFF) Program and methodology and by reviewing documentation related to the methodology. In addition, we interviewed officials in all 10 CMS regional offices and 14 state survey agencies regarding their impressions of the SFF methodology and also asked the state survey agencies what they consider to be indicators of poor performance. 
To ensure that we calculated the scores for each nursing home consistent with CMS’s SFF methodology, we obtained a copy of the computer programming that CMS used to score and rank nursing homes, verified that our use of CMS’s program generated results that were consistent with output on scores that CMS provided to states, and used the program as the basis for our estimate of the most poorly performing nursing homes in the United States. The SFF methodology creates a total score for each nursing home over three cycles by assigning points to the following data, which we obtained from CMS’s On-Line Survey, Certification, and Reporting system (OSCAR) database: (1) deficiencies cited on the three most recent standard surveys, (2) deficiencies cited on the last 3 years of complaint investigations, and (3) revisits associated with the three most recent standard surveys. Additional points are assigned to deficiencies classified as substandard quality of care (SQC). Each cycle consists of one standard survey, revisits associated with the standard survey, and 12 months of complaint investigations. We extracted these data from OSCAR in December 2008. To learn about methods used to rate nursing home performance, we interviewed officials of two nursing home associations—the American Health Care Association and the American Association of Homes and Services for the Aging—interviewed experts in long-term care research, attended meetings that CMS held to seek input from long-term care researchers on the development of the agency’s Five-Star Quality Rating System (Five-Star System), analyzed information available at CMS’s Providing Data Quickly Web site, reviewed prior GAO reports, and interviewed officials from some states with nursing home rating systems. We also reviewed documentation describing additional approaches to rating nursing home performance. Specifically, we reviewed eight nursing home rating systems, which considered a variety of rating factors. 
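The cycle-based scoring just described can be sketched in code. This is a minimal illustration only: the point values, SQC bonus, and revisit points below are hypothetical placeholders (CMS's actual point tables are not reproduced here), and the function names are ours.

```python
# Illustrative sketch of an SFF-style total score computed over three cycles.
# All point values below are hypothetical placeholders, not CMS's actual tables.

ILLUSTRATIVE_POINTS = {  # scope/severity letter -> points (hypothetical)
    "D": 2, "E": 4, "F": 6, "G": 10, "H": 20, "I": 30, "J": 50, "K": 100, "L": 150,
}
SQC_BONUS = 25       # extra points for a substandard-quality-of-care deficiency (hypothetical)
REVISIT_POINTS = 10  # points per revisit beyond the first (hypothetical)

def cycle_score(deficiencies, revisits):
    """Score one cycle: one standard survey, its revisits, and 12 months of
    complaint deficiencies. deficiencies is a list of (severity_letter, is_sqc)."""
    score = sum(ILLUSTRATIVE_POINTS.get(sev, 0) + (SQC_BONUS if sqc else 0)
                for sev, sqc in deficiencies)
    return score + max(0, revisits - 1) * REVISIT_POINTS

def total_score(cycles, weights=(1, 1, 1)):
    """Sum three cycle scores. CMS began weighting cycle scores in June 2008,
    so a weights tuple is accepted; equal weights are used by default."""
    return sum(w * cycle_score(defs, revs) for w, (defs, revs) in zip(weights, cycles))
```

Under these placeholder values, for example, a cycle with one G-level SQC deficiency, one D-level deficiency, and two revisits would score 10 + 25 + 2 + 10 = 47 points.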
To determine the adequacy of the SFF methodology, we compared the methodology to other compliance-based measures of poor performance and tested the sensitivity of the methodology to variations, such as weighting. We compared the SFF methodology to two other compliance-based measures of poor performance—SQC and immediate sanctions. We found that those nursing homes with the worst total scores in the nation were much more likely to have met the criteria for SQC in the last 1, 2, and 3 years compared to all other nursing homes. Similarly, we found that the same nursing homes were much more likely to have had deficiencies that could have resulted in the imposition of at least one immediate sanction in the last 1, 2, and 3 years compared to all other nursing homes. In addition, we tested the sensitivity of CMS’s SFF methodology to several variations, some of which led us to consider making modifications to the methodology that affected facility scores. For example, in one variation, we modified the SFF methodology so that the cycle scores were no longer weighted—CMS began weighting the cycle scores in June 2008. We concluded from this test that the SFF methodology was sensitive to weighting, which influenced our decision to impose a requirement in our scoring thresholds such that the most poorly performing nursing homes have high scores in at least two of three cycles or a very high score overall. Based on our examinations of the SFF methodology, document review, and interviews, we concluded that the SFF methodology is reasonable and comprehensive because it uses multiple years of data, includes all deficiencies as opposed to a subset of deficiencies, includes deficiencies from standard surveys and complaint investigations, and accounts for the scope and severity of deficiencies and the number of revisits. Furthermore, CMS has refined the SFF methodology over time. 
Because we concluded that estimating the number of the most poorly performing nursing homes on a state-by-state rather than on a national basis would yield inconsistent results, we made our estimate on a national basis. However, we determined that there was no natural break point that differentiated the most poorly performing nursing homes from all other homes. As a result, we investigated several statistical approaches and determined that Tukey’s method was appropriate because the distribution of nursing homes’ total scores is highly skewed. Tukey’s method is meant to identify the extreme ends of the distribution. It labels an observation as a potential outlier if its value is greater than the threshold identified by the following equation:

Potential Outlier Threshold = Q3 + 1.5 * (Q3 – Q1)

where Q3 is the 75th percentile and Q1 is the 25th percentile. The range identified by (Q3 – Q1), called the interquartile range, covers 50 percent of the observations in the center of the distribution. We applied this method by identifying nursing homes that had scores that were above the potential outlier threshold. We then explored several options to identify the most poorly performing nursing homes using Tukey’s method as a basis. For each option, we analyzed the group of resulting nursing homes identified as poor performers and those missed by the thresholds. As the result of our examination of the SFF methodology and prior work in which we classified nursing homes as low, moderately, or high performing, we knew that nursing homes that have serious deficiencies in one year may not demonstrate consistent poor performance—what we term chronic poor performance in this report. Thus, another option we considered was to identify as a poor performer any nursing home that had a total score that (1) was above the potential outlier threshold and (2) was also above the potential outlier threshold for at least two of its three cycle scores. This option identified 507 nursing homes. 
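The Tukey threshold described above is straightforward to compute. As a minimal sketch (using NumPy's percentile routine; the function names are ours):

```python
import numpy as np

def tukey_outlier_threshold(scores):
    """Potential Outlier Threshold = Q3 + 1.5 * (Q3 - Q1), per Tukey's method."""
    q1, q3 = np.percentile(scores, [25, 75])
    return q3 + 1.5 * (q3 - q1)

def potential_outliers(scores):
    """Return the scores that exceed the potential outlier threshold."""
    scores = np.asarray(scores)
    return scores[scores > tukey_outlier_threshold(scores)]
```

Because the threshold is built from the middle 50 percent of the distribution, a heavy right tail (as with nursing home total scores) does not inflate it the way a mean-plus-standard-deviations rule would.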
Because these nursing homes had poor performance in at least two of three cycles as well as high total scores, we concluded that this threshold identified chronic poor performance. However, we found that when we limited the most poorly performing nursing homes to this group of chronic poor performers we missed some nursing homes with very poor performance that was not chronic. Therefore, we established a second threshold to identify those very poor performers—those nursing homes with a total score at or above the 99th percentile (approximately 330). This threshold added another 73 nursing homes. To determine the characteristics of the most poorly performing nursing homes that distinguish them from all other nursing homes, we analyzed deficiencies and revisits from the three most recent cycles—that is, the three most recent standard surveys as of the date of our data extract (December 17, 2008) and any associated revisits, as well as deficiencies cited on complaint investigations conducted 3 years before our data extract. We also analyzed other data that describe the characteristics of nursing homes: a December 17, 2008, extract of other OSCAR variables; case-mix-adjusted nurse staffing hours available from CMS’s Five-Star System, which were dated November 2008; and a list developed by CMS of nursing homes whose deficiency histories could have subjected them to immediate sanctions, which we obtained from CMS in October 2008. Following are highlights of how we analyzed certain characteristics: We calculated the number of nursing homes in each fiscal year that had deficiencies that could have resulted in the imposition of at least one immediate sanction. Nursing homes self-report their ownership type. We created the ownership type of for-profit by combining three categories of for-profit nursing homes designated in CMS’s data (individual, partnership, and corporation) and the category of limited liability corporation. 
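The two-threshold rule described above (chronic poor performance in at least two of three cycles, or a total score at or above the 99th percentile) can be sketched as follows. The function names and the synthetic test values are ours, not GAO's or CMS's:

```python
import numpy as np

def outlier_threshold(x):
    """Tukey's potential outlier threshold: Q3 + 1.5 * (Q3 - Q1)."""
    q1, q3 = np.percentile(x, [25, 75])
    return q3 + 1.5 * (q3 - q1)

def flag_poor_performers(total_scores, cycle_scores):
    """total_scores: shape (n_homes,); cycle_scores: shape (n_homes, 3).
    Flags homes meeting either criterion described in the text."""
    total_scores = np.asarray(total_scores, dtype=float)
    cycle_scores = np.asarray(cycle_scores, dtype=float)
    # Chronic poor performance: total score above the outlier threshold AND
    # at least two of three cycle scores above their own outlier thresholds.
    cycle_thresh = np.array([outlier_threshold(cycle_scores[:, i]) for i in range(3)])
    chronic = (total_scores > outlier_threshold(total_scores)) & \
              ((cycle_scores > cycle_thresh).sum(axis=1) >= 2)
    # Very poor (but possibly not chronic): at or above the 99th percentile.
    very_poor = total_scores >= np.percentile(total_scores, 99)
    return chronic | very_poor
```

Taking the union of the two masks mirrors the approach in the text: the second threshold catches very poor performers that the chronic-performance criterion alone would miss.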
Similarly, we created the ownership type of nonprofit by combining three categories of nonprofit nursing homes (corporation, church related, and other), and the ownership type of government from the six designations made in CMS data (state, county, city, city/county, hospital district, and federal). CMS maintains a variable in its data called multi–nursing home (chain) ownership, which is self-reported by nursing homes and which we refer to as chain affiliation. According to CMS, multi–nursing home chains have two or more homes under one ownership or operation. We determined the percentage of nursing homes that were for-profit and chain affiliated, nonprofit and chain affiliated, or government owned and chain affiliated by combining the ownership type described above with CMS’s designation of multi–nursing home (chain) ownership. We used the number of beds certified for payment for Medicare, Medicaid, or both to calculate the following: the average number of beds per nursing home and the percentage of nursing homes by bed size category (0 to 49, 50 to 99, 100 to 199, and more than 199 beds). We calculated the percentage share of residents by resident type (Medicare, Medicaid, or other) by dividing the number of Medicare, Medicaid, and other patients by the number of total residents. We calculated the occupancy rate by dividing the total number of residents by the number of certified beds. We used certified beds to calculate the occupancy rate instead of total beds because CMS officials told us that certified beds provided more reliable information. We analyzed the following nurse staffing hours, which were case-mix adjusted by CMS for use in its Five-Star System: registered nurse hours per resident per day, licensed practical nurse and vocational nurse hours per resident per day, nurse aide hours per resident per day, and total staffing hours per resident per day. We calculated registered nurse hours as a share of the total. 
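The share and rate computations described above are simple ratios. As a minimal sketch (function and variable names are ours, not CMS's):

```python
def resident_shares(medicare, medicaid, other):
    """Percentage share of residents by type: each count divided by total residents."""
    total = medicare + medicaid + other
    return {"medicare": medicare / total,
            "medicaid": medicaid / total,
            "other": other / total}

def occupancy_rate(total_residents, certified_beds):
    """Occupancy uses certified beds, which CMS officials described as more
    reliable than total beds."""
    return total_residents / certified_beds

def rn_share_of_staffing(rn_hours, total_hours):
    """Registered nurse hours as a share of total staffing hours."""
    return rn_hours / total_hours
```

For example, 90 residents in a home with 120 certified beds yields an occupancy rate of 0.75.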
Unadjusted nurse staffing hours data are collected by CMS, self-reported by nursing homes, and represent staffing levels for a 2-week period before the state inspection. CMS case-mix adjusted the staffing data using the average minutes of nursing care used to care for residents in a given resource utilization group category as reflected in the Medicare skilled nursing facility prospective payment system. CMS acknowledges that the staff hours collected from nursing homes have certain limitations. In order to increase the accuracy and comprehensiveness of the staffing data, CMS has been investigating whether it can use nursing home payroll data to report staffing levels on the Nursing Home Compare Web site. The following table provides the detailed compliance history over three cycles for two of the most poorly performing homes in the nation. The following table provides the percentages of the most poorly performing and all other nursing homes that were cited for actual harm or immediate jeopardy by standards area over three cycles. John E. Dicken, (202) 512-7114 or [email protected]. In addition to the contact named above, Walter Ochinko, Assistant Director; Ramsey Asaly; Daniel Lee; Shannon Slawter Legeer; Jessica Morris; Jessica Nysenbaum; Dae Park; Roseanne Price; Jennifer Rellick; Kathryn Richter; and Jessica Smith made key contributions to this report. Medicare and Medicaid Participating Facilities: CMS Needs to Reexamine Its Approach for Funding State Oversight of Health Care Facilities. GAO-09-64. Washington, D.C.: February 13, 2009. Nursing Homes: Federal Monitoring Surveys Demonstrate Continued Understatement of Serious Care Problems and CMS Oversight Weaknesses. GAO-08-517. Washington, D.C.: May 9, 2008. Nursing Home Reform: Continued Attention Is Needed to Improve Quality of Care in Small but Significant Share of Homes. GAO-07-794T. Washington, D.C.: May 2, 2007. 
Nursing Homes: Efforts to Strengthen Federal Enforcement Have Not Deterred Some Homes from Repeatedly Harming Residents. GAO-07-241. Washington, D.C.: March 26, 2007. Nursing Homes: Despite Increased Oversight, Challenges Remain in Ensuring High-Quality Care and Resident Safety. GAO-06-117. Washington, D.C.: December 28, 2005. Nursing Home Quality: Prevalence of Serious Problems, While Declining, Reinforces Importance of Enhanced Oversight. GAO-03-561. Washington, D.C.: July 15, 2003. Nursing Homes: Public Reporting of Quality Indicators Has Merit, but National Implementation Is Premature. GAO-03-187. Washington, D.C.: October 31, 2002. Nursing Homes: Federal Efforts to Monitor Resident Assessment Data Should Complement State Activities. GAO-02-279. Washington, D.C.: February 15, 2002. Nursing Homes: Sustained Efforts Are Essential to Realize Potential of the Quality Initiatives. GAO/HEHS-00-197. Washington, D.C.: September 28, 2000. Nursing Home Care: Enhanced HCFA Oversight of State Programs Would Better Ensure Quality. GAO/HEHS-00-6. Washington, D.C.: November 4, 1999. Nursing Home Oversight: Industry Examples Do Not Demonstrate That Regulatory Actions Were Unreasonable. GAO/HEHS-99-154R. Washington, D.C.: August 13, 1999. Nursing Homes: Proposal to Enhance Oversight of Poorly Performing Homes Has Merit. GAO/HEHS-99-157. Washington, D.C.: June 30, 1999. Nursing Homes: Complaint Investigation Processes Often Inadequate to Protect Residents. GAO/HEHS-99-80. Washington, D.C.: March 22, 1999. Nursing Homes: Additional Steps Needed to Strengthen Enforcement of Federal Quality Standards. GAO/HEHS-99-46. Washington, D.C.: March 18, 1999. California Nursing Homes: Care Problems Persist Despite Federal and State Oversight. GAO/HEHS-98-202. Washington, D.C.: July 27, 1998.
In 1998, CMS established the Special Focus Facility (SFF) Program as one way to address poor performance by nursing homes. The SFF methodology assigns points to deficiencies cited on standard surveys and complaint investigations, and to revisits conducted to ensure that deficiencies have been corrected. CMS uses its methodology periodically to identify candidates for the program--nursing homes with the 15 worst scores in each state--but the program is limited to 136 homes at any point in time because of resource constraints. In 2008, CMS introduced a Five-Star Quality Rating System that draws on the SFF methodology to rank homes from one to five stars. GAO assessed CMS's SFF methodology, applied it on a nationwide basis using statistical scoring thresholds, and adopted several refinements to the methodology. Using this approach, GAO determined (1) the number of most poorly performing homes nationwide, (2) how their performance compared to that of homes identified using the SFF methodology, and (3) the characteristics of such homes. According to GAO's estimate, almost 4 percent (580) of the roughly 16,000 nursing homes in the United States could be considered the most poorly performing. These 580 homes overlap somewhat with the 755 SFF Program candidates--the 15 worst homes in each state--and the 136 homes actually selected by states as SFFs. For example, GAO's estimate includes 40 percent of SFF Program candidates and about half of the active SFFs as of December 2008 and February 2009, respectively. Under GAO's estimate, however, the most poorly performing homes are distributed unevenly across states, with 8 states having no such homes and 10 others having from 21 to 52 such homes. CMS has structured the SFF Program so that every state (except Alaska) has at least one SFF even though the worst performing homes in each state are not necessarily the worst performing homes in the nation. 
To identify the worst homes in the nation, GAO applied CMS's SFF methodology on a nationwide basis using statistical scoring thresholds and made three refinements to that methodology, which strengthened GAO's estimate. The scoring thresholds were (1) necessary because there were no natural break points that delineated the most poorly performing homes from all other nursing homes and (2) conservative, focusing on chronic poor performance generally over a 2- or 3-year period or very poor performance over about 1 year. The most poorly performing homes identified by GAO averaged over 46 percent more serious deficiencies that caused harm to residents and over 19 percent more deficiencies that placed residents at risk of death or serious injury (immediate jeopardy), compared to the 755 SFF Program candidates identified by CMS's approach. GAO's three refinements to CMS's SFF methodology had a moderate effect on the composition of the list of homes that GAO identified as the most poorly performing. First, deficiency points from CMS's Five-Star Quality Rating System were used because they decreased the disparity between immediate jeopardy and lower-level deficiencies, such as those with the potential for more than minimal harm, which compensates somewhat for the understatement of serious deficiencies in some states. Second, homes received extra points when certain actual harm deficiencies occurred in standards areas that CMS categorizes as substandard quality of care, an important change because GAO found that many homes had at least one such deficiency. Third, the full deficiency history of homes was included. CMS recognizes that its methodology overlooks deficiencies for some homes, which almost always results in scores that are lower than if all deficiencies were included in the scores. 
GAO found that the most poorly performing nursing homes had notably more deficiencies with the potential for more than minimal harm or higher and more revisits than all other nursing homes. For example, the most poorly performing nursing homes averaged about 56 such deficiencies and 2 revisits, compared to about 20 such deficiencies and less than 1 revisit for all other homes. In addition, the most poorly performing homes tended to be chain affiliated and for-profit and have more beds and residents.
Although the American people expect world-class public services and are demanding more of government, the public’s confidence in the government’s ability to address its demands remains all too low. The government’s successful implementation of information technology could improve this confidence. Indeed, according to the Council for Excellence in Government, “Electronic government can fundamentally recast the connection between people and their government. It can make government far more responsive to the will of the people and greatly improve transactions between them. It can also help all of us to take a much more active part in the democratic process.” Government use of Internet-based services is broadening and becoming more sophisticated. In particular, public sector agencies are increasingly turning to the Internet to conduct paperless acquisitions (electronic malls), provide interactive electronic services to the public, and tailor or personalize information. However, the government must still overcome several major challenges to its cost-effective use of information technology. At the beginning of this year we issued a series of reports—our Performance and Accountability Series—devoted to framing the actions needed to support the transition to a more results-oriented and accountable federal government. To the extent that the billions of dollars in planned IT expenditures can be spent more wisely and the management of such technology improved, federal programs will be better prepared to meet mission goals and support national priorities. 
However, we identified seven continuing IT challenges that are key to achieving this goal: strengthening agency information security; improving the collection, use, and dissemination of government information; pursuing opportunities for electronic government; constructing sound enterprise architectures; fostering mature systems acquisition, development, and operational practices; ensuring effective agency IT investment practices; and developing IT human capital strategies. Until these challenges are overcome, agencies are likely to continue to have fundamental weaknesses in their information resources and technology management and practices, which can negatively affect mission performance. Since 1990, we have also periodically reported on government operations that we have assessed as high risk because of their greater vulnerability to waste, fraud, abuse, or mismanagement. In January of this year, in the information resources and technology management area, we designated information security and three agency IT modernization efforts as high risk. We have reported governmentwide information security as high risk since 1997, and the three major modernization efforts since 1995. The federal government’s information resources and technology management structure has its foundation in six laws: the Federal Records Act, the Privacy Act of 1974, the Computer Security Act of 1987, the Paperwork Reduction Act of 1995, the Clinger-Cohen Act of 1996, and the Government Paperwork Elimination Act of 1998. Taken together, these laws largely lay out the information resources and technology management responsibilities of the Office of Management and Budget (OMB), federal agencies, and other entities, such as the National Institute of Standards and Technology. 
In general, under the government’s current legislative framework, OMB has important responsibilities for providing direction on governmentwide information resources and technology management and overseeing agency activities in these areas, including analyzing major agency information technology investments as part of the federal budget process. Among OMB’s responsibilities are ensuring agency integration of information resources management plans, program plans, and budgets for acquisition and use of information technology and the efficiency and effectiveness of interagency information technology initiatives; developing, as part of the budget process, a mechanism for analyzing, tracking, and evaluating the risks and results of all major capital investments made by an executive agency for information systems; directing and overseeing implementation of policy, principles, standards, and guidelines for the dissemination of and access to public information; encouraging agency heads to develop and use best practices in information technology acquisition; reviewing proposed agency information collections to minimize information collection burdens and maximize information utility and benefit; and developing and overseeing implementation of privacy and security policies, principles, standards, and guidelines. Federal departments and agencies, in turn, are accountable for the effective and efficient development, acquisition, and use of information technology in their organizations. 
For example, the Paperwork Reduction Act of 1995 and the Clinger-Cohen Act of 1996 require agency heads, acting through agency CIOs, to better link their information technology planning and investment decisions to program missions and goals; develop and implement a sound information technology architecture; implement and enforce information technology management policies, procedures, standards, and guidelines; establish policies and procedures for ensuring that information technology systems provide reliable, consistent, and timely financial or program performance data; and implement and enforce applicable policies, procedures, standards, and guidelines on privacy, security, disclosure, and information sharing. Another important organization in federal information resources and technology management—the CIO Council—was established by the President in July 1996—shortly after the enactment of the Clinger-Cohen Act. Specifically, Executive Order 13011 established the CIO Council as the principal interagency forum for improving agency practices on such matters as the design, modernization, use, sharing, and performance of agency information resources. The Council, chaired by OMB’s Deputy Director for Management with a Vice Chair selected from among its members, is tasked with (1) developing recommendations for overall federal information technology management policy, procedures, and standards, (2) sharing experiences, ideas, and promising practices, (3) identifying opportunities, making recommendations for, and sponsoring cooperation in using information resources, (4) assessing and addressing workforce issues, (5) making recommendations and providing advice to appropriate executive agencies and organizations, and (6) seeking the views of various organizations. Because it is essentially an advisory body, the CIO Council must rely on OMB’s support to see that its recommendations are implemented through federal information management policies, procedures, and standards. 
With respect to Council resources, according to its charter, OMB and the General Services Administration are to provide support and assistance, which can be augmented by other Council members as necessary. The information issues confronting the government in the new Internet-based technology environment rapidly evolve and carry significant impact for future directions. To effectively address these issues, we believe that the government’s current information resources and technology management framework could be strengthened by establishing a central focal point, such as a federal CIO. Increasingly, the challenges the government faces are multidimensional problems that cut across numerous programs, agencies, and governmental tools. Clearly, departments and agencies should have the primary responsibility and accountability for decisions related to IT investments and spending supporting their missions and statutory responsibilities. But governmentwide issues need a strong catalyst to provide substantive leadership, full-time attention, consistent direction, and priority setting for a growing agenda of government issues, such as critical infrastructure protection and security, e-government, and large-scale IT investments. A federal CIO could serve as this catalyst, working in conjunction with other high-level officials, to ensure that information resources and technology management issues are addressed within the context of the government’s highest priorities and not in isolation from these priorities. During the period of the legislative deliberations on the Clinger-Cohen Act, we supported strengthened governmentwide management through the creation of a formal CIO position for the federal government. In September 2000 we also called for the Congress to consider establishing a formal CIO position for the federal government to provide central leadership and support. 
As we noted, a federal CIO would bring about ways to use IT to better serve the public, facilitate improving access to government services, and help restore confidence in our national government. With respect to specific responsibilities, a federal CIO could be responsible for key functions, such as overseeing federal agency IT activities, managing crosscutting issues, ensuring interagency coordination, serving as the nation’s chief IT spokesman internationally, and maintaining appropriate partnerships with state, local, and tribal governments and the private sector. A federal CIO could also participate in establishing funding priorities, especially for crosscutting e-government initiatives, such as the President’s recently proposed e-government fund (estimated to include $100 million over three years), which is expected to support interagency e-government initiatives. Consensus has not been reached within the federal community on the need for a federal CIO. Department and agency responses to questions developed by the Chairman and Ranking Minority Member of the Senate Committee on Governmental Affairs regarding opinions about the need for a federal CIO revealed mixed reactions. In addition, at our March 2000 Y2K Lessons Learned Summit, which included a broad range of public and private-sector IT managers and policymakers, some participants did not agree or were uncertain about whether a federal CIO was needed. Even individuals or organizations that support a federal CIO disagree on the structure and authorities of this office. For example, as you know, the last Congress considered two proposals to establish a federal CIO: H.R. 4670, the Chief Information Officer of the United States Act of 2000, introduced by Representative Turner, and H.R. 5024, the Federal Information Policy Act of 2000, which you introduced. 
These bills shared a common call for central IT leadership from a federal CIO but they differed in how the roles, responsibilities, and authorities of the position would be established. H.R. 5024 vested in the federal CIO the information resources and technology management responsibilities currently assigned to OMB, as well as oversight of related activities of the General Services Administration and promulgation of information system standards developed by the National Institute of Standards and Technology. On the other hand, H.R. 4670 generally did not change the responsibilities of these agencies; instead, it called on the federal CIO to advise agencies and the Director of OMB and to consult with nonfederal entities, such as state governments and the private sector. Senator Lieberman also plans to introduce an e-government bill, which is expected to include a provision establishing a federal Chief Information Officer. Different federal CIO approaches have also been suggested by other organizations. For example, in February, the Council for Excellence in Government recommended that the President (1) name an Assistant to the President for Electronic Government with cabinet-equivalent rank, who would chair a Public/Private Council on Electronic Government and (2) designate OMB’s Deputy Director for Management as Deputy Director for Management and Technology. The Council also called for the Deputy Director for Management and Technology, in turn, to create an Office of Electronic Government and Information Policy to be headed by a presidentially appointed, senate-confirmed federal CIO. In March, the GartnerGroup—a private research firm—called on the President to appoint a cabinet-level federal CIO within the Executive Office of the President. 
Some key areas that the GartnerGroup stated that the federal CIO should focus on include (1) advising the President on technology-related public policy, (2) developing and implementing federal e-government plans, (3) managing appropriated “seed money” for cross-agency e-government initiatives, and (4) developing standards for e-government interoperability and other IT-related transformation initiatives. CIOs or equivalent positions exist at the state level but no single preferred model has emerged. The specific roles, responsibilities, and authorities assigned to the CIO or CIO-type position vary, reflecting the needs and priorities of the particular government. However, some trends are apparent. Namely, according to the National Association of State Information Resource Executives (NASIRE), half the states have a CIO in place who reports directly to the governor. (Only eight states reported such an arrangement in a 1998 survey.) All but one of the remaining CIOs report to a cabinet-level officer or an IT board. In addition, some state CIOs work in conjunction with an advisory board or commission, and many of them serve as chair of a council of agency-level CIOs. As a former president of the National Association of State Information Resource Executives noted in prior testimony, “IT is how business is delivered in government; therefore, the CIO must be a party to the highest level of business decisions . . . needs to inspire the leaders to dedicate political capital to the IT agenda.” With respect to CIOs’ responsibilities, according to NASIRE, the vast majority of states have senior executives with statewide authority for IT. In addition, state CIOs are usually in charge of developing statewide IT plans and approving statewide technical IT standards, budgets, personnel classifications, salaries, and resource acquisitions, although the CIO’s authority depends on the specific needs and priorities of the governors. 
In some cases, the CIO is guided by an IT advisory board. Examples of the diversity in CIO structures that states reported in 2000 to the Government Performance Project—administered by the Maxwell School of Citizenship and Public Affairs of Syracuse University in partnership with Governing Magazine—are as follows. A model in which the CIO has a strong link to the state’s highest official is Missouri’s Chief Information Officer who reports to the Governor’s office. Missouri’s CIO is responsible for, among other things, IT strategic planning and policy, IT procurement, e-government, and facilitating IT resource sharing across agencies. The CIO is also the liaison representing Missouri on national issues affecting IT functions of the state. Kansas uses a model in which the CIO has multiple reporting responsibilities, including reporting to an IT council and the Governor. The Kansas Chief Information Officer serves as the Executive Branch Chief Information Technology Officer reporting to the Information Technology Executive Council, the Governor, and the Secretary of Administration. The Kansas CIO (1) establishes project management standards, (2) approves bid specifications, (3) approves IT projects over $250,000, (4) reports project status, and (5) manages the Strategic Information Management 3-year plan. Kansas also has Chief Information Technology Officers for its legislative and judicial branches, who also report to the Information Technology Executive Council, as well as to the Legislative Coordinating Council and Office of Judicial Administration, respectively. Finally, in the model used by Michigan, the CIO reports to the head of an executive agency—the Department of Management and Budget. The duties of the Michigan CIO include developing a statewide information technology architecture and standards, developing and managing a statewide telecommunications network, and coordinating and reengineering business processes throughout the state government. 
Certain key principles and success factors can provide insight into the establishment of a successful CIO organization—including at the federal level. In February we issued an executive guide that includes a framework of critical success factors and leading principles (see figure 1). We developed this framework based on interviews with prominent private-sector and state CIOs, as well as other research. Mr. Chairman, what may be of particular interest to this Subcommittee is that CIOs of leading organizations we interviewed described a consistent set of key principles of information management that they believed contributed to the successful execution of their responsibilities. These principles touch on specific aspects of their organizational management, such as formal and informal relationships among the CIO and others, business practices and processes, and critical CIO functions and leadership activities. While focused on the use of CIOs within organizations, many of the principles of the framework are applicable to a federal CIO position. Let me explain some of the key characteristics of the six fundamental principles described by CIOs we interviewed and important parallels that can be made to the establishment of a federal CIO. Recognizing the business transformation potential of IT, executives of leading organizations position their CIOs as change agents with responsibility for applying technology to achieve major improvements in fundamental business processes and operations. With CEO support, the CIOs are in a good position to significantly affect not only IT, but the entire business enterprise. Similarly, it is important that a federal CIO be assigned a prominent role in the government’s decisionmaking to create and set a clear agenda and expectations for how information management and information technologies can be effectively used to help improve government operations and performance.
Differences in corporate missions, structures, cultures, and capabilities preclude a prescriptive approach to information management leadership. Instead, executives in leading organizations ensure that their CIO models are consistent with the business, technical, and cultural contexts of their enterprises. In conjunction with determining their CIO models, senior executives of leading organizations clearly define up front the roles, responsibilities, and accountability of their CIOs for enterprisewide information management, better enabling their CIOs to operate effectively within the parameters of their positions vis-à-vis those of their senior management counterparts (e.g., the CFO and COO). These senior executives also provide their CIOs with the authority they need to effectively carry out their diverse responsibilities. The federal government is large, complex, and diverse. Indeed, many federal departments and agencies easily rival in size and complexity some of our nation’s largest corporations. In addition, virtually all the results that the federal government strives to achieve require the concerted and coordinated efforts of two or more agencies. These are the types of issues that are important to consider when establishing a federal CIO. For example, while it may not be realistic for a federal CIO to have explicit responsibility for agency IT investments, a federal CIO could be an important broker of solutions that require cross-agency cooperation and coordination. CIOs in leading organizations recognize that providing effective information management leadership and vision is a principal means of building credibility for their CIO positions. In addition, CIOs often outline plans of attack or roadmaps to help guide them in effectively implementing short- and long-term strategies. Further, CIOs participate on executive committees and boards that provide forums for promoting and building consensus for IT strategies and solutions.
These types of responsibilities can effectively translate to a federal CIO as well. A federal CIO can help set and prioritize governmentwide IT goals, provide leadership for the governmentwide CIO Council, and actively participate in other advisory organizations, such as the CFO Council, the Procurement Executives Council, and the President’s Information Technology Advisory Committee. While there is no standardized approach to performance measurement, leading organizations strive to understand and measure what drives and affects their businesses and how to best evaluate results. Leading organizations use performance measures that focus on business outcomes such as customer satisfaction levels, service levels, and, in some instances, total requests satisfied. In addition, to properly collect and analyze information, leading organizations develop measurement systems that provide insight into their IT service delivery and business processes. Establishing an information feedback system allows organizations to link activities and functions to business initiatives and management goals. The Government Performance and Results Act is results-oriented legislation that is intended to shift the focus of government decisionmaking, management, and accountability from activities and processes to the results and outcomes achieved by federal programs. A key role for a federal CIO could be to help formulate consensus and direction on performance and accountability measures pertinent to information management in the federal government. Moreover, a federal CIO could help establish goals and measures for major governmentwide efforts, including for the CIO Council, and create a mechanism to report on the government’s progress in meeting these goals. This is a particularly important role since managers at the organizations we studied cautioned that IT performance measurement is in its infancy and measurement techniques are still evolving, partly due to changes in technology. 
In lieu of establishing either completely centralized or decentralized CIO organizations, leading organizations manage their information resources through a combination of such structures. In this hybrid, the CEO assigns central control to a corporate CIO and supporting CIO organization, while delegating specific authority to each business unit for managing its own unique information management requirements. This model is particularly appropriate for the federal government since the Clinger-Cohen Act of 1996 requires executive agencies to appoint CIOs to carry out the IT management provisions of the act and the broader information resources management requirements of the Paperwork Reduction Act. Accordingly, a federal CIO could help ensure overall IT policy direction and oversight for the government, and agency CIOs would be responsible for carrying out these policies, as appropriate for their agencies. In addition, a federal CIO could play a role in suggesting, through formal and informal means, how the government information resources and technology management structure should be organized, with particular emphasis on how such a structure can achieve cross-cutting functionally oriented government services. High-performance organizations have long understood the relationship between effective “people management” and organizational success. Accordingly, we found that leading organizations develop human capital strategies to assess their skill bases and recruit and retain staff who can effectively implement technology to meet business needs. Such strategies are particularly important since studies forecast an ever-increasing shortage of IT professionals, presenting a great challenge for both industry and the federal government. Complicating the issue further, serious concerns are emerging about the aging of the federal workforce, the rise in retirement eligibility, and the effect of selected downsizing and hiring freeze initiatives. 
Since human capital is a governmentwide concern, this is one area in which a federal CIO could have a tremendous impact. Working with the Office of Personnel Management and OMB, the CIO could explore and champion initiatives that would aid agencies in putting in place solid IT workforce management and development strategies. In conclusion, Mr. Chairman, while information technology can help the government provide services more efficiently and at lower costs, many challenges must be overcome to increase the government’s ability to use the information resources at its disposal effectively, securely, and with the best service to the American people. A central focal point such as a federal CIO can serve in the essential role of ensuring that attention to information technology issues is sustained and improves the likelihood that progress is charted and achieved. Although our research has found that there is no one right way to establish a CIO position, critical success factors we found in leading organizations, such as aligning the position for value creation, are extremely important considerations. Finally, the experiences of statewide CIOs offer a rich set of experiences to draw on for ideas and innovation. As a result, it is critical that a federal CIO, as well as agency-level CIOs, develop effective working relationships with state CIOs to discuss and resolve policy, funding, and common systems and technical infrastructure issues. Such relationships are of growing importance as public entities work to establish effective e-government initiatives. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions that you or other members of the Subcommittee may have at this time. For information about this testimony, please contact me at (202) 512-6240 or by e-mail at [email protected]. Individuals making key contributions to this testimony include Felipe Colon, Jr., and Linda Lambert. (310411)
The rapid pace of technological change and innovation has offered unprecedented opportunities for both the government and commercial sectors to use information technology (IT) to improve performance, reduce costs, and enhance service. A range of issues have emerged about how to best manage and integrate complex information technologies and management processes so that they are aligned with mission goals, strategies, and objectives. Although IT can help the government provide services more efficiently and at lower costs, many challenges must be overcome to increase the government's ability to use the information resources at its disposal effectively, securely, and with the best service to the American people. A central focal point such as a federal Chief Information Officer (CIO) can help ensure that attention to IT issues is sustained and increase the likelihood that progress is charted and achieved. Although GAO's research has found that there is no one right way to establish a CIO position, critical success factors GAO found in leading organizations, such as aligning the position for value creation, are extremely important considerations. Finally, the experiences of statewide CIOs offer a rich set of experiences to draw on for ideas and innovation. A federal CIO, as well as agency-level CIOs, must develop effective working relationships with state CIOs to discuss and resolve policy, funding, and common systems and technical infrastructure issues.
CDC has developed several guidelines for hospitals that describe and recommend practices to prevent or control HAIs, such as hand washing or the use of alcohol-based hand rubs, isolation of infected patients, proper sterilization of equipment, provision of antibiotics to patients before surgery, and annual vaccination of health care workers for influenza. Standards from CMS and hospital accrediting organizations provide a means for assessing hospital compliance with infection control standards that are also aimed at preventing or controlling HAIs. CDC issues both guidelines and guidance relevant to infection control and prevention in hospitals. Guidelines are based on scientific evidence, whereas guidance is usually provisional and limited in its supporting evidence. CDC’s infection control and prevention guidelines set forth recommended practices, summarize the applicable scientific evidence and research, and contain contextual information and citations for relevant studies and literature. Most of CDC’s infection control and prevention guidelines are developed in conjunction with HICPAC, an advisory body created in 1992 by the Secretary of HHS. According to its charter, HICPAC provides CDC and the Secretary with (1) advice and guidance on the practice of infection control and strategies for surveillance, prevention, and control of HAIs and related events in health care facilities; and (2) advice on the periodic updating of existing HAI guidelines, the development of new guidelines and evaluations, and other HAI policy statements. HICPAC currently consists of 14 voting members from various infection control disciplines throughout the United States, a designated staff person from CDC, and 15 nonvoting liaison members from government agencies and private organizations. When CDC and HICPAC select a topic for an infection control and prevention guideline, they begin with internal discussions. 
After selecting a topic, HICPAC members and CDC conduct research, identifying and evaluating relevant clinical studies and developing recommended practices, as appropriate. The draft guidelines are written and reviewed by HICPAC members; circulated to outside experts to validate the content; and sent to other federal agencies for review and approval. Afterward, issues raised during review are resolved in face-to-face meetings or conference calls with the HICPAC members who wrote the guideline. The approved document is published in the Federal Register for a 45- to 60-day public comment period, after which comments are reviewed by HICPAC members. CDC publishes the final guideline in its Morbidity and Mortality Weekly Report, on its Web site, or through a professional journal. Hospital compliance with CMS’s or the accrediting organizations’ standards, including those related to infection control, is assessed on a regular basis. Unannounced on-site surveys, conducted by surveyors from CMS or the accrediting organizations, are a major component of that assessment process. CMS gives its standards interpretations primarily in its State Operations Manual, which is arranged by COP. The Joint Commission gives its interpretations in its Comprehensive Accreditation Manual for Hospitals: The Official Handbook, which identifies the rationales and performance expectations used to measure each standard and is organized into 11 chapters of safety and quality standards, such as “Medication Management” and “Leadership.” AOA gives its interpretations in its standards manual, Accreditation Requirements for Healthcare Facilities, which provides its standards along with explanations for surveyors and scoring procedures and is organized into 32 chapters. Based on the information documented during the survey, surveyors from each organization assess a hospital’s compliance with the standards.
Hospitals are required to correct instances of noncompliance found during the survey. CMS’s policy is to survey hospitals every 3 years; however, this policy is contingent on CMS’s budget. In fiscal year 2007, CMS set a goal to survey hospitals on average once every 4.5 years, with no more than 6 years elapsing between surveys for any one hospital. Both the Joint Commission and AOA survey hospitals at least once every 3 years. The Joint Commission has additional components in its standards and survey process. First, it issues National Patient Safety Goals, which are requirements intended to promote specific improvements in patient safety. Officials at the Joint Commission told us that the goals are updated annually and derive primarily from informal recommendations made in the Joint Commission’s safety newsletter, Sentinel Event Alert, recommendations from the Sentinel Event Advisory Group, sentinel events reported to the Joint Commission, and a review of the patient safety literature. The goals target problem areas in health care, such as reducing the risk of patient injury resulting from a fall or encouraging patients’ active involvement in their own care. Each goal is reviewed during the on-site survey to determine compliance with it. Second, the Joint Commission conducts several “tracers” as part of its hospital surveys, during which the care provided to selected patients is followed or “traced” through the hospital in the same sequence in which the patient received it. Other requirements that a hospital must meet to be accredited by the Joint Commission include conducting an annual self-assessment of the hospital’s compliance with the Joint Commission standards and submitting data for selected measures of clinical performance, some of which are related to HAIs.
CDC has 13 guidelines for hospitals on infection control and prevention, and in these guidelines CDC recommends almost 1,200 specific clinical practices to prevent HAIs and related adverse events. The practices generally are sorted into five categories—from strongly recommended for implementation to not recommended—primarily on the basis of the strength of the scientific evidence for each practice. Over 500 practices are strongly recommended. Within HHS, CDC and AHRQ conduct some activities to promote the implementation of recommended practices, but the activities are not based on a clear prioritization of the practices, one that would consider not only the strength of the evidence but also other factors that can affect implementation, such as cost or organizational obstacles. CDC has 13 infection control and prevention guidelines, which contain 1,198 specific clinical practices that CDC recommends for preventing HAIs. (See table 1.) The hand hygiene guideline, for example, strongly recommends that health care workers decontaminate their hands before having direct contact with patients. The number of recommended practices for each guideline varies. For example, the 2003 guideline outlining environmental infection control practices contains 329 recommended practices, whereas the 2006 guideline for influenza vaccination of health care personnel has 6 recommended practices. The earliest of the guidelines, on catheter-associated UTIs, was published in February 1981; the most recent as of December 2007, a revision of the guideline for isolation precautions, was published in June 2007. The practices in these 13 guidelines are categorized primarily based on the strength of the scientific evidence, and these categories have changed over time.
Basing the categories on the strength of the evidence means that the more highly recommended practices have more and better scientific support indicating their effectiveness than those practices that are not as highly recommended. Seven of the guidelines published between 2002 and 2007 used five categories: (1) strongly recommended for implementation and strongly supported by well-designed experimental, clinical, or epidemiological studies; (2) strongly recommended for implementation and supported by some experimental, clinical, or epidemiologic studies and a strong theoretical rationale; (3) suggested for implementation and supported by suggestive clinical or epidemiologic studies; (4) additional practices, including federal, state, and other requirements; and (5) not recommended due to insufficient evidence or lack of consensus regarding efficacy. Over 500 practices in these 7 guidelines fall into one of the two strongly recommended categories. Six of the 7 guidelines identify 82 practices that are not recommended, due to a lack of evidence supporting a recommendation. (See table 2.) For example, the 2003 guideline for preventing health-care-associated pneumonia identifies 45 practices that are not recommended. The four guidelines issued between 1981 and 2000 grouped their recommended practices into three to five categories. The 2003 guideline on smallpox vaccine and the 2005 guideline on mycobacterium tuberculosis contain recommended practices, but they are not categorized. In general, CDC took an average of about 3 years to develop each guideline—ranging from less than 1 year to 6 years. CDC officials agreed that guideline preparation has taken a long time. CDC reported that it has been developing one guideline that is still in draft form—the Guideline for Disinfection and Sterilization in Healthcare Facilities—for over 7 years.
This guideline has taken a long time to develop, in part, according to CDC officials, because the agency had to coordinate with other agencies involved in the oversight of disinfection and sterilization products. CDC officials said they were working to reduce the time it takes to develop guidelines by issuing shorter and more focused guidelines. CDC officials identified some activities that the agency has undertaken to promote the implementation of the recommended practices in its guidelines. CDC disseminates its infection control guidelines by publishing them in the Morbidity and Mortality Weekly Report, posting them on CDC’s Web site, and distributing training videos. CDC has also provided some funding support to groups that are developing ways to implement selected recommendations in CDC infection control guidelines. For example, through its Prevention Epicenter Program, CDC provided financial support and technical assistance to a study that was assessing the effect of an intervention to prevent catheter-associated BSIs. The researchers reviewed participating hospitals’ policies and procedures on a commonly used catheter, updated them to reflect CDC’s Guidelines for the Prevention of Intravascular Catheter-Related Infections, and implemented an intervention designed to educate staff about the importance of implementing a group of selected recommendations in that guideline. In a similar effort, CDC provided technical support and funding to the Pittsburgh Regional Healthcare Initiative, which reportedly has demonstrated a 68 percent decline in BSIs over a 4-year period among intensive care unit patients. AHRQ officials also reported undertaking some initiatives to promote implementation of practices aimed at reducing HAIs. 
In 2007, AHRQ issued a report that evaluated several strategies, such as clinician and patient education, for possible use in hospitals to increase implementation of specified infection prevention practices related to catheterization, surgical antibiotic prophylaxis, central lines, and ventilator-associated pneumonia (VAP) interventions. Although researchers were unable to reach any firm conclusions regarding actionable strategies to prevent HAIs, they identified four strategies worth additional study. In addition, through its Accelerating Change and Transformation in Organizations and Networks program, in September 2007, AHRQ funded several studies to improve the implementation of practices that are known to minimize HAIs and to identify the challenges to implementing those practices. Under the program, 72 hospitals will implement clinician training designed to facilitate change in clinician behaviors and habits, care processes, and the safety culture of the participating hospitals. In a document summarizing this initiative, AHRQ acknowledges that the problem is not the lack of knowledge of infection control techniques, but rather the inability to translate that knowledge into social and behavioral changes that can be sustained in health care organizations. While CDC and AHRQ have taken steps to promote the implementation of practices to reduce HAIs, these steps have not been guided by a prioritization of recommended practices. As WHO has indicated in its hand hygiene guideline, when there is a large number of practices it is important to prioritize them. One factor to consider in prioritization is strength of evidence, which CDC has primarily relied on to categorize its recommended practices. However, a 2001 AHRQ study suggested other factors to consider in prioritizing recommended practices. This study rated 79 patient safety practices—including 22 practices that were related to HAIs—on their potential to improve patient safety.
The study examined not only strength of the evidence, but also such factors as the potential magnitude of impact of the practice on mitigating patient death or disability, the financial cost of implementing the practice, the complexity of implementing the practice, the organizational and technical obstacles, and the risk that other negative consequences could occur if the practice were put into place. In addition to CDC, AHRQ has reviewed scientific evidence for certain practices related to HAIs, but the efforts of the two agencies have not been coordinated. For example, both agencies independently examined various aspects of the evidence related to improving hand hygiene compliance, such as the selection of hand hygiene products and health care worker education. Although this could have been an opportunity for coordination, an official from the HHS Office of the Secretary told us that no one within the office is responsible for coordinating infection control activities across HHS. The infection control standards that CMS, the Joint Commission, and AOA require as part of the hospital certification and accreditation processes vary in number and content among the organizations, and generally describe the fundamental components of a hospital infection control program, that is, the active prevention, control, and investigation of infections. Examples of standards and corresponding standards interpretations that hospitals must follow include educating hospital personnel about infection control and having infection control policies in place. CMS, the Joint Commission, and AOA standards generally do not require that hospitals implement all recommended practices in CDC’s infection control and prevention guidelines. Only the Joint Commission and AOA have standards that require the implementation of certain practices recommended in CDC’s infection control guidelines. 
For example, the Joint Commission and AOA require hospitals to annually offer influenza vaccinations to health care workers, which is recommended in CDC’s Influenza Vaccination of Health Care Personnel guideline. CMS, the Joint Commission, and AOA assess compliance with their infection control standards through direct observation of hospital activities and review of hospital policy documents during on-site surveys. CMS, Joint Commission, and AOA standards for hospital certification and accreditation include standards on infection control. In contrast to CDC’s infection control guidelines, which describe clinical practices recommended to reduce HAIs, the CMS, Joint Commission, and AOA standards and their interpretations—which include the performance expectations and explain the standards—describe the fundamental components of a hospital’s infection control program, the overall goal of which is the prevention, control, and investigation of infections. CMS’s infection control COP, the Joint Commission’s chapter on infection control, and AOA’s chapter on infection control have varying numbers of standards, some of which have been updated more recently than others. (See app. II for CMS’s, Joint Commission’s, and AOA’s infection control standards for hospitals.) CMS’s infection control COP contains two standard-level requirements and has not substantially changed since 1986. CMS’s State Operations Manual: Appendix A provides guidance to surveyors in assessing compliance with the COP and explains its intent. CMS issued revised guidance to surveyors for assessing the infection control COP on November 21, 2007, with an immediate effective date. The Joint Commission has 10 infection control standards in the infection control chapter of its manual, the Comprehensive Accreditation Manual for Hospitals: The Official Handbook. The Joint Commission describes its standards as broad, overarching compliance principles. 
The Joint Commission manual provides hospitals with information about the accreditation process, including how to comply with the 10 standards in the infection control chapter, and presents a rationale for each standard and “elements of performance,” which describe the specific requirements for a hospital to be in compliance with a standard. There are a total of 48 elements of performance associated with the standards in the infection control chapter, ranging from 2 to 8 per standard. In 2006 the Joint Commission began revising its hospital standards, including the infection control standards. These revisions, which the Joint Commission officials described as clarifications to existing standards, will take effect on January 1, 2009. The Joint Commission manual also describes other requirements hospitals must meet to be accredited by the Joint Commission, such as the eight National Patient Safety Goals for 2008, one of which relates to HAIs and requires hospitals to (1) comply with the current WHO hand hygiene guideline or CDC hand hygiene guideline and (2) manage as a “sentinel event” all identified cases of unanticipated death or major permanent loss of function associated with an HAI. AOA has 51 standards in the “Infection Control” chapter of its Accreditation Requirements for Healthcare Facilities manual, which also provides guidance to surveyors in applying AOA’s standards, and these were last updated in 2005. AOA officials also told us they anticipated updating this chapter to reflect CMS’s revised infection control COP guidance. As a whole, the CMS, Joint Commission, and AOA standards and their interpretations describe similar required elements of hospital infection control programs. Similarities include the following: The infection control program is hospitalwide. The hospital designates a person or persons as responsible for the infection control program. The hospital develops policies to control and reduce infections. 
The hospital educates health care personnel, patients, and family members about infection control. The hospital conducts surveillance activities, which include infection-related data collection and analysis. The hospital evaluates the effectiveness of infection control activities and modifies or updates the infection control program as needed. However, there are also differences between the CMS, Joint Commission, and AOA infection control standards and their interpretations. One example is that the CMS and AOA standards specify that the hospital should maintain a log of infections and communicable diseases detected at the hospital, whereas the Joint Commission has several standards whose elements of performance state that hospitals should collect infection control surveillance data. Another difference is the extent to which the standards and their interpretations require implementation of practices recommended in CDC’s infection control guidelines. The CMS, Joint Commission, and AOA standards generally do not require that hospitals implement all recommended practices in CDC’s infection control and prevention guidelines. While CMS’s and the accrediting organizations’ standards interpretations make general references to incorporating guidelines into the hospital’s infection control activities, only the Joint Commission and AOA have standards that require the implementation of certain practices recommended in CDC’s infection control guidelines. The CMS standards interpretations have a more general statement that a hospital with a comprehensive hospitalwide infection control program should adopt policies and procedures based as much as possible on national guidelines. For example: As noted previously, a Joint Commission National Patient Safety Goal requires hospitals to implement selected practices in either CDC’s or WHO’s hand hygiene guideline.
AOA has a standard on hand washing that requires hospitals to have policies and procedures on practices related to hand decontamination and the prevention of HAIs, some of which are also recommended in CDC’s guidelines, such as the elimination of artificial nails for staff working in intensive care units. The CMS standards interpretations are more general, stating that hospitals should adopt policies and procedures based on national guidelines that, among other things, address the mitigation of risks that contribute to HAIs by, for example, promoting hand hygiene among staff and employees, including use of alcohol-based hand sanitizers. Two AOA standards require hospitals to comply with certain practices recommended in CDC’s guidelines that reduce surgical site infections and prevent central venous catheter–related infections. The CMS and Joint Commission standards and their interpretations are not as specific. The CMS standards interpretations state that a hospital with a comprehensive infection control program should adopt policies and procedures that address the mitigation of risk associated with HAIs, including surgery-related infections and device-associated infections. The Joint Commission standards interpretations state that hospitals set goals that include minimizing the risk of transmitting infections associated with the use of procedures, medical equipment, and medical devices and implement methods such as appropriate sterilization techniques to reduce those risks. Both the Joint Commission and AOA standards incorporate recommendations from CDC’s guideline Influenza Vaccination of Health-Care Personnel by requiring hospitals to annually offer influenza vaccinations to health care workers.
In contrast, the CMS standards interpretations are more general, stating that hospitals should adopt policies and procedures that address hospital-staff-related issues, such as evaluating hospital staff immunization status for designated infectious diseases, as recommended by CDC and its Advisory Committee on Immunization Practices. During on-site surveys, CMS, Joint Commission, or AOA surveyors assess compliance with their respective infection control standards by directly observing patient care, interviewing hospital staff, and reviewing key infection control documents, such as the hospital’s infection control plan. In addition, the Joint Commission’s surveyors assess compliance with the infection control standards by conducting an infection control system tracer, which is designed to address a hospital’s overall system for detecting and preventing infections. Joint Commission officials noted that they foster compliance with the practices for reducing HAIs by using a “systems-based” approach. Throughout each on-site survey, CMS, the Joint Commission, and AOA surveyors document noncompliance with the standards that they observe. For example, CMS, Joint Commission, and AOA officials told us that surveyors document observations of poor hand hygiene (e.g., a health care worker not washing his or her hands). Based on the results of the surveys, CMS and the accrediting organizations assess a hospital’s compliance with the infection control standards. CMS, Joint Commission, and AOA surveyors are required to cite all instances of noncompliance. At the end of each survey, CMS surveyors review the observations of noncompliance for each standard and determine whether to cite the hospital at the condition level or the standard level based on the nature (i.e., severity) and extent (i.e., prevalence) of the noncompliance. 
A CMS-surveyed hospital is required to develop a corrective action plan within 10 days of receiving a report documenting the noncompliance found during a survey. The Joint Commission assesses each of the elements of performance that constitute the infection control standards as satisfactory, partially compliant, or insufficient. The entire standard is assessed as not compliant if the hospital has insufficient compliance with any of the corresponding elements of performance or if the hospital is partially compliant with 35 percent or more of the elements of performance. Joint Commission–surveyed hospitals have 45 days from receipt of the survey results to submit a report to the Joint Commission that describes the steps the hospitals took to become compliant with any standards that were assessed as not compliant. The AOA standards are assessed on a scale from 1 to 4, which varies by standard, where 1 indicates full compliance and 4 indicates noncompliance. AOA-surveyed hospitals have 30 days to report to AOA on the steps they took to become compliant with standards assessed as noncompliant that indicate immediate jeopardy or are at the CMS condition level and 60 days to address other standards assessed as noncompliant. Among the surveys conducted in the first quarter of 2007, 12.6 percent of state-agency-surveyed hospitals, 17.6 percent of Joint Commission–surveyed hospitals, and 22.2 percent of AOA-surveyed hospitals were cited as noncompliant with one of the respective organizations’ standards on infection control. Between regular surveys, limited information about compliance with the infection control standards may be identified through validation and complaint surveys of hospitals conducted by state survey agencies. State survey agencies conduct validation surveys for CMS on a small number of Joint Commission–accredited hospitals within 60 days of their last Joint Commission survey and compare the results of the two surveys.
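The Joint Commission scoring rule described above can be expressed as a short decision procedure. The sketch below is illustrative only; the function name and score labels are our own shorthand, not the Joint Commission's actual survey tooling.

```python
# Illustrative sketch of the Joint Commission scoring rule described in the
# text: a standard is assessed as not compliant if any element of performance
# is "insufficient," or if 35 percent or more of its elements are only
# "partially compliant." Names and labels are hypothetical.

def standard_is_compliant(element_scores):
    """element_scores: list of 'satisfactory', 'partial', or 'insufficient'."""
    if "insufficient" in element_scores:
        return False
    partial = element_scores.count("partial")
    if partial / len(element_scores) >= 0.35:
        return False
    return True

# A standard with 6 elements, 2 partially compliant: 2/6 = 33 percent,
# below the 35 percent threshold, so the standard is still compliant.
print(standard_is_compliant(["satisfactory"] * 4 + ["partial"] * 2))  # True
# 3/6 = 50 percent partially compliant: not compliant.
print(standard_is_compliant(["satisfactory"] * 3 + ["partial"] * 3))  # False
```

Note that a single "insufficient" element fails the whole standard regardless of how the remaining elements score.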
For example, in fiscal year 2006, state agencies conducted validation surveys at 67 hospitals. State survey agencies conduct complaint surveys in response to complaints made by patients, family members, or health care providers. In the first quarter of calendar year 2007, state survey agencies conducted 1,119 complaint surveys in 828 hospitals, and infection control deficiencies were found at 3.5 percent of the hospitals. Information about hospital compliance with infection control standards is generally not publicly reported on Web sites, although the Joint Commission reports compliance with its National Patient Safety Goals on its Web site. It reported that in calendar year 2006, 91.2 percent of the hospitals surveyed that year were compliant with the goal related to implementing CDC’s hand hygiene guideline, and 100 percent were compliant with the goal related to managing all identified cases of unanticipated death or major permanent loss of function associated with an HAI as a sentinel event. The rate reported by the Joint Commission in 2006 for adherence to hand hygiene practices was much higher than some studies had reported. For example, in the 2002 Guideline for Hand Hygiene in Health-Care Settings, CDC cited several observational studies of health care workers and reported the average adherence across the studies to be 40 percent. The Joint Commission’s surveyors assess this requirement by interviewing and observing hospital employees and would assess a hospital as noncompliant with the requirement if the surveyors observed noncompliance three or more times. Joint Commission officials acknowledged that their assessment mechanism might not sufficiently measure compliance because hospital staff could be on their best behavior when surveyors were present. 
Joint Commission officials told us they anticipated publishing in 2008 examples of different ways to measure adherence to hand hygiene as well as tools and training materials that hospitals could use to improve their hand hygiene compliance. Three agencies within HHS—CDC, CMS, and AHRQ—currently collect HAI-related data for a variety of purposes in four separate databases, but each of these databases presents only a partial view of the extent of the HAI problem. Each database focuses its data collection on selected types of HAIs and collects data from a different subset of hospital patients across the country. Although officials from the various HHS agencies discuss HAI data collection with each other, we did not find that the agencies were taking steps to integrate any of the existing data by creating linkages across the databases, such as by standardizing patient identifiers or other data items. Creating linkages across the HAI-related databases could enhance the availability of information to better understand where and how HAIs occur. Although none of the databases collect data on the incidence of HAIs for a nationally representative sample of hospital patients, CDC officials have produced national estimates of HAIs. However, those estimates derive from assumptions and extrapolations that raise questions about their reliability. Three agencies within HHS currently collect HAI-related data in four separate databases, which were created for a variety of purposes. These are the databases associated with CDC’s National Healthcare Safety Network (NHSN), CMS’s Medicare Patient Safety Monitoring System (MPSMS), CMS’s Annual Payment Update (APU) program, and AHRQ’s Healthcare Cost and Utilization Project (HCUP). The most detailed source of information on HAIs within HHS is the NHSN database.
CDC established the NHSN database in 2005 to combine the data it had previously collected on HAIs through the National Nosocomial Infections Surveillance (NNIS) system with data from two other related databases. CDC instituted NNIS as a voluntary program in the 1970s to assist hospitals that wanted to monitor their HAI rates. CDC analyzed the data submitted by those hospitals—which tended to be disproportionately large hospitals, many of them academic medical centers—in order to provide the hospitals with a benchmark HAI rate against which to compare their own rates. In addition, CDC drew on these data to publicly report aggregate trends in selected HAIs, and it continues to do that with the data being submitted to the NHSN database. Many of the hospitals that voluntarily participated in the NNIS database have continued to submit HAI data voluntarily to the NHSN database. CDC is working with a number of states implementing mandatory programs for hospitals to submit HAI-related data, using NHSN as the designated mechanism by which hospitals must submit their data. As a result, by the end of December 2007, approximately 1,000 hospitals were enrolled in the NHSN database, some of which continued to participate by choice while others enrolled in the NHSN program because of state mandates. The NHSN program provides hospitals with substantial flexibility to determine the scope of their HAI data collection efforts. Participating hospitals can choose which types of HAIs they will submit data on from among those for which the NHSN program has developed detailed definitions and protocols, including such device-associated infections as central-line-associated BSIs, catheter-associated UTIs, and VAP, as well as procedure-related HAIs such as SSIs and postprocedure pneumonia. 
Hospitals also choose the specific hospital units (typically different kinds of intensive care units) to monitor for device-associated HAIs and the specific surgical procedures to monitor for SSIs and postprocedure pneumonia. Hospital staff are supposed to follow the detailed definitions and protocols that the NHSN program specifies to identify which patients currently under treatment have developed one of the targeted infections. Hospitals also have to provide at least some HAI data for 6 months of the year to maintain their enrollment in the NHSN program. The MPSMS database provides CMS with information on national trends in the incidence of selected adverse events among hospitalized Medicare beneficiaries, including a number of different types of HAIs. Beginning with hospital discharges from 2002, CMS has collected these data from the medical records selected for annual random samples of approximately 25,000 Medicare inpatients, though the list of specific adverse events monitored has varied over time. A CMS contractor receives copies of these medical records after the patients’ discharge from the hospital, and the contractor’s abstractors follow CMS’s detailed protocols to extract and record specific information on each patient in the sample. These data elements are then entered into algorithms that determine which patients meet CMS’s case selection criteria for experiencing the adverse event and for being at risk for the adverse event. For example, the abstractors would determine which of the sampled patients had a central line catheter inserted during that hospital stay and which of those patients had laboratory reports indicating a BSI not present at admission, which together would allow the calculation of the rate of central-line-associated BSIs. 
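The MPSMS case-selection logic described above — identifying the at-risk denominator (patients with a central line) and the numerator (those among them with a BSI not present at admission) — can be sketched as follows. All field names and records are hypothetical; CMS's actual abstraction protocols and algorithms are far more detailed.

```python
# Hypothetical sketch of the MPSMS-style case selection described in the
# text. Among sampled Medicare patients, those with a central line inserted
# during the stay form the at-risk denominator; those among them with a
# laboratory-confirmed BSI not present at admission form the numerator.

sampled_patients = [  # abstracted medical-record data (illustrative)
    {"id": 1, "central_line": True,  "bsi_not_present_at_admission": False},
    {"id": 2, "central_line": True,  "bsi_not_present_at_admission": True},
    {"id": 3, "central_line": False, "bsi_not_present_at_admission": False},
    {"id": 4, "central_line": True,  "bsi_not_present_at_admission": False},
]

at_risk = [p for p in sampled_patients if p["central_line"]]
cases = [p for p in at_risk if p["bsi_not_present_at_admission"]]
rate = len(cases) / len(at_risk)
print(f"central-line-associated BSI rate: {rate:.1%}")  # 1 of 3 at-risk patients
```

Patients without a central line never enter the denominator, which is why the at-risk population must be determined before the rate is computed.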
Since 2004, HHS has publicly reported some of the rates of adverse events from the MPSMS database in the National Healthcare Quality Report and National Healthcare Disparity Report, both of which are issued annually by AHRQ. The APU program implemented a financial incentive for hospitals to submit to CMS data that are used to calculate hospital performance on measures of the quality of care they provide. The APU program receives quality-related data from hospitals on a quarterly basis for a range of medical conditions and, in 2007, began to require submission of information on three specific surgical infection prevention measures. Hospitals paid under Medicare’s inpatient prospective payment system receive a higher rate of payment if they submit these quality data that address their performance on recommended care practices. During fiscal year 2008, 3,270 hospitals will receive this higher level of payment, which represents 93 percent of hospitals eligible to participate in the APU program. For patients who underwent specified surgical procedures, hospital staff review their medical records after discharge and, following detailed protocols from CMS, extract and record items of information that relate to three infection prevention practices that are associated with reduced risks of acquiring an SSI: (1) providing antibiotics within 1 hour of the surgery, (2) selecting appropriate antibiotics to prevent surgical infections, and (3) stopping the administration of the antibiotics within 24 hours of the end of the surgery. This information in turn is entered into algorithms that determine what proportion of patients who met CMS’s criteria for designation as eligible for these infection prevention measures actually received them. CMS publicly reports these results for each hospital individually on its Web site, Hospital Compare, along with state and national averages for comparison. 
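The APU calculation described above — the proportion of eligible surgical patients who actually received each recommended prevention practice — can be sketched as below. The measure keys are our shorthand for the three practices named in the text, not CMS's official measure identifiers, and the records are invented.

```python
# Hypothetical sketch of the APU-style measure calculation described in the
# text: for each surgical infection prevention measure, the proportion of
# eligible patients whose abstracted records show the recommended care was
# received. Keys are shorthand, not official CMS measure IDs.

abstracted = [  # one record per eligible surgical patient (illustrative)
    {"antibiotic_within_1hr": True,  "appropriate_antibiotic": True,  "stopped_within_24hr": True},
    {"antibiotic_within_1hr": False, "appropriate_antibiotic": True,  "stopped_within_24hr": True},
    {"antibiotic_within_1hr": True,  "appropriate_antibiotic": True,  "stopped_within_24hr": False},
    {"antibiotic_within_1hr": True,  "appropriate_antibiotic": False, "stopped_within_24hr": True},
]

for measure in ("antibiotic_within_1hr", "appropriate_antibiotic", "stopped_within_24hr"):
    met = sum(1 for p in abstracted if p[measure])
    print(f"{measure}: {met}/{len(abstracted)} = {met / len(abstracted):.0%}")
```

It is these per-measure proportions, rather than infection counts, that CMS reports for each hospital on Hospital Compare.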
AHRQ sponsored the development of the HCUP databases to create a national information resource of patient-level health care data. One of the HCUP databases assembles a sample of patient hospital discharge data from 37 states and converts them to a uniform format that enables the application of AHRQ’s 20 Patient Safety Indicators (PSI)—including two that relate to HAIs—to an approximate national sample of all hospital patients. The two PSIs related to HAIs involve (1) “selected infections due to medical care,” which focuses on infections caused by intravenous lines and catheters, and (2) postoperative sepsis among patients undergoing elective surgery. The PSIs are designed to identify patient safety issues by using the kinds of data that are available in hospital discharge data sets—specifically International Classification of Diseases, Ninth Revision (ICD-9), diagnostic and procedure codes, as well as patient demographics and admission and discharge status—and can be used with the HCUP database without collecting any additional information from patient medical records. However, these indicators are intended to be used as quality improvement tools to highlight aggregate patterns, and so they do not identify specific instances of adverse events with a high degree of precision. AHRQ has posted national estimates for these two indicators—along with the other PSIs—on its Web site, showing the trend from 1994 to 2004. Two HHS agencies collect, or plan to collect, some limited additional information about HAIs in other HHS databases. FDA obtains data on deaths or serious injuries related to the use of medical devices and stores them in the Manufacturer and User Facility Device Experience Database. A small portion of these adverse events may involve HAIs. FDA uses these data to identify devices whose safety warrants closer scrutiny, such as might be warranted for heart valves that were not properly sterilized by the manufacturer. 
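The PSI approach described above — flagging patient safety events from discharge data using only diagnostic codes, with no additional medical-record review — can be sketched as follows. The code set and records are placeholders; the actual PSI specifications define detailed ICD-9 inclusion and exclusion lists.

```python
# Illustrative sketch of applying a PSI-style indicator to hospital discharge
# data, as described in the text: flag discharges whose secondary diagnosis
# codes fall in the indicator's inclusion set. INFECTION_CODES is a
# hypothetical placeholder, not a real ICD-9 code list.

INFECTION_CODES = {"XXX.1", "XXX.2"}  # placeholder codes, not real ICD-9

discharges = [  # simplified discharge records (illustrative)
    {"id": "a", "secondary_dx": ["YYY.0"]},
    {"id": "b", "secondary_dx": ["XXX.1", "YYY.9"]},
    {"id": "c", "secondary_dx": []},
]

flagged = [d["id"] for d in discharges
           if any(code in INFECTION_CODES for code in d["secondary_dx"])]
print(flagged)  # ['b']
```

Because the indicator sees only what was coded on the bill, it highlights aggregate patterns rather than precisely identifying individual adverse events — the limitation the text notes.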
AHRQ is developing a database on adverse events, including HAIs, that will assemble data voluntarily submitted by hospitals to multiple Patient Safety Organizations (PSO). AHRQ officials told us that they planned to disseminate aggregate results derived from the PSOs in an annual report. Each of the four main HHS databases that currently collect information about HAIs presents only a partial view of the extent of the problem. None of them can provide information on the full range of HAIs, because each focuses its data collection on selected types of HAIs (see table 3). In addition, none of the databases can address the frequency of even these selected HAIs for the nation as a whole, because each collects data from different subsets of the nationwide population of hospital patients. Although two databases—NHSN and MPSMS—address many of the same types of HAIs, the former provides information only from selected units of hospitals that participate in the NHSN program (which do not represent hospitals nationwide) while the latter provides information only on a representative sample of Medicare inpatients (i.e., MPSMS does not provide information on non-Medicare patients). The APU program does not collect information on patients with HAIs, but instead tracks the implementation of practices intended to prevent SSIs. The other three databases attempt to identify patients who developed infections as a result of their hospital stay using different data sources and varying approaches. The methods employed by the NHSN, MPSMS, and HCUP databases range from concurrent review of patient care as patients are treated in the hospital, to retrospective review of patient medical records after patients are discharged, to analyses of diagnostic codes recorded electronically in patient billing data. The four databases also apply different sets of procedures to ensure the validity of their data, and each set has its own limitations. 
For the NHSN program, CDC requires participating hospitals to agree to its detailed instructions for identifying patients with HAIs, but CDC currently has no process in place to check how thoroughly and consistently those instructions are followed. For the MPSMS program, CMS relies on internal procedures performed by a contractor that collects the data to routinely monitor the interrater reliability of its abstractors. However, CMS has not assessed the completeness or accuracy of the information in patient medical records that the MPSMS database measures rely on and how that might affect the HAI rates reported by the MPSMS program. CMS requires hospitals that submit APU data to have a small sample of their cases checked each quarter by a CMS contractor. The contractor assesses the accuracy with which the hospital abstracted its APU data from patient medical records. AHRQ’s HCUP database relies on ICD-9 codes filed with patient bills. Many hospitals have their ICD-9 coding periodically checked by outside auditors, but the reason is to determine accuracy for billing purposes, not whether patients experienced HAIs. Among the four databases, NHSN collects the most clinically detailed information about HAIs, but those data nonetheless have important limitations. Among the strengths of the NHSN database is that it presents detailed information on HAI rates across different types of hospital units and multiple types of HAIs. Moreover, its procedures for identifying patients with HAIs draw on the wider range of clinical information available while patients are still in the hospital, as opposed to retrospective reviews of patient medical records after discharge. On the other hand, the NHSN database is much more limited than any of the other databases in terms of the patient population that it represents. 
Because the hospitals that submit data either do so by choice or, for a limited number of states, by mandate, this group of hospitals is not representative of hospitals nationwide, as a random sample would be. In addition, the data these hospitals supply do not reflect the experience of many of their patients. For example, the hospitals that participate in the NHSN program report device-related HAIs such as central-line-associated BSIs and VAP for selected hospital units such as different types of intensive care units (e.g., coronary, burn, surgical, medical). In addition, most of the hospitals that participate in the NHSN program report procedure-based HAIs such as SSIs and postprocedure pneumonia for a relatively small number of specific procedures. For example, during March 2007, 225 hospitals reported SSIs for colon surgery and 133 did so for coronary bypass surgery, but only 11 hospitals reported SSIs for appendix surgery and 10 for gallbladder surgery. Although officials from the various HHS agencies discuss HAI data collection with each other, we did not find that the agencies were taking steps to integrate any of the existing data from the four databases that collect HAI-related data. This integration could involve creating linkages between existing data by, for example, creating common patient identifiers in the different databases so that data on the same individuals found in multiple databases could be pulled together, or creating “crosswalks” that could specify in detail how related data fields in the various databases are similar or different. We found that the most extensive exchange of information across the three HHS agencies that collect HAI data occurred through the participation of their representatives in HICPAC. HICPAC generally holds 2-day meetings three times per year, and at each meeting the members from the participating HHS agencies typically provide a summary of their HAI-related activities. 
Our review of HICPAC minutes from 2004 through 2007 identified numerous instances of officials describing what their own agency was doing to collect HAI data, but we did not find in the HICPAC meeting minutes any evidence that the agencies had taken action to create greater compatibility among the databases or to address gaps in information across the databases. Outside of HICPAC meetings, HHS officials provided other examples of communication and outreach among HHS agencies taking place in relation to various databases. For example, the MPSMS program has a technical expert panel that includes representatives from CDC and AHRQ. Similarly, CMS, CDC, and AHRQ are represented on the steering committee for the public-private Surgical Care Improvement Project (SCIP), which developed the HAI-related measures used in the APU program. These group discussions allow agency officials to discuss and explain their different approaches for collecting HAI data, but the focus of these meetings is still on the individual database, rather than on creating linkages from one database to another. Creating mechanisms for linking data across the HAI-related databases could enhance the availability of information to better understand where and how HAIs occur. A case in point concerns information collected by two of the databases on surgical-related HAIs. Approximately 500 hospitals already submit data to APU on surgical processes of care and to NHSN on surgical infection rates for some of the same patients, but these data are not currently linked. As a consequence, the potential benefit of using the existing data to monitor the extent to which compliance with the recommended surgical care processes leads to actual improvements in surgical infection rates has not been realized. Officials at CDC reported that they approached CMS about developing mechanisms for linking NHSN data with APU data. To do this, CDC officials suggested that CDC and CMS agree to collect uniform patient identifiers. 
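The kind of linkage CDC proposed — matching APU process-of-care records with NHSN infection outcomes through a uniform patient identifier — might look like the following sketch. All identifiers, field names, and records are hypothetical; as the text notes, no such shared identifier currently exists across the two programs.

```python
# Hypothetical sketch of linking APU process data with NHSN outcome data
# through a shared patient identifier, as discussed in the text.

apu_records = {  # patient_id -> whether recommended surgical care was received
    "p1": {"all_prevention_measures_met": True},
    "p2": {"all_prevention_measures_met": False},
    "p3": {"all_prevention_measures_met": True},
}
nhsn_records = {  # patient_id -> whether an SSI was detected
    "p1": {"ssi": False},
    "p2": {"ssi": True},
    "p3": {"ssi": False},
}

# Join the two data sources on the shared identifier.
linked = {pid: {**apu_records[pid], **nhsn_records[pid]}
          for pid in apu_records.keys() & nhsn_records.keys()}

# With linked data, SSI outcomes can be compared by compliance status —
# the analysis the text says is currently not possible.
compliant = [r for r in linked.values() if r["all_prevention_measures_met"]]
noncompliant = [r for r in linked.values() if not r["all_prevention_measures_met"]]
print(sum(r["ssi"] for r in compliant), "SSIs among", len(compliant), "compliant cases")
print(sum(r["ssi"] for r in noncompliant), "SSIs among", len(noncompliant), "noncompliant cases")
```

The join itself is trivial once identifiers align; the obstacle the report describes is organizational (agreeing on uniform identifiers), not computational.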
Officials at CMS reported that although they recognized the potential benefits of linking the APU data with the data in related HHS databases, CMS is currently focused on managing the expansion of the APU program. HHS cannot use its HAI-related databases to produce reliable national estimates of HAI rates, even for the selected types of HAIs monitored, because none of the databases collect data on the incidence of HAIs for a nationally representative sample of hospital patients. Two of the databases—APU and HCUP—come close to covering a national population for selected HAIs, but the APU database collects data on practices intended to prevent HAIs among surgery patients, not on the number of HAIs that occur. In addition, although the information in HCUP relates to the incidence of some HAIs, its reliance on diagnostic codes recorded in claims data substantially reduces the reliability of that information. The other two databases—NHSN and MPSMS—collect clinical data on the incidence of selected HAIs, but their data do not derive from a representative sample of the national hospital patient population because NHSN is limited to selected units of participating hospitals that do not represent hospitals nationwide and MPSMS is limited to Medicare patients. (See table 3.) Recent concerns about the magnitude of HAIs caused by the drug-resistant pathogen MRSA have further highlighted limitations in HHS’s databases for estimating HAI rates. In June 2007, APIC, the professional association for infection control professionals, released the results of a survey it conducted that showed that 46 of every 1,000 patients in those hospitals had tested positive for MRSA. This was a much higher rate than had previously been estimated by clinicians. 
The NHSN database has some information about the frequency of MRSA infections, as well as other MDROs, but this information is limited to the subset of patients for whom each hospital submits data, based on the particular hospital units, infection types, and procedures that it has chosen to report to NHSN. Thus, the NHSN database does not provide information on the overall proportion of patients in a given hospital who were found to have a MRSA infection. The MPSMS program has begun to collect, but has not yet reported, data on the incidence of hospital-acquired MRSA infections within the Medicare inpatient population. However, a CMS official responsible for the program acknowledged that the ability of the MPSMS program to detect patients with MRSA infections is limited by its reliance on retrospective review of patients’ medical records. The varying content and methods used to collect and report data on HAIs for HHS’s four databases also preclude HHS from combining data from the databases to produce reliable estimates on either selected HAIs or an overall HAI rate. Even the databases that collect data on the same types of HAIs calculate and report rates in different ways that cannot be reconciled. For example, the MPSMS program reported that 1.7 percent of all the Medicare patients that had a central line inserted in 2004 experienced a central-line-associated BSI. In contrast, the NHSN program reported the mean number of central-line-associated BSIs detected during 2006 by different types of intensive care units, calculated as the number of infections per 1,000 days of central line use. This ranged from 1.5 per 1,000 days in inpatient medical/surgical wards to 6.8 per 1,000 days in burn intensive care units. HHS might be able to develop approaches for linking data across its different databases, such as by developing common data collection methods and specifications or creating crosswalks between the specifications for different databases. 
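The two rate definitions contrasted above — MPSMS's percentage of at-risk patients versus NHSN's infections per 1,000 central-line days — can be computed side by side from the same hypothetical data, which makes clear why they cannot be directly reconciled.

```python
# Hypothetical data: each tuple is (central-line days for one patient,
# number of central-line-associated BSIs that patient experienced).
patients = [(3, 0), (10, 1), (7, 0), (20, 1), (5, 0)]

# MPSMS-style rate: percentage of patients with a central line who
# experienced at least one BSI.
patient_rate = sum(1 for _, bsis in patients if bsis > 0) / len(patients)

# NHSN-style rate: BSIs per 1,000 days of central line use.
total_days = sum(days for days, _ in patients)
device_day_rate = 1000 * sum(bsis for _, bsis in patients) / total_days

print(f"{patient_rate:.0%} of at-risk patients infected")        # 40%
print(f"{device_day_rate:.1f} BSIs per 1,000 central-line days")  # 44.4
```

The denominators differ in kind — patients versus device-days — so converting one rate into the other requires knowing the distribution of central-line days per patient, information neither summary statistic preserves.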
However, until that is done, the information on HAI rates from each of the three databases collecting that information stands alone. CDC officials have produced national estimates of HAIs, but those estimates derive from assumptions and extrapolations that raise questions about their reliability. Most recently, in 2007, CDC officials published estimates of the aggregate incidence of HAIs and deaths attributable to HAIs in 2002—which included an estimate of 99,000 HAI-related deaths per year. These estimates rested on two key assumptions. The first assumption was that data from 283 hospitals reporting to the NNIS program (the predecessor program to NHSN) were indicative of hospital rates nationwide, even though the authors acknowledged that the NNIS hospitals were not randomly selected and their rates could differ from those of U.S. acute care hospitals as a whole. The second assumption was that 2002 NNIS data on SSIs could be used to estimate rates for all other types of HAIs, based on the relative frequency of SSIs compared to other types of HAIs observed in a portion of NNIS hospitals during the 1990s. In 2004, CDC officials announced plans for conducting a national survey designed to collect more up-to-date data on hospitalwide incidence of all types of HAIs in a sample of hospital discharges, but they subsequently decided not to proceed with those plans. CDC officials told us they were developing plans to obtain similar data by adding questions on HAIs to the National Hospital Discharge Survey conducted by CDC’s National Center for Health Statistics. CDC officials said they planned to put questions about HAIs into the National Hospital Discharge Survey starting in 2010. However, CDC officials stated that they planned first to pilot test several different approaches for collecting HAI data through the National Hospital Discharge Survey, and it was too early to say what specific information they would collect through this process.
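The extrapolation underlying the second assumption above reduces to simple arithmetic: scale a national SSI estimate up by the share that SSIs are assumed to represent of all HAIs. All numbers below are made up for illustration; they are not CDC's actual figures.

```python
# Hypothetical sketch of the extrapolation described in the text: if SSIs
# are assumed to account for a fixed fraction of all HAIs (based on their
# relative frequency observed in a subset of NNIS hospitals in the 1990s),
# a national SSI estimate can be scaled up to a total-HAI estimate.
# All values are illustrative, not CDC's actual figures.

estimated_national_ssis = 300_000  # hypothetical national SSI estimate
ssi_share_of_all_hais = 0.20       # hypothetical: SSIs assumed to be 20% of HAIs

estimated_total_hais = estimated_national_ssis / ssi_share_of_all_hais
print(f"{estimated_total_hais:,.0f}")  # 1,500,000
```

The result inherits both assumptions the text identifies: that the reporting hospitals resemble hospitals nationwide, and that the 1990s-era SSI share still holds.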
HAIs in hospitals can cause needless suffering and death. Federal authorities and private organizations have undertaken a number of activities to address this serious problem; however, to date, these activities have not gained sufficient traction to be effective. Current activities at the federal level include guidelines with recommended practices issued by CDC, required standards for hospitals set by CMS, and HAI-related data collected through multiple HHS databases. Private-sector organizations, such as the Joint Commission and AOA, have also set infection control standards for hospitals. With the passage of the DRA by the Congress, hospitals will be encouraged to reduce certain HAIs, because beginning in October 2008 CMS will stop making higher payments to hospitals for patients who acquire them. We identified two possible reasons for the lack of effective actions to control HAIs to date. First, although CDC’s guidelines are an important source for its recommended practices on how to reduce HAIs, the large number of recommended practices and the lack of department-level prioritization have hindered efforts to promote their implementation. The guidelines we reviewed contain almost 1,200 recommended practices for hospitals, including over 500 that are strongly recommended—a large number for a hospital trying to implement them. A few of these are required by CMS’s or accrediting organizations’ standards or their standards interpretations, but it is not reasonable to expect CMS or accrediting organizations to require additional practices without a prioritization. Although CDC has categorized the practices on the basis of the strength of the scientific evidence, there are other factors to consider in developing priorities. For example, work by AHRQ suggests that factors such as costs or organizational obstacles could also be considered.
The lack of coordinated prioritization may have resulted in duplication of effort by CDC and AHRQ in their reviews of scientific evidence on HAI-related practices. Second, HHS has not effectively used the HAI-related data it has collected through multiple databases across the department to provide a complete picture of the extent of the problem. Limitations in the databases, such as nonrepresentative samples, hinder HHS’s ability to produce reliable national estimates of the frequency of different types of HAIs. In addition, currently collected data on HAIs are not being combined to maximize their utility. For example, data on surgical infection rates and data on surgical processes of care are collected for some of the same patients in two different databases that are not linked. HHS has made efforts to use the currently collected data to understand the extent of the problem of HAIs, but the lack of linkages across the various databases results in a lost opportunity to gain a better grasp of the problem of HAIs. HHS has multiple methods to influence hospitals to take more aggressive action to control or prevent HAIs, including issuing guidelines with recommended practices, requiring hospitals to comply with certain standards, releasing data to expand information about the nature of the problem, and, soon, using hospital payment methods to encourage the reduction of HAIs. Prioritization of CDC’s many recommended practices can help guide their implementation, and better use of currently collected data on HAIs could help HHS—and hospitals themselves—monitor efforts to reduce HAIs. Unfortunately, the leadership needed from the Secretary of HHS to accomplish this is currently lacking. Without such leadership, the department is unlikely to be able to effectively leverage its various methods to have a significant effect on the suffering and death caused by HAIs. In order to help reduce HAIs in hospitals, the Secretary of HHS should take the following two actions:
1. Identify priorities among CDC’s recommended practices and determine how to promote implementation of the prioritized practices, including whether to incorporate selected practices into CMS’s conditions of participation (COP) for hospitals.
2. Establish greater consistency and compatibility of the data collected across HHS on HAIs to increase the information available about HAIs, including reliable national estimates of the major types of HAIs.
We obtained written comments on our draft report from HHS, which appear in appendix III. HHS generally agreed with our recommendations and noted its appreciation for our efforts in developing this report. The comments addressed both of our recommendations. In terms of our first recommendation, HHS’s comments indicated that CMS welcomed the opportunity to work with CDC to review and prioritize recommendations for infection control and would consider whether to incorporate some of the recommendations into CMS’s hospital COPs. HHS stated that COPs represent minimum health and safety requirements and that the two standards in the infection control COP have a broad reach for assessing a hospital’s infection control program. HHS’s comments also noted that the COPs currently lack the specificity of guidance and recommendations issued by HHS agencies, including CDC’s recommendations for infection control. In terms of our second recommendation, HHS’s comments acknowledged the need for greater consistency and compatibility of data collected on HAIs and identified three actions CMS would take. First, CMS will work with other HHS agencies to evaluate opportunities for consolidating and coordinating national data collection programs. Second, CMS will implement consensus-based measures whenever possible. Third, CMS will require the collection of data that facilitate linkages between databases, including Medicare beneficiary and hospital patient identifiers in the APU program.
HHS’s comments also noted that CDC has recently begun moving toward greater alignment with CMS. HHS’s comments also noted other activities under way that the department believes would improve the collection of HAI-related data. For example, as part of implementing section 5001(c) of the DRA, hospitals are required to begin reporting “present on admission” data—diagnoses that are present in patients at the time of admission—in order to determine whether the selected preventable conditions were acquired prior to the hospitalization. We noted this activity in the report, and we believe that it is too early to know the extent of information that will be generated on HAIs or how it will be used by HHS agencies. HHS’s comments also indicated that CMS is evaluating an update to the diagnostic and procedure coding system, which could offer clearer and more detailed information than the current system, and also noted the benefits of employing industry data standards for electronic health care data exchanges to facilitate reporting of HAI-related data to both CDC and CMS. In our report, we did not assess the effect of these activities because they have not been implemented. We also obtained comments on a draft of this report from representatives of the Joint Commission and AOA. The Joint Commission concurred with our findings that it would be beneficial to have more accurate estimates of HAIs and that prioritization of practices to guide actions in preventing HAIs is a valuable and necessary undertaking. However, it noted that other actions, such as cultural changes in health care organizations, clear strategies for implementation, and a concerted, multifaceted effort by many stakeholders, are needed to reduce HAIs. We agree that such actions are important in reducing HAIs, and that better prioritization of the many recommended practices would facilitate the process the Joint Commission describes. 
The Joint Commission also provided two comments related to the section of the report that discusses hospital infection control standards. First, it commented that our report places too great a focus on the number of standards, and pointed out the benefit of the Joint Commission’s systems-based approach. It expressed a concern that a reader could perceive that the Joint Commission has fewer expectations for hospitals than CMS or AOA. That was not our intention, and we have modified the report to note the Joint Commission’s systems-based approach to foster compliance with practices to reduce HAIs. Second, the Joint Commission said that the report indicates that its standards are less specific in that they have not adopted certain CDC recommendations, but it noted that many of the CDC guidelines cannot be implemented without additional research or translation into concrete, actionable steps. In the draft, we described some activities being undertaken by CDC and AHRQ to promote implementation of recommended practices to reduce HAIs, including studies funded by AHRQ, and we added a clarification to the text to note the importance of translating knowledge into social and behavioral changes that can be sustained. Furthermore, we believe that clearer prioritization can help efforts to promote the implementation of practices to reduce HAIs. HHS, the Joint Commission, and AOA provided technical comments, which we incorporated as appropriate. As arranged with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after its issuance date. At that time, we will send copies of this report to the Secretary of HHS and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. In addition to developing infection control and prevention guidelines and recommendations, the Centers for Disease Control and Prevention (CDC) provides leadership in outbreak investigations, surveillance, laboratory research, and prevention of health-care-associated infections (HAI). According to officials, CDC’s work in the area of outbreak investigations has led to new knowledge on ways to prevent HAIs. For example, in 2006, CDC investigated an outbreak of eye inflammation that was occurring in patients who recently had cataract surgery at a hospital in Maine. The outcome of this investigation led to recommended practices for cleaning and sterilizing intraocular surgical instruments, developed by the American Society of Cataract and Refractive Surgery and the American Society of Ophthalmic Registered Nurses. CDC’s surveillance, research, and demonstration projects measure the effect of HAIs, adverse drug events, and other complications of health care. CDC has funded many activities through its Prevention Epicenter Program, which began in 1997 and is devoted to improving the detection, reporting, and prevention of HAIs, antimicrobial resistance, and other adverse events in health care. For example, CDC funded a multicenter trial research project and found that daily bathing with chlorhexidine, an antiseptic, reduces the incidence of methicillin-resistant Staphylococcus aureus (MRSA), vancomycin-resistant enterococci (VRE), and bloodstream infection (BSI). 
In addition, CDC has collaborated with three public hospitals in Chicago to develop a clinical data warehouse using the hospitals’ information systems, which enabled the hospitals to develop a series of quality improvement strategies to decrease antimicrobial resistance and improve antibiotic prescribing and infection control practices. Finally, CDC provides direct support and assistance to external groups involved in many HAI prevention activities. CDC has funded and collaborated with the Pittsburgh Veterans Affairs Medical Center to reduce MRSA infections by more than 60 percent in its health care units. The success of this project has led CDC and the Department of Veterans Affairs to initiate similar efforts across all VA hospitals. In addition, CDC is represented on the Surgical Care Improvement Project (SCIP) steering committee. SCIP is a national public-private partnership to reduce surgical complications that is sponsored by the Centers for Medicare & Medicaid Services. CDC officials told us that they have worked with SCIP to develop quality measures and market the project. Finally, CDC has provided technical assistance to the Institute for Healthcare Improvement, a not-for-profit organization working to improve global health care, in the development of the institute’s hand hygiene “bundle” and MRSA infection prevention “bundle” guides. The conditions of participation (COP) for hospitals, including the infection control COP as well as the survey protocols and interpretive guidelines that accompany the COPs, are contained in Appendix A of CMS’s State Operations Manual. CMS issued revised interpretive guidelines for the infection control COP on November 21, 2007. The infection control COP states: The hospital must provide a sanitary environment to avoid sources and transmission of infections and communicable diseases. There must be an active program for the prevention, control, and investigation of infections and communicable diseases. (a) Standard: Organization and policies. 
A person or persons must be designated as infection control officer or officers to develop and implement policies governing control of infections and communicable diseases. (1) The infection control officer or officers must develop a system for identifying, reporting, investigating, and controlling infections and communicable diseases of patients and personnel. (2) The infection control officer or officers must maintain a log of incidents related to infections and communicable diseases. (b) Standard: Responsibilities of chief executive officer, medical staff, and director of nursing services. The chief executive officer, the medical staff, and the director of nursing services must (1) Ensure that the hospital-wide quality assurance program and training programs address problems identified by the infection control officer or officers; and (2) Be responsible for the implementation of successful corrective action plans in affected problem areas. In addition, CMS officials said that the quality assessment and performance improvement COP, which can be found at 42 C.F.R. § 482.21 (2007), can also affect infection control. In addition to the contact named above, key contributors to this report were Linda T. Kohn, Assistant Director; Donald Brown; Shaunessye Curry; Shannon Slawter Legeer; Eric Peterson; Roseanne Price; and Keisha Wilkerson.
According to the Centers for Disease Control and Prevention (CDC), health-care-associated infections (HAI) are estimated to be 1 of the top 10 causes of death in the United States. HAIs are infections that patients acquire while receiving treatment for other conditions. GAO was asked to examine (1) CDC's guidelines for hospitals to reduce or prevent HAIs and what the Department of Health and Human Services (HHS) does to promote their implementation, (2) Centers for Medicare & Medicaid Services' (CMS) and hospital accrediting organizations' required standards for hospitals to reduce or prevent HAIs and how compliance is assessed, and (3) HHS programs that collect data related to HAIs and integration of the data across HHS. GAO reviewed documents and interviewed officials from CDC, CMS, the Agency for Healthcare Research and Quality (AHRQ), and accrediting organizations. CDC has 13 guidelines for hospitals on infection control and prevention, which cover a variety of topics, and in these guidelines CDC recommends almost 1,200 practices for implementation to prevent HAIs and related adverse events. Most of the practices are sorted into five categories--from strongly recommended for implementation to not recommended--primarily on the basis of the strength of the scientific evidence for each practice. Over 500 practices are strongly recommended. CDC and AHRQ have conducted some activities to promote implementation of recommended practices, but these activities are not based on a clear prioritization of the practices. Prioritization may consider not only the strength of the evidence, but also other factors that can affect implementation, such as cost and organizational obstacles. In addition to CDC, AHRQ has reviewed scientific evidence for certain HAI-related practices, but the efforts of the two agencies have not been coordinated. 
The infection control standards required by CMS and hospital-accrediting organizations--the Joint Commission and the Healthcare Facilities Accreditation Program of the American Osteopathic Association (AOA)--describe the fundamental components of a hospital's infection control program. These components include the active prevention, control, and investigation of infections. The standards are far fewer in number than the recommended practices in CDC's guidelines and generally do not require that hospitals implement all recommended practices in CDC's infection control and prevention guidelines. CMS, the Joint Commission, and AOA assess compliance with their infection control standards through direct observation of hospital activities and review of hospital policy documents during on-site surveys. Multiple HHS programs collect data on HAIs, but limitations in the scope of information they collect and a lack of integration across the databases maintained by these separate programs constrain the utility of the data. Three agencies within HHS currently collect HAI-related data for a variety of purposes in databases maintained by four separate programs: CDC's National Healthcare Safety Network program, CMS's Medicare Patient Safety Monitoring System, CMS's Annual Payment Update program, and AHRQ's Healthcare Cost and Utilization Project. Each of the four databases presents only a partial view of the extent of the HAI problem because each focuses its data collection on selected types of HAIs and collects data from a different subset of hospital patients across the country. GAO did not find that the agencies were taking steps to integrate data across the four databases by creating linkages among them, such as common patient identifiers. Creating linkages across the HAI-related databases could enhance the availability of information to better understand where and how HAIs occur. 
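To illustrate the kind of linkage the report describes, the sketch below joins two small record sets on a shared patient identifier. The field names, sample values, and the `link_records` helper are invented for illustration; they do not reflect the actual structure of any HHS database.

```python
# Hypothetical sketch: linking infection-outcome records with
# process-of-care records via a common patient identifier.

def link_records(infection_data, process_data, key="patient_id"):
    """Join two lists of record dicts on a shared identifier field."""
    by_key = {rec[key]: rec for rec in process_data}
    linked = []
    for rec in infection_data:
        match = by_key.get(rec[key])
        if match is not None:
            # Merge the two records so outcome and process data sit together.
            linked.append({**rec, **match})
    return linked

# Invented sample records standing in for two unlinked collections.
infections = [
    {"patient_id": "A1", "surgical_infection": True},
    {"patient_id": "B2", "surgical_infection": False},
]
processes = [
    {"patient_id": "A1", "antibiotic_on_time": False},
    {"patient_id": "C3", "antibiotic_on_time": True},
]

linked = link_records(infections, processes)
# Only patient A1 appears in both collections, so only that patient can
# be studied for the relationship between process of care and outcome.
print(len(linked))  # 1
```

Without a common identifier across databases, the join above is impossible, which is the lost analytic opportunity the report points to.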
Although CDC officials have produced national estimates of HAIs, those estimates derive from assumptions and extrapolations that raise questions about their reliability.
RHS administers a direct SFH loan program to help low-income individuals or households purchase homes in rural areas. As of September 30, 2000—the most recent fiscal year end for which agency-certified reporting exists for Agriculture—RHS reported having about $17 billion outstanding in direct SFH loans. As shown in table 1, RHS reported $383 million of direct SFH loans over 180 days delinquent, including debts classified as Currently Not Collectible (CNC) on its Treasury Report on Receivables Due From the Public (TROR) as of September 30, 2000. RHS excluded $182 million of this delinquent debt from referral to FMS for TOP and cross-servicing. In addition, RHS had not referred any debts to FMS for cross-servicing as of September 30, 2000, based, in part, on an exemption proposal which RHS stated, in its TROR as of the same date, had been approved by Treasury. However, Treasury officials told us that Treasury never approved a proposal to exempt RHS loans from cross-servicing. Accordingly, opportunities to collect these loans through Treasury’s cross-servicing program are being missed. DCIA requires federal agencies to refer all legally enforceable and eligible non-tax debts that are more than 180 days delinquent to Treasury for collection through administrative offset and cross-servicing. We found that RHS did not maintain supporting documentation for direct SFH loans it excluded from such referral as of September 30, 2000. Consequently, we were not able to determine whether the agency’s exclusion of $182 million of delinquent debt was based on relevant legislative and regulatory criteria. FMS officials told us that it is their expectation that agencies would retain the applicable data needed to justify not referring delinquent debt for collection action. 
Further, the Comptroller General’s Standards for Internal Control in the Federal Government states that all transactions and other significant events need to be clearly documented and that the documentation should be readily available for examination. According to RHS officials, since implementing a new automated centralized loan servicing system in fiscal year 1997, RHS has been unable to readily identify direct SFH loans that are eligible for referral to FMS for cross-servicing. Essentially, the system does not contain sufficient data to differentiate loans eligible for cross-servicing from those that are not. Although RHS plans system enhancements for the third quarter of fiscal year 2002, which the agency believes will facilitate loan identification for cross-servicing, RHS officials advised us that relatively few referrals to FMS will likely be made in the near term. While we were performing our fieldwork, RHS began an interim process to manually identify such loans eligible for cross-servicing. According to RHS’ debt referral plan, because the interim process is tedious and labor intensive, only about 100 to 200 loans were to be referred per month to Treasury, beginning in May 2001. RHS officials said that all direct SFH loans eligible for TOP will have to be reviewed for cross-servicing eligibility. RHS reported 23,032 direct SFH loans eligible for TOP as of September 30, 2000. The agency intends to refer about 30 percent of eligible direct SFH loans for cross-servicing in fiscal year 2002. According to RHS officials, nothing had been done prior to our review to manually identify delinquent direct SFH loans for referral to FMS for cross-servicing because the agency had requested a Treasury exemption from cross-servicing for direct loans made under the SFH loan program. 
RHS had requested that it be allowed to continue to internally service the loans for up to 1 year after liquidation of the collateral, which, in some cases, could be years after the loans became delinquent. Treasury officials told us that Treasury had not approved the request, either formally or informally, and stated that Treasury discouraged RHS from making the request, which was not submitted to Treasury until November 2000. Treasury formally denied RHS’ exemption request for the direct SFH loan program on May 14, 2001. The declination was based, in part, on the fact that similar loans were being referred for cross-servicing by other agencies and RHS had not identified any new or unique collection tools applicable to direct SFH loans. When a debtor becomes 91 days delinquent on an installment payment for a direct SFH loan, RHS notifies the debtor via certified mail that the entire debt balance is accelerated and is due and payable. As shown in table 1, RHS reported $201 million of direct SFH loans as eligible for TOP as of September 30, 2000. However, this amount may have been understated by about $348 million because it included only the delinquent installment portion of the loans. According to FMS, the entire accelerated balance of the debt should be reported as delinquent and, absent any exclusions allowed by DCIA or Treasury, should be reported as eligible for referral to FMS for collection as well. FSA provides, among other things, temporary credit to farmers and ranchers who are high-risk borrowers and are unable to obtain commercial credit at reasonable rates and terms. FSA reported having about $8.7 billion in direct farm loans as of September 30, 2000, and as shown in table 2, the agency reported about $1.7 billion of direct farm loans over 180 days delinquent, including debts in CNC status as of September 30, 2000. FSA excluded substantial amounts of this debt from referral to FMS for TOP and cross-servicing. 
In addition, FSA officials told us that only $38 million was referred to FMS for cross-servicing as of September 30, 2000, because FSA suspended all cross-servicing referrals in April 2000 pending development and implementation of new cross-servicing guidelines for the agency. FSA did not have a process or sufficient controls in place to adequately identify direct farm loans eligible for referral to FMS. Certain types of debts were automatically excluded from referral without any review for eligibility. In other cases, FSA’s Program Loan Accounting System did not contain information from the detailed loan files located at the FSA field offices that would be key to determining eligibility for referral. In addition, FSA did not have any monitoring or review procedures in place to help ensure that FSA personnel routinely updated the detailed debt files. Consequently, amounts of direct farm loans FSA reported to Treasury as eligible for referral were not accurate. Excluded amounts for bankruptcy, forbearance/appeals, foreclosure, and Department of Justice (DOJ)/litigation totaled about $694 million, or about 95 percent of the $732 million that was excluded from referral to FMS for TOP and cross-servicing. Of this amount, $295 million was for DOJ/litigation and consisted of judgment debts. According to FSA officials, deficiency judgments—court judgments requiring payment of a sum certain to the United States—are eligible for TOP and should be referred to FMS. However, FSA’s Finance Office in St. Louis automatically excluded all judgment debts for direct farm loans from referral to FMS because automated system limitations precluded staff from identifying deficiency judgments. Our inquiries caused FSA officials to initiate a special project in May 2001 to identify all deficiency judgment debts for direct farm loans so that such debts could be referred to FMS. 
Determinations as to whether direct farm loans are in bankruptcy, forbearance/appeals, or foreclosure and, therefore, excluded from referral to FMS, are made by FSA personnel in numerous FSA field offices across the country. Personnel in the FSA field offices we visited did not routinely update the eligibility status of farm loans in FSA’s Program Loan Accounting System, as was evident from the selected excluded loans we reviewed. Using statistical sampling, we selected and reviewed supporting documents to determine whether farm loans that selected FSA field offices in California, Louisiana, Oklahoma, and Texas had excluded from referral to FMS were consistent with established criteria dealing with bankruptcy, forbearance/appeals, foreclosure, and DOJ/litigation. Based on the results of our sample, we estimate that about 575, or approximately one-half, of the excluded loans in the four selected states had been inappropriately placed in exclusion categories by FSA as of September 30, 2000. Because of these numerous errors, we did not test other reported exclusions from referral to FMS for cross-servicing, such as loans being internally offset. One of the most frequently identified inappropriate exclusions pertained to amounts discharged in bankruptcy, which should not have been included in delinquent debt. Fifty-two bankruptcies that we reviewed as part of our sample had been discharged in bankruptcy court prior to September 30, 2000. In fact, many had been discharged several years prior to that date. For example, one loan with a balance due of about $325,000 was reported as a delinquent debt over 180 days and excluded from referral requirements because of bankruptcy. However, a review of the loan file at the FSA field office showed that a bankruptcy court discharged the debt in 1986 and, therefore, the debt should not have been included in either the delinquent debt or exclusion amounts reported to Treasury as of September 30, 2000. 
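A sample-based projection like the "about 575, or approximately one-half" estimate works by applying the error rate observed in the sample to the full population. The sketch below is illustrative only: the sample size and error count are invented, and only the implied population of roughly 1,150 excluded loans follows from the figures in the text; a real GAO estimate would also carry a sampling error and confidence interval, which this sketch omits.

```python
# Illustrative sketch of projecting a sampled error rate onto a population.
# All inputs are hypothetical except the ~1,150 population implied by
# "about 575, or approximately one-half" in the text.

def estimate_errors(population_size, sample_size, sample_errors):
    """Project a sample error rate onto the full population."""
    error_rate = sample_errors / sample_size
    return error_rate * population_size

# If half of a hypothetical 100-loan sample was misclassified, the
# projected number of inappropriately excluded loans is half the population.
est = estimate_errors(population_size=1150, sample_size=100, sample_errors=50)
print(round(est))  # 575
```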
According to Farm Loan Managers in some of the FSA field offices we visited, they have not written off many direct farm loans discharged in bankruptcy because making new loans has been a higher-priority use of their resources. In addition, FSA did not provide sufficient oversight to help ensure that field office personnel adequately tracked the status of discharged bankruptcies and updated the loan files and debt records in the Program Loan Accounting System. Also, it is important to note that delays in promptly writing off discharged bankruptcies not only distort the TROR for debt management and credit policy purposes, but also distort key financial indicators such as receivables, total delinquencies, and loan loss data. This makes the information misleading for budget and management decisions and oversight. Aside from erroneously inflating reported receivables and delinquent loans, failure to process loan write-offs delays reporting closed-out debt amounts to the Internal Revenue Service as income to the debtor. As previously mentioned, only $38 million of direct farm loans were reported by FSA as having been referred for cross-servicing because the agency suspended such referrals in April 2000 pending development and implementation of a new policy to refer to FMS for cross-servicing only debts where the 6-year statute of limitations has not expired. FSA issued revised guidelines in July 2001 to incorporate the 6-year statute of limitations, and the agency is now reviewing loans at over 1,000 FSA field offices to determine eligibility for referral to Treasury under the new policy. According to an Agriculture official, the first referral to FMS under this new policy was made in September 2001. According to FSA officials, FSA decided to adopt the new policy because it believed that FMS informed them that accounts for which the 6-year statute of limitations had expired should not be referred for cross-servicing. 
However, FMS officials told us that FMS had not provided such guidance to FSA. FMS officials emphasized that FMS will accept debts that are older than 6 years because, although the debts cannot be referred to DOJ for litigation, collection can still be attempted through other debt collection tools such as referral to private collection agencies. Even though FSA reported having referred $934 million of direct farm loans to FMS for TOP as of September 30, 2000, the agency has lost and continues to lose opportunities for maximizing collections on this debt because it does not refer co-debtors. According to FSA officials, the vast majority of direct farm loans have co-debtors, who are also liable for loan repayment. However, FSA’s automated loan system cannot record more than one debtor because the system modifications necessary to accept Taxpayer Identification Numbers (TINs) for multiple debtors have not been made. According to an FSA official, the need to have co-debtor information in the system to facilitate debt collection was initially determined in 1986. However, we were told that to date, higher-priority systems projects have precluded FSA from completing the necessary systems enhancements to allow the system to accept more than one TIN per debt. In other words, although FSA recognized years ago the need to take action, the agency has not considered this to be a high enough priority. According to FSA officials, FSA has now incorporated this requirement in the new Farm Loan Program Information Delivery System scheduled for implementation in fiscal year 2005. According to data provided by FSA officials, about $400 million of new delinquent debt became eligible for TOP during calendar year 2000. Although FSA officials stated that the debts became eligible relatively evenly throughout the year, debts eligible for TOP are referred by FSA only once annually, during December. 
Consequently, a large portion of the $400 million of debt likely was not promptly referred when it became eligible. As we have previously testified, industry statistics have shown that the likelihood of recovering amounts owed decreases dramatically with the age of delinquency of the debt. Thus, the old adage that “time is money” is very relevant for referrals of debts to FMS for collection action. FSA officials told us that the agency agrees that quarterly referrals could enhance possible collection of delinquent debts by getting them to Treasury earlier and has plans to start a quarterly referral process in fiscal year 2003. Since DCIA was enacted in April 1996, RHS and FSA have also missed opportunities to potentially collect millions of dollars related to losses on guaranteed loans. As of September 30, 2000, neither RHS nor FSA treated such losses resulting from the SFH program and the Farm Loan Program, respectively, as non-tax federal debts. Consequently, neither agency had policies and procedures in place to refer such losses to Treasury for collection through FMS’ TOP or cross-servicing programs. According to RHS and FSA officials and reports provided by the agencies, guaranteed SFH loans and farm loans, as well as related losses, have been significant since the inception of the guaranteed programs. The RHS guaranteed SFH program has been expanding in recent years. The outstanding principal due on the guaranteed SFH portfolio grew from about $3 billion in fiscal year 1996 to over $10 billion as of September 30, 2000. Through September 30, 2000, RHS had paid out losses of about $132 million on the guaranteed SFH program since fiscal year 1996. The outstanding principal due on guaranteed farm loans was about $8 billion as of September 30, 2000. Through September 30, 2000, FSA had paid out about $293 million in losses since fiscal year 1996. 
In January 1999 and June 2000, Agriculture’s Office of Inspector General (OIG) first reported that RHS’ and FSA’s guaranteed losses, respectively, were not being referred to Treasury for collection. The OIG recommended that both agencies recognize the losses as federal debt and begin referring such debt to FMS for collection. Although RHS has recently initiated action to begin developing policies for referring losses on guaranteed loans to FMS for collection action in the future, its efforts to make necessary regulatory and policy changes have not been fully completed, resulting in continuing missed opportunities to potentially collect losses on guaranteed loans. FSA, on the other hand, has recently initiated action to begin implementing new policies for referring losses on all new guaranteed loans to FMS for collection action. Because these guaranteed loan programs are significant to RHS and FSA, the agencies’ development and implementation of policies and procedures to promptly refer eligible amounts to Treasury for collection action are critical. DCIA authorizes both federal agencies that administer programs that give rise to delinquent non-tax debts and federal agencies that pursue recovery of such debts, such as FMS, to administratively garnish up to 15 percent of a debtor’s disposable pay until the debt is fully recovered. Agriculture and the other eight CFO Act agencies we surveyed had not yet used AWG as authorized by DCIA to collect delinquent non-tax debt as of the date of completion of our fieldwork, over 5 years after DCIA went into effect. Eight of these nine agencies, including Agriculture, have expressed the intent to implement AWG to varying degrees over the next 5 years. Given the possible added collection leverage afforded through the availability and use of AWG, timely implementation would seem prudent. 
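The garnishment arithmetic DCIA authorizes, withholding up to 15 percent of a debtor's disposable pay each pay period until the debt is fully recovered, can be sketched as follows. The dollar amounts in the example are invented for illustration.

```python
import math

# Sketch of administrative wage garnishment (AWG) arithmetic: up to
# 15 percent of disposable pay may be withheld per pay period until
# the debt is recovered. Amounts below are hypothetical.

def pay_periods_to_recover(debt, disposable_pay_per_period, rate=0.15):
    """Return the number of pay periods needed to recover the debt."""
    withheld_per_period = disposable_pay_per_period * rate
    return math.ceil(debt / withheld_per_period)

# A $3,000 debt against $1,000 of disposable pay per period:
# $150 is withheld each period, so recovery takes 20 periods.
print(pay_periods_to_recover(3000, 1000))  # 20
```

The steady, bounded withholding is what makes AWG effective against employed debtors, and, as the testimony notes, the mere prospect of it often motivates repayment before garnishment begins.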
As of September 30, 2000, the eight agencies we surveyed that intend to implement AWG reported holding a total of about $23 billion in consumer debt, which typically consists of debts by individuals, many of whom are employed. This is not to imply that AWG could be used to collect all such consumer debt because circumstances such as bankruptcy or appeals could limit the application of this debt collection tool. Agencies, including Agriculture, identified various reasons for the delay in implementing AWG, including the need to focus priorities on the mandatory provisions of DCIA and develop the required regulations or administrative hearing procedures to implement AWG. This is disappointing in light of the large number of debtors in the country’s labor force and the fact that debt collection experts testified before this Subcommittee in 1995, prior to the enactment of DCIA, that AWG can be an extremely powerful debt collection tool, as the mere threat of AWG is often enough to motivate debtor repayment. In responding to our survey, Agriculture said it would rely exclusively on FMS to implement AWG as part of cross-servicing, including identifying the debtors’ employers and sending notices and garnishment orders. At the time of the completion of our fieldwork, Agriculture had not established specific dates for implementing AWG and was among the five surveyed agencies intending to implement AWG that did not have a written implementation plan. Agriculture subsequently stated that it planned to implement AWG during fiscal year 2002. Given the extent of agency or contractor effort needed to carefully administer such processes, we believe agencies will need fairly detailed implementation plans. These plans should include a clear description of and strategy for how the agency will actually perform AWG and when AWG will be fully implemented. The plans should cover the types of debts subject to AWG and the policies and procedures for administering AWG. 
Also, agencies should identify the processes they will use to conduct hearings for debtor appeals. Consequently, it is not presently clear when Agriculture will be able to fully incorporate AWG into its debt collection processes. FMS has been working with its private collection agency contractors to incorporate AWG into its cross-servicing program. Although FMS’ incorporation of AWG into the cross-servicing program would undoubtedly improve collection success and make the FMS collection program more comprehensive, certain factors could limit its use. An important consideration is that much of the delinquent debt reported by agencies as eligible for cross-servicing is not currently being promptly referred to FMS. For example, the four agencies we surveyed that plan to rely exclusively on FMS for AWG implementation, including Agriculture, together reported having referred only $288 million of about $690 million of all types of debt that were reported as eligible for cross-servicing as of September 30, 2000.
The testimony discusses debt collection efforts by two major components at the Department of Agriculture--the Rural Housing Service (RHS) and the Farm Service Agency (FSA). The Debt Collection Improvement Act of 1996 requires agencies to (1) notify the Department of the Treasury of debts more than 180 days delinquent for the purposes of administrative offset against any amounts that might otherwise be due and (2) refer such debts to Treasury for centralized collection. To facilitate collection, agencies throughout government can administratively garnish the wages of delinquent debtors. GAO found that agencies are excluding most reported debt more than 180 days delinquent from referral requirements. To more fully realize the benefits of debt collection, agencies need to improve their implementation of the act. The Financial Management Service is making steady progress in collecting delinquent federal non-tax debt through the Treasury Offset Program--a mandatory governmentwide debt collection program that compares delinquent debtor data to federal payment data. Agriculture and other agencies still have not used administrative wage garnishment to collect delinquent non-tax debt even though experts have testified that it can be an extremely powerful tool for debt collection. If the government is going to make significant progress in collecting the billions of dollars it is owed in delinquent non-tax debt, federal agencies have to make implementation of the act's debt collection provisions a top priority.
FAA guidance prescribes an annual inspection to cover all aspects of a repair station’s operations, including the currency of technical data, facilities, calibration of special tooling and equipment, and inspection procedures, as well as to ensure that the repair station is performing only the work that it has approval to do. Most FAA offices assign an individual inspector to conduct routine surveillance at a repair station, even one that is large and complex, rather than using a team of inspectors. Most inspectors are responsible for oversight at more than one repair station. At the FAA offices we visited, we examined the workloads of 98 inspectors and found that, on average, they were responsible for 12 repair stations each, although their individual workloads varied from 1 to 42 facilities of varying size and complexity. The inspectors assigned responsibility for repair stations are also assigned oversight of other aviation activities such as air taxis, agricultural operators, helicopter operators, and training schools for pilots and mechanics. FAA uses teams for more comprehensive reviews of a few repair stations through its National Aviation Safety Inspection Program or its Regional Aviation Safety Inspection Program. These special, in-depth inspections are conducted at only a small portion of repair stations. In the past 4 years, an average of only 23 of these inspections have been conducted annually at repair stations (less than 1 percent of the repair stations performing work for air carriers). From fiscal year 1993 through 1996, we found 16 repair stations that were inspected by a single inspector and were also inspected by a special team of inspectors during the same year. The teams found a total of 347 deficiencies, only 15 of which had been identified by individual inspectors. Many of the deficiencies the teams identified were systemic and apparently long-standing, such as inadequate training programs or poor quality control manuals. 
Such deficiencies were likely to have been present when the repair stations were inspected earlier by individual inspectors. We believe that there are several reasons why team inspections identify a higher proportion of the deficiencies that may exist in the operation of large repair stations. First, many FAA inspectors responsible for conducting individual inspections said that, because they have many competing demands on their time, their inspections of repair stations may not be as thorough as they would like. Second, team inspections make use of checklists or other job aids to ensure that all points are covered. Although FAA’s guidance requires inspectors to address all aspects of repair stations’ operations during routine surveillance, it does not prescribe any checklist or other means for assuring that all items are covered. The lack of a standardized approach for routine surveillance increases the possibility that items will not be covered. Finally, inspectors believe team inspections help ensure that their judgments are independent because most team members have no ongoing relationship with the repair station. By contrast, individual-inspector reviews are conducted by personnel who have a continuing regulatory responsibility for the facilities and, therefore, a continuing working relationship with the repair station operator. A substantial number of the inspectors we surveyed supported the use of team inspections. We found that 71 percent of the inspectors responding favored team inspections using district office staff as a means to improve compliance, and 50 percent favored an increase in National or Regional Aviation Safety Inspection Program inspections staffed from other FAA offices. We also found that some district offices had already begun using locally based teams to perform routine surveillance of large and complex repair stations. 
Thus, in our October 1997 report, we recommended that FAA expand the use of locally based teams for repair station inspections, particularly for those repair stations that are large or complex. FAA’s guidance is limited in specifying what documents pertaining to inspections and follow-up need to be maintained. We examined records of 172 instances in which FAA sent deficiency letters to domestic repair stations to determine if follow-up documentation was present. However, responses from the repair stations were not on file in about one-fourth of these instances, and FAA’s assessments of the adequacy of the corrective actions taken by the repair stations were not on file in about three-fourths of the instances. We also examined inspection results reported in FAA’s Program Tracking and Reporting Subsystem, a computerized reporting system, and found it to be less complete than individual files on repair stations. Without better documentation, FAA cannot readily determine how quickly and thoroughly repair stations are complying with regulations. Just as important, FAA cannot identify trends on repair station performance in order to make informed decisions on how best to apply its inspection resources to those areas posing the greatest risk to aviation safety. FAA is spending more than $30 million to develop a system called the Safety Performance Analysis System, whose intent is to help the agency identify safety-related risks and establish priorities for its inspections. It relies in part on the current reporting subsystem, which contains the results of safety inspections. However, this system will not be fully implemented until late 1999, and it will be of limited use if the documentation on which it is based is inaccurate, incomplete, or outdated. 
We also found that FAA’s documentation of inspections and follow-up was better in its files for foreign repair stations than for domestic repair stations, perhaps in part because under agency regulations, foreign repair stations must renew their certification every 2 years. By comparison, domestic repair stations retain their certification indefinitely unless they surrender it or FAA suspends or revokes it. Foreign repair stations appear to be correcting their deficiencies quickly so that they qualify for certificate renewal. The 34 FAA inspectors that we interviewed who had conducted inspections of both foreign and domestic repair stations were unanimous in concluding that compliance occurred more quickly at foreign facilities. They attributed the quicker compliance to the renewal requirement and said that it allowed them to spend less time on follow-up, freeing them for other surveillance work. However, we were unable to confirm whether foreign repair stations achieve compliance more quickly than domestic repair stations do, because of the poor documentation in domestic repair station files. To address these problems, we recommended that FAA specify what documentation should be maintained in its files to record complete inspection results and follow-up actions, and that FAA monitor the implementation of its strategy for improving the quality of data in its new management information system. FAA concurred with these recommendations and has reported actions underway to implement them. FAA has several efforts under way that may hold potential for improving its inspections of repair stations. Two efforts involve initiatives to change the regulations covering repair station operations and the certification requirements for mechanics and repairmen. FAA acknowledges that the existing regulations do not reflect many of the technological changes that have occurred in the aviation industry in recent years. 
The FAA inspectors we surveyed strongly supported a comprehensive update of repair station regulations as a way to improve repair stations’ compliance. Of the inspectors we surveyed, 88 percent favored updating the regulations. This update, begun in 1989, has been repeatedly delayed and still remains in process. The most recent target—to have draft regulations for comment published in the Federal Register during the summer of 1997—was not met. Similarly, the update of the certification requirements for maintenance personnel has been suspended since 1994. Because of these long-standing delays, completion of both updates may require additional attention on management’s part to help keep both efforts on track. Our October 1997 report recommended that FAA expedite efforts to update regulations pertaining to repair stations and establish and meet schedules for completing the updates. A third effort involves increasing and training FAA’s inspection resources. Since fiscal year 1995, FAA has been in the process of adding more than 700 inspectors to its workforce who will, in part, oversee repair stations. Survey responses from current inspectors indicated that the success of this effort will depend partly on the qualifications of the new inspectors and on the training available to all of those in the inspector ranks. Specifically, 82 percent of the inspectors we surveyed said that they strongly or generally favored providing inspectors with maintenance and avionics training, including hands-on training as a way to improve repair stations’ compliance with regulations. Another effort is FAA’s new Air Transportation Oversight System. This system is intended to respond to problems in FAA’s oversight that have been pointed out in recent years by GAO, the Department of Transportation’s Inspector General, FAA’s 90 Day Safety Review, and others. The goal of this new system is to target surveillance to deal with risks identified through more systematic inspections. 
Phase I of the system is expected to be implemented in the fall. When fully implemented, this system will offer promise of significant improvements in the way FAA conducts and tracks all of its inspections, including those performed at repair stations. However, in its initial phase, the system will affect the oversight of only the 10 largest air carriers and may not be fully applied to repair stations for several years. We will continue to monitor FAA’s progress in improving the effectiveness of its oversight in this important area. Mr. Chairman, this concludes our statement. We would be pleased to respond to questions at this time.
GAO discussed the Federal Aviation Administration's (FAA) oversight of repair stations that maintain and repair aircraft and aircraft components, focusing on: (1) the practice of using individual inspectors in repair station inspections; (2) the condition of inspection documentation; and (3) current FAA actions to improve the inspection process. GAO noted that: (1) FAA was meeting its goal of inspecting every repair station at least once a year and 84 percent of the inspectors believed that the overall compliance of repair stations was good or excellent; (2) more than half of the inspectors said that there were areas of compliance that repair stations could improve, such as ensuring that their personnel receive training from all airlines for which they perform work and have current maintenance manuals; (3) while FAA typically relies on individual inspectors, the use of teams of inspectors, particularly at large or complex repair stations, may be more effective at identifying problems and more likely to uncover systemic and long-standing deficiencies; (4) because of insufficient documentation, GAO was unable to determine how well FAA followed up to ensure that the deficiencies found during the inspections were corrected; (5) GAO was not able to assess how completely or quickly repair stations were bringing themselves into compliance; (6) because FAA does not tell its inspectors what documentation to keep, the agency's ability to identify and react to trends is hampered; (7) FAA is spending more than $30 million to develop a reporting system that, among other things, is designed to enable the agency to apply its inspection resources to address those areas that pose the greatest risk to aviation safety; (8) as GAO has reported in the past, this goal will not be achieved without significant improvements in the completeness of inspection records; (9) since the May 1996 crash of a ValuJet DC-9 in the Florida Everglades, FAA has announced new initiatives to upgrade 
the oversight of repair stations; (10) these initiatives were directed at clarifying and augmenting air carriers' oversight of repair stations, not at ways in which FAA's own inspection resources could be better utilized; (11) FAA has several other efforts under way that would have a more direct bearing on its own inspection activities at repair stations; (12) one effort would revise the regulations governing operations at repair stations, and another would revise the regulations governing the qualifications of repair station personnel; (13) the revision of the regulations began in 1989 and has been repeatedly delayed; (14) the third effort is the addition of more FAA inspectors, which should mean that more resources can be devoted to inspecting repair stations; (15) FAA has recently announced a major overhaul of its entire inspection process; and (16) it is designed to systematize the process and ensure consistency in inspections and in reporting the results of these inspections so as to allow more efficient targeting of inspection resources.
The Medicare program utilizes a variety of payment methods to reimburse providers for the services it covers. Hospital insurance, or Part A, covers inpatient hospital, skilled nursing facility, and hospice care, and certain home health services. Supplemental medical insurance, or Part B, covers physician and outpatient hospital services, diagnostic tests, and other medical services and supplies. Depending on the service, HCFA pays Part A providers based on their costs, or on a prospective payment basis designed to cover the cost of providing a bundle of services related to a particular medical condition. In contrast, Part B providers are typically paid for each individual service, based on a fee schedule. To ensure that only services the statute specifies are covered, Medicare has extensive policies and rules about what constitutes covered services. It is not surprising, therefore, that providers are sometimes overpaid for their services. Medicare claims are paid by a network of private health insurance companies hired by HCFA, such as Blue Cross and Blue Shield plans, Mutual of Omaha, and CIGNA. Contractors that process Part A claims are referred to as fiscal intermediaries, while those that process Part B claims are called carriers. In fiscal year 1999, fiscal intermediaries processed about 133 million claims representing $124 billion in payments, while carriers processed about 721 million claims and paid benefits of about $43 billion. Fiscal intermediaries and carriers also perform activities related to safeguarding Medicare payments. These include using prepayment computer edits to prevent potential overpayments, conducting postpayment medical reviews, determining whether other insurers should pay before Medicare, and auditing reports of providers’ costs to determine if any costs are overstated or not allowed by Medicare. HCFA uses specialized contractors to supplement these program safeguard efforts. 
Claims administration contractors have principal responsibility for claims processing and administration. Specifically, they contract with HCFA to (1) receive claims; (2) judge their appropriateness; (3) pay appropriate claims promptly; (4) identify potentially incorrect or fraudulent claims or fraudulent providers, and withhold payment if justified; and (5) identify and recover overpayments. The contractors are expected to manage Medicare’s funds in a fiscally responsible way, effectively address provider and beneficiary inquiries, and establish a process for handling provider and beneficiary appeals of claims decisions. Until the Health Insurance Portability and Accountability Act of 1996 (HIPAA) established the Medicare Integrity Program (MIP), only claims administration contractors performed program safeguard activities. Under HIPAA, HCFA has the authority to enter into contracts with a variety of firms, not just insurance companies, to perform safeguard-related functions or undertake specific program initiatives to promote Medicare’s integrity. For example, one such initiative is aimed at the early detection of fraudulent and abusive billing in three Midwestern states. Under MIP, HCFA has contracted with 12 firms known as program safeguard contractors (PSCs) that compete among themselves to perform program safeguard work detailed in individual task orders. These MIP contractors can subcontract with other companies that may have special expertise to help them perform particular task orders. As of June 2000, HCFA had issued a total of 10 task orders—each for a defined set of program safeguard services provided over a specified time period. In addition to providing HCFA with new contracting authority, HIPAA provided HCFA with predictable increases in program safeguard funding. Before HIPAA, contractor program safeguard activities were funded from contractors’ general program management budgets, which also covered contractors’ costs for processing claims. 
This meant that program safeguard activities had to compete for funding with other contractor activities that might have higher priority. For example, funding system improvements to ensure that claims were paid promptly might require a shift in funding allocations, as would emergencies or new HCFA initiatives. By contrast, HIPAA provides HCFA with assured funding levels for program safeguard activities. Program safeguard expenditures totaled $438 million in the first year of MIP, with increased funding in each subsequent year through fiscal year 2003, when the program safeguard appropriation is expected to total $720 million. Most of HCFA’s accounts receivable are the result of overpayments that have yet to be recovered from providers. Overpayments to providers result from a variety of inadvertent errors or intentional misrepresentations. The following are the four main categories of Medicare overpayments. Coverage, medical necessity, or documentation issues. Medicare should pay claims only for services that are medically necessary, meet Medicare coverage requirements, and are properly documented to indicate that the service took place as reflected in the claim. Claims can appear to be correct even though they do not meet these conditions. For example, a home health agency may receive reimbursement for services on behalf of a beneficiary who did not meet the program requirements of being homebound. Provider billing errors. Providers make manual or automated billing errors, either of which can lead to overpayments. For example, if a physician’s billing clerk enters the wrong procedure code on a claim, the physician may receive a larger reimbursement for a more comprehensive office visit than the visit that actually occurred. Cost reporting errors. For services paid on the basis of provider costs, providers receive interim payments based on their projected allowable costs. If costs are later disallowed, providers will have to return these overpayments. 
Medicare secondary payer (MSP) debt. These debts occur when Medicare pays for a service that subsequently is determined to be the responsibility of another payer. These include certain cases in which beneficiaries (1) have other health insurance coverage provided by their employer or their spouse’s employer; (2) have occupational injuries or illnesses that would be covered by workers’ compensation; (3) have injuries that are covered by liability insurance or a settlement arising from an accident; or (4) are receiving care for end-stage renal disease during the first 30 months of their treatment and have other health insurance coverage. Recovery auditing has been used in various industries, including health care, to identify and collect overpayments for about the last 30 years. Recovery audit firms that specialize in health issues contract with private insurance companies, state Medicaid agencies, managed care plans, and employee group health plans. In some cases, clients rely exclusively on recovery audit firms to identify and collect overpayments. In other cases, they supplement their own internal capabilities with the services of recovery audit firms. These firms generally focus on overpayment identification but will also collect identified overpayments. Typically, they are paid a contingency fee based on a percentage of the overpayments that they or their clients collect. Fees vary depending on such factors as the type of overpayment involved and the degree of difficulty associated with identifying and collecting it. Recovery audit firms employ a variety of techniques to identify overpayments. For example, one firm may use proprietary software to analyze a large database of claims to identify potential overpayments while another may have specially trained staff review medical records for potentially inappropriately billed services. 
Recovery audit firms may also collect overpayments through such means as issuing demand letters to providers or negotiating an amount to be returned to the client. In the last several years, HCFA has increased its emphasis on prepayment activities that help contractors avoid payment errors. Correct payment involves several prepayment processing steps to determine whether the claim is for covered, medically necessary, and reasonable services; is provided to an eligible beneficiary; and contains a valid provider number. All claims go through a variety of computerized prepayment edits designed to ensure they are correct on their face, for example, to ensure that they are not duplicate payments. In addition, many claims go through prepayment medical reviews that are either automated or performed by contractor staff. However, the tremendous volume of Medicare claims processed by each contractor makes it impractical to manually review more than a small fraction of claims prior to payment. For example, in fiscal year 1998, only about 1 in 8 claims were medically reviewed prior to payment and only about 1 in 16 underwent any type of manual prepayment review. As a result, adequate postpayment review is critical to ensuring that overpayments are identified in a timely way. In conducting postpayment reviews to identify potential overpayments, contractors primarily focus on three program safeguard activities: postpayment medical review, reviews and audits of cost-based reimbursement payments, and MSP reviews. These activities are described below. Postpayment medical review. Overpayments due to claims that are medically unnecessary, insufficiently documented, or for noncovered services and that were not identified through prepayment edits must generally be discovered through contractor postpayment medical review. By reviewing paid claims data, contractors identify patterns that could indicate potential abuse. 
For example, contractor staff may review billing patterns for certain procedure codes and find unusual increases in utilization over time or significant utilization differences among providers. Once a potential problem is identified, contractors typically sample a provider’s claims and request documentation from the provider for selected claims to determine if they were paid properly or if overpayments occurred. Peer review organizations (PROs) are independent physician organizations located in each state that work under contract to HCFA. They conduct postpayment medical review for Medicare claims involving inpatient services. PROs are primarily responsible for ensuring that care provided to Medicare beneficiaries is medically necessary and reasonable, is provided in an appropriate setting, and meets professionally accepted standards of quality. If, in the course of their reviews, the PROs discover overpayments, they are supposed to refer the cases to the appropriate fiscal intermediary for payment adjustment. Interim rate reviews and cost report audits. These activities are performed by fiscal intermediaries to determine whether the payments made to providers paid on the basis of their costs accurately reflect their allowable costs. Interim rate reviews allow contractors to adjust payment rates during the fiscal year after comparing a provider’s interim payment rates with its previous cost information, its Medicare payments, and its audit history. These reviews may result in revisions to providers’ interim payment rates for the remainder of the year if it is found that the provider was being overpaid or underpaid. Once a provider files its year-end cost report, it is reviewed and may be audited to determine whether there are overpayments or underpayments relating to the costs claimed by the provider. MSP reviews. 
HCFA officials estimate that about 8 percent of beneficiaries have medical claims that are potentially the responsibility of another health insurer, liability insurer, or workers’ compensation program. MSP reviews seek to identify such primary sources of payment and recoup any primary payments made by the Medicare program. HCFA and its contractors use a variety of techniques to find MSP cases. Contractors send a questionnaire to each new beneficiary 3 months prior to their entitlement to Medicare benefits to determine if the beneficiary or spouse is employed and has group health insurance coverage. Medicare beneficiary information is also matched periodically with Social Security Administration (SSA) and Internal Revenue Service (IRS) data on employment status and earnings. If beneficiaries are identified as employed and earning less than $10,000 per year, contractors may elect not to send questionnaires to employers asking about the beneficiary’s employment and health insurance coverage. In addition, some private insurance companies have agreed (voluntarily or as part of a legal settlement with HCFA) to share information on their policyholders. Finally, if a claim is submitted with certain diagnosis codes indicating traumatic or work-related injury, contractors automatically send the beneficiary a letter requesting information about a potential lawsuit, automobile liability insurance, or workers’ compensation coverage. In a recent report, the HHS OIG took steps to identify beneficiaries who had other primary sources of coverage and concluded that HCFA’s current MSP questionnaire and data match activities successfully identified most of the beneficiaries with other coverage that the OIG was able to identify. However, it estimated that $56 million had been paid out improperly in 1997 to certain beneficiaries who had other insurance that had not been identified by HCFA’s questionnaire and data match activities. 
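The $10,000 screening rule described above amounts to a simple filter over data-match results. The sketch below (Python; the record fields and identifiers are hypothetical, not HCFA’s actual data layout) selects the matched beneficiaries for whom an employer questionnaire would still be sent.

```python
EARNINGS_FLOOR = 10_000  # below this, contractors may elect to skip the employer questionnaire


def needs_employer_questionnaire(record: dict) -> bool:
    """Screen a hypothetical SSA/IRS data-match record.

    A questionnaire is still warranted when the matched beneficiary is
    employed and earns at least the screening floor.
    """
    return record["employed"] and record["annual_earnings"] >= EARNINGS_FLOOR


# Hypothetical data-match results.
matches = [
    {"id": "A1", "employed": True,  "annual_earnings": 42_000},
    {"id": "B2", "employed": True,  "annual_earnings": 8_500},   # under the floor
    {"id": "C3", "employed": False, "annual_earnings": 0},
]

to_query = [m["id"] for m in matches if needs_employer_questionnaire(m)]
print(to_query)  # ['A1']
```

The point of the filter is workload control: questionnaires go only where other group health coverage is plausible enough to justify the follow-up.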
HCFA has contracted with a coordination-of-benefits contractor under MIP to consolidate and improve many of the MSP functions currently performed by the claims administration contractors. In addition, several claims administration contractors have taken lead roles in identifying potential MSP recoveries from nationwide class action lawsuits, such as the product liability case involving breast implants. Providers themselves are a major source of information on overpayments. Most providers make an honest effort to bill Medicare correctly, but when errors are discovered through their own internal reviews, our work showed that many providers notify their contractor. Often they send payment along with a corrected claim, so the contractor learns of the overpayment at the same time it is recovered. HCFA currently has limited information available to measure how effective the contractors are in identifying Medicare overpayments. The HHS OIG’s annual estimate of improper payments within the Medicare fee-for-service program provides some indication of national error rates, but is not designed to measure individual contractor performance. The HHS OIG’s analysis of a sample of Medicare claims for fiscal year 1999 estimates that improper payments totaled about $13.5 billion, or about 8 percent of all Medicare fee-for-service payments.The main reasons for improper payments were insufficient documentation to support a claim and lack of medical necessity for a service or procedure. While the HHS OIG estimates improper payments at the national level, the sample size does not allow HCFA to draw conclusions on contractor-specific performance. The OIG’s analysis also does not take into account that an overpayment may have been identified and recovered by the contractor during its postpayment review activities. HCFA is developing the Comprehensive Error Rate Testing program to better evaluate individual contractor performance. 
When this program is implemented, an independent firm will periodically review a random sample of claims to determine if the contractors’ payment decisions were appropriate. HCFA officials expect that this program will enable the agency to develop error rates specific to each contractor and for different types of benefits and providers. While the program will provide HCFA with additional data on contractor overpayment error rates, it is being designed as a management tool to identify problem areas. It will not identify specific claims beyond claims in the sample that were paid in error during the covered time period. The program will be implemented this fiscal year, beginning with the contractors that process and pay claims for durable medical equipment and supplies, and will include all claims administration contractors by the end of fiscal year 2002. It is too early to determine how the information generated by this program will be used to improve contractor effectiveness by restructuring overpayment identification methods. We found that the techniques used by recovery auditors were similar to those already employed by HCFA’s contractors in their postpayment review, MSP, and other program safeguard activities. While the techniques are similar, the specific application, such as what factors trigger a more detailed review, can affect how well overpayments are identified. The recovery auditing techniques most applicable to the Medicare program (data mining, diagnosis-related group (DRG) validation, cost report audits, and third-party liability reviews) are part of current postpayment review activities. HCFA’s decision to concentrate its program safeguard resources on prepayment, rather than postpayment, activities in recent years is justified given the cost-effectiveness of error prevention.
However, the result is that postpayment review activities have been reduced for some types of claims: only about 565,000 claims were subject to postpayment medical review in fiscal year 1998, compared to approximately 960,000 claims in fiscal year 1995, a drop of over 40 percent. HCFA may be missing opportunities to identify significant overpayments through postpayment activities. However, any increase in these efforts would likely require additional program safeguard funding to ensure that prepayment reviews are not decreased. Investment in evaluation of the most cost-effective postpayment review activities for identifying overpayments would be worthwhile. HCFA has limited ability to do this kind of evaluation now because it cannot measure the effectiveness of each contractor’s program safeguard activities by type of activity. The large number of computerized claims processed by Medicare lends itself to the application of data mining techniques. Data mining involves specialized software programs that analyze large volumes of claims data to identify potential overpayments. The programs typically contain specific algorithms used to identify billing errors and abusive practices, and are based on the insurer’s policies, procedures, and contractual arrangements, as well as common sense. HCFA’s claims administration contractors currently use data mining and statistical analysis as part of their postpayment review activities. Since 1993, HCFA also has contracted with a specialized statistical analysis contractor to perform large-scale analysis of durable medical equipment claims. Data mining can identify many potentially inappropriate payments, but determining which ones are actual overpayments takes additional investigation. Currently, the contractors only have the resources to investigate situations in which the data indicate potential large-scale abusive practices.
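As a minimal illustration of how such data mining edits work, the sketch below flags exact duplicate billings and claims whose billed units exceed a policy maximum. The billing codes, field names, and unit limits are invented for the example; actual contractor edits are far more extensive.

```python
# Hypothetical data-mining edits applied to paid-claims records.
# Billing codes and per-claim unit limits below are invented for illustration.
POLICY_MAX_UNITS = {"J1234": 4, "E0601": 1}  # max allowed units per claim, by code

def flag_potential_overpayments(claims):
    """Return (claim, reason) pairs flagged by two simple postpayment edits."""
    flagged = []
    seen = set()  # used to detect exact duplicate billings
    for c in claims:
        key = (c["beneficiary"], c["code"], c["date"])
        if key in seen:
            flagged.append((c, "duplicate claim"))
        seen.add(key)
        limit = POLICY_MAX_UNITS.get(c["code"])
        if limit is not None and c["units"] > limit:
            flagged.append((c, "units exceed policy maximum"))
    return flagged

claims = [
    {"beneficiary": "B1", "code": "J1234", "date": "1999-03-01", "units": 2},
    {"beneficiary": "B1", "code": "J1234", "date": "1999-03-01", "units": 2},  # duplicate
    {"beneficiary": "B2", "code": "E0601", "date": "1999-04-15", "units": 3},  # over limit
]
for claim, reason in flag_potential_overpayments(claims):
    print(claim["beneficiary"], claim["code"], reason)
```

As the text notes, each flag is only a potential overpayment; deciding whether it is an actual overpayment takes additional investigation.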
Several of HCFA’s program safeguard contractors also specialize in data mining and the manipulation of large data sets. For example, one program safeguard contractor is preparing algorithms and analyzing national data to identify potential fraud that occurred during the critical months leading to the year 2000. Another program safeguard contractor is conducting data mining activities to support development of medical policies and the early detection of fraudulent and abusive billing in three Midwestern states. Recovery auditors also use data mining to identify overpayments for their clients. For example, in 1999 a recovery auditor under contract with a state Medicaid program subjected 3 years of paid claims to its data mining edits and identified $52 million in overpayments. These overpayments were approved for collection by an independent state review board. Another recovery auditor found, through its data mining efforts, that a state Medicaid agency was paying 10 times that amount allowed for an asthma inhaler because providers were billing based on the drug’s unit dosage which represents part of a gram rather than by the gram. Medicare’s payments for hospital inpatient services are determined on the basis of the beneficiary’s diagnosis. These diagnoses are grouped for payment, with each DRG designed to reflect the bundle of services and supplies required to treat different medical conditions. Overpayments can result if a DRG reflects a more serious—and expensive—condition than the beneficiary actually had. Some recovery auditors validate DRGs for their clients, such as private insurers who have adopted Medicare’s DRG coding system for inpatient claims. According to the representatives of one PRO, while DRG validation was an area of emphasis for PROs in the 1980s and early 1990s, this activity was not a high priority in recent years for the PROs. The PROs, rather than the claims administration contractors, review hospital inpatient DRG-based claims. 
DRG validation involves verification that a provider classified a patient within the DRG code that accurately reflects the patient’s condition as described in the discharge information. According to recovery auditors, bills are sometimes miscoded because providers base their codes on the patient’s medical complaints, rather than on the physician’s diagnosis. DRG validation should be based on a patient’s principal diagnosis and procedure code information contained in the medical record. The validation is done by staff who have been trained in applying medical coding terminology to medical records information; clinical judgment is not necessarily required. HCFA’s Payment Error Prevention Program, which all PROs must undertake, recently has increased the priority they must give to reviewing hospital claims for billing accuracy as well as quality of care. However, two contractors we visited reported that they rarely, if ever, receive reports from the PROs on overpayments that the PROs have identified. Review of provider cost report financial information is a cost-effective way to identify overpayments. In 1999, cost report reviews and audits of providers not covered by a prospective payment system resulted in disallowing $2.7 billion, or about 10 percent of the total costs claimed by providers. However, as prospective payment systems replace cost-based reimbursement, fewer overpayments of this type will occur. We found that the two intermediaries we visited use cost report audits and rate reviews as the primary means of identifying overpayments for Part A providers. One contractor representative estimated a return of $13 saved for every dollar spent conducting the audits. Although the number of cost reports audited between 1995 and 1998 has increased, a large percentage of cost reports are still never audited by HCFA’s contractors.
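A minimal sketch of the DRG validation step described above: re-derive the DRG from the principal diagnosis recorded in the medical record and compare it with the DRG billed. The diagnosis-to-DRG mapping and payment amounts here are invented; real grouping uses far more inputs (procedures, complications, discharge status).

```python
# Hypothetical DRG validation: the mapping and payment amounts are invented.
DIAGNOSIS_TO_DRG = {
    "simple pneumonia": "DRG-89",
    "respiratory infection w/ complications": "DRG-79",
}
DRG_PAYMENT = {"DRG-89": 4300.00, "DRG-79": 7100.00}

def validate_drg(billed_drg, principal_diagnosis):
    """Return (is_valid, overpayment) for one inpatient claim."""
    expected = DIAGNOSIS_TO_DRG[principal_diagnosis]
    if billed_drg == expected:
        return True, 0.0
    # A claim coded to a more serious (and expensive) DRG than the
    # medical record supports produces an overpayment equal to the
    # difference in DRG payment amounts.
    return False, DRG_PAYMENT[billed_drg] - DRG_PAYMENT[expected]

# A claim upcoded to the more expensive DRG:
ok, overpaid = validate_drg("DRG-79", "simple pneumonia")
print(ok, overpaid)  # False 2800.0
```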
For example, in fiscal year 2000, HCFA expects contractors to audit only about 12 percent of the cost reports submitted by home health agencies and 25 percent of those submitted by single-facility hospitals. It can take up to 2 years after the end of the provider’s fiscal year to reach a final settlement on the provider’s costs that are allowable by Medicare. So, even though HCFA is changing its payment methods, cost report audits and rate reviews will continue to be important program safeguards for several years. HCFA’s financial auditors estimated that if all cost reports submitted by providers not under prospective payment had been fully audited, HCFA might have been able to identify an additional $600 million in fiscal year 1999 overpayments. It is not necessary to perform a complete audit to identify overpayments. For example, contractors can conduct focused reviews that examine only certain aspects of the cost report. HCFA has encouraged intermediaries to concentrate on these focused reviews. This allows the intermediaries to stretch their audit resources by concentrating on areas yielding the most return, and increases the number of providers whose records are reviewed. Some fiscal intermediaries have contracted with private firms to augment their cost report audit efforts, albeit with mixed results. For example, one intermediary we visited told us that these firms require substantial up-front training on Medicare’s rules and generally yielded a lower rate of return than the intermediary’s own internal auditors. Some recovery auditing firms specialize in focused reviews of provider financial records, such as credit balance audits. HCFA requires its institutional providers to submit quarterly credit balance reports identifying whether the provider owes Medicare money. However, although these reports are required, HCFA’s intermediaries do not routinely conduct credit balance audits outside the context of a full cost report audit.
Credit balance audits involve an on-site review of accounting and medical records, and tracing transactions through the accounting system to identify cases in which a provider has been overpaid and not returned the money to the insurer. HCFA is currently developing a statement of work for a contractor to evaluate (1) credit balance reporting policies, procedures, and practices in place at selected Medicare claims administration contractors and (2) HCFA’s oversight of those contractors’ efforts, so that the contractor can recommend improvements. HCFA’s contractors conduct postpayment MSP reviews by attempting to identify possible alternative sources of health insurance coverage with primary payment responsibility. Contractors’ ability to identify MSP debt is hampered by private insurance companies’ and employer group health plans’ unwillingness to share information about their enrollees with HCFA. GAO has long recognized that private insurance companies and employers are in the best position to routinely identify policyholders and employees who might be eligible for Medicare. Some recovery auditors have developed proprietary databases that contain insurance company enrollment information and other data that could potentially help Medicare identify beneficiaries with other health insurance. However, HCFA might not be able to access this information if the insurance companies involved were unwilling to share beneficiary data. One recovery audit firm we spoke with that specializes in third-party liability performs its work for 20 state Medicaid agencies. This organization maintains a database containing Medicaid recipient enrollment data along with enrollment data from commercial health insurance plans, Medicare contractors, and Blue Cross and Blue Shield plans. 
The firm conducts many different types of data matches that involve multiple, successively applied matches, and augments its data match techniques by reviewing employer wage files, credit bureau information, Department of Motor Vehicles data, state vital statistics files, and property records to identify possible casualty, tort, and estate sources of payment. As previously mentioned, HCFA matches Medicare data with employment and earnings data maintained by the IRS and the SSA to identify beneficiaries who may have health insurance through their employer or their spouse’s employer. However, there can be a 2-year time lag between when a beneficiary or spouse is employed and when contractors can confirm the information about employment. Information on employment is reported to IRS after the fact. IRS must then prepare the employment information, which is made available for matching with Medicare beneficiary data. Contractors then confirm it by querying employers. As a result, even with current data match activities, Medicare can have paid for claims long before another liable insurer is identified. Even more time will pass before any funds can be recouped. Commercial insurers share information with each other on their beneficiaries to determine which beneficiaries have more than one source of insurance. It is advantageous for companies to share this information because, for some beneficiaries, an insurer will be the secondary payer. If firms do not try to coordinate their benefit payments, they may both pay as the primary payer. Better access to health insurers’ beneficiary data could help HCFA identify MSP cases by providing more current data. However, because Medicare is generally the secondary payer to other insurers, it may not be to other insurers’ advantage to share beneficiary data with HCFA.
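The successively applied data matches described above can be sketched as follows. All records, identifiers, and the two-pass match order (SSN first, then name plus date of birth) are invented for illustration.

```python
# Hypothetical successive data match: find Medicare beneficiaries who also
# appear in a commercial insurer's enrollment file. All records are invented.
medicare = [
    {"ssn": "111-11-1111", "name": "A. Smith", "dob": "1930-05-02"},
    {"ssn": "222-22-2222", "name": "B. Jones", "dob": "1928-11-17"},
    {"ssn": "333-33-3333", "name": "C. Lee",   "dob": "1933-02-09"},
]
insurer = [
    {"ssn": "222-22-2222", "name": "B. Jones", "dob": "1928-11-17"},
    {"ssn": None,          "name": "C. Lee",   "dob": "1933-02-09"},  # no SSN on file
]

def match_other_coverage(medicare, insurer):
    """Return (beneficiary SSN, match method) for beneficiaries with other coverage."""
    by_ssn = {r["ssn"]: r for r in insurer if r["ssn"]}
    by_name_dob = {(r["name"], r["dob"]): r for r in insurer}
    matches = []
    for b in medicare:
        if b["ssn"] in by_ssn:                      # first pass: SSN
            matches.append((b["ssn"], "ssn"))
        elif (b["name"], b["dob"]) in by_name_dob:  # second pass: name + DOB
            matches.append((b["ssn"], "name+dob"))
    return matches

print(match_other_coverage(medicare, insurer))
# [('222-22-2222', 'ssn'), ('333-33-3333', 'name+dob')]
```

Each match identifies only a *potential* MSP case; as with HCFA's own data matches, contractors would still have to confirm the coverage before seeking recovery.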
At present, insurers are under no obligation to inform HCFA that some of their policyholders are Medicare beneficiaries unless there is a court settlement requiring such data sharing. HCFA has had to pursue certain insurance companies—some with related corporations that are Medicare contractors—in federal civil court for refusing to pay before Medicare when the government contends that Medicare should have been the secondary payer. From 1995 to 1999, HCFA reached settlements that totaled almost $66 million in cases in which a related company was a Medicare carrier or intermediary, including the national Blue Cross Blue Shield Association, Blue Cross Blue Shield of Florida, Blue Cross Blue Shield of Massachusetts, Blue Cross Blue Shield of Michigan, Transamerica, and Travelers. As a result of these legal settlements, some major insurers have agreed to share data, but some of these settlements are about to expire. While insurers can voluntarily share data on their policyholders with HCFA, few have opted to participate, thereby reducing HCFA’s ability to identify claims that are the responsibility of another insurer. If private insurers were required to share information on their policyholders, HCFA could more easily determine which Medicare beneficiaries had other health insurance. In the late 1980s, HCFA proposed but did not obtain legislation granting it access to private insurers’ policyholder data. Insurer reporting provisions were included in the President’s budget proposals for both fiscal years 2000 and 2001, but these provisions were not accepted by the Congress. Without such a requirement, the recovery auditing firm we visited told us that access to some of the private insurer and Medicaid data they used would have to be negotiated with each of the participating insurers before they could be used for Medicare. While Medicare would be likely to benefit from additional efforts to identify overpayments, further efforts may require increased funding. 
As it has sought to run the program economically, HCFA has been left with fewer and fewer dollars to pay for administering a program that has grown in volume and whose management involves increasingly complex tasks. The Congress has recognized the importance of ensuring that Medicare have adequate program safeguards in place and has developed an assured funding stream for these activities through MIP. For fiscal year 2000, $630 million was appropriated for MIP. The amounts appropriated will grow until fiscal year 2003, when funding will reach $720 million. However, funding dedicated to program safeguards in fiscal year 2000 is still about one-third less (on a per-claim basis) than it was in 1989. Based on estimates of the growth of trust fund expenditures, our analysis indicates that MIP funding, as a percentage of program dollars, will be less in 2003 than it is today. It will still represent little more than one-quarter of 1 percent of Medicare program expenditures. With this funding, HCFA and its contractors carry out a range of program safeguard activities, with an emphasis on prepayment reviews designed to prevent overpayments. If HCFA were to increase its overpayment identification activities, it likely would have to do it by curtailing other types of program safeguard activities, such as these prepayment reviews. HCFA has estimated significant returns from both its prepayment and postpayment MIP activities. While it is difficult to isolate the dollar savings attributable to a single year’s funding, based on HCFA estimates for fiscal year 1999, MIP saved the Medicare program more than $17 for each dollar spent—about 55 percent from prepayment activities and the rest from postpayment activities. Additional investment seems likely to yield additional positive returns. However, it is important that both current funding and any additional investment be spent as effectively as possible. 
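A simple arithmetic check of the savings estimate cited above; the $17-per-dollar return and the 55 percent prepayment share are the HCFA fiscal year 1999 estimates from the text.

```python
# Splitting HCFA's estimated MIP savings per dollar spent (fiscal year 1999).
savings_per_dollar = 17.0   # dollars saved per MIP dollar spent (HCFA estimate)
prepayment_share = 0.55     # about 55 percent of savings from prepayment activities

prepayment_savings = savings_per_dollar * prepayment_share
postpayment_savings = savings_per_dollar - prepayment_savings

print(round(prepayment_savings, 2))   # 9.35
print(round(postpayment_savings, 2))  # 7.65
```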
In addition, investments in these activities should not be expanded beyond levels likely to yield positive returns. Additional information on the relative effectiveness of these activities could help guide HCFA in its allocation of funds, but precisely measuring the effect of MIP funding efforts is difficult. Savings realized today may result from activities begun several years ago. For example, postpayment activities such as medical review can be used to identify program vulnerabilities that can be addressed in the future through automated prepayment edits and manual reviews. Therefore, while their immediate return on investment can be relatively low, such postpayment activities may lead to future prepayment savings. Reliable information on the relative value of specific program safeguards could help HCFA target its program integrity efforts. However, at present, HCFA lacks the detailed data that can provide the best estimates of returns from specific program safeguard activities. For example, HCFA does not have information on whether automated or manual prepayment medical reviews generate the most savings. Similarly, HCFA does not know which contractors are realizing the highest return on investment from their program safeguard activities. To remedy this information gap, HCFA will implement a new Program Integrity Management Reporting system for carriers and intermediaries in fiscal year 2001. This system will be used to collect information that HCFA expects will allow it to report savings and provide details by contractor, program activity, and provider type. The system will generate monthly reports, which should allow HCFA to more closely monitor and direct contractors’ program integrity activities. HCFA plans to audit the information input into the system to verify its accuracy.
The results of our analysis of HCFA’s accounts receivable, shown in tables 1 and 2, indicate that the amount of Medicare’s identified overpayments collected increased between fiscal years 1998 and 1999, both in total and separately for parts A and B. Specifically, collections rose from $7.5 billion in fiscal year 1998 to $8.7 billion in fiscal year 1999, a 16 percent increase. At the same time, however, the value of the new accounts receivable identified increased from $10.1 billion to $12.6 billion over the 2 years, or by 25 percent. The difference between the collection and overpayment identification rates resulted in a fiscal year 1999 ending accounts receivable balance of $7.3 billion, as over $3 billion of uncollectable accounts receivable were written off by HCFA under a special initiative to remove old debt from HCFA’s accounts receivable. Further, although not shown in the table, a large percentage of the ending accounts receivable balance each year was more than 6 months delinquent—40 percent in fiscal year 1998 and 45 percent in fiscal year 1999. HCFA’s claims administration contractors are responsible for nearly all of the collections. Contractors are generally able to collect many overpayments immediately, by offsetting them against current payments due. However, contractors’ ability to collect Medicare overpayments is affected by a number of different factors, including the type of overpayment, the promptness with which the overpayment was identified, and whether the provider is still in business and participating in Medicare. These factors are discussed below. Type of overpayment. We found that contractors were much more successful in collecting overpayments identified in cost report audits and medical reviews than those identified through MSP activities. In fiscal year 1998, for example, less than 10 percent of MSP receivables nationally were collected, versus about 62 percent of non-MSP receivables.
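The growth rates cited above follow directly from the dollar figures, as this small check shows:

```python
# Checking the percentage changes in the accounts receivable figures.
collections_fy98, collections_fy99 = 7.5, 8.7            # billions of dollars
new_receivables_fy98, new_receivables_fy99 = 10.1, 12.6  # billions of dollars

collections_growth = (collections_fy99 - collections_fy98) / collections_fy98
receivables_growth = (new_receivables_fy99 - new_receivables_fy98) / new_receivables_fy98

print(round(collections_growth * 100))   # 16 (percent)
print(round(receivables_growth * 100))   # 25 (percent)
```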
This low rate of MSP collections is not surprising for several reasons. First, contractors are only required to send one demand letter to the responsible party requesting payment of MSP overpayments, whereas up to three demand letters are sent on other types of receivables. Second, other insurers often dispute that they are responsible for payment. Third, MSP collection rates are also affected by potential conflict of interest or lack of diligence: the claims administration contractors may themselves be responsible for payment and may not be quick to collect on those receivables. For example, in 1999, HCFA hired a private accounting firm to review Medicare accounts receivable at 15 contractors. At one contractor, the firm identified 91 MSP overpayments more than 6 months old totaling $290,000 that the contractor’s private operations owed Medicare but had not yet repaid. Prompt collection of overpayments and provider participation in Medicare. Contractors have not always followed procedures that require prompt efforts to collect overpayments they identify. This affects their ability to collect what becomes aging debt. For example, home health agencies (HHA) are required to submit annual reports of their Medicare costs to the fiscal intermediaries. All cost reports then go through a settlement process that may identify overpayments. At one contractor, we identified several cases in which the contractor did not conduct timely cost report settlements. The contractor was 2 years late in settling one HHA’s cost report and, in another case, a cost report settlement that should have occurred in 1996 did not take place until late 1999. Collections are even more difficult when providers terminate their Medicare participation. For example, in a recent GAO report regarding overpayments due from 15 HHAs in Texas that closed between October 1997 and July 1999, we found that HCFA had collected $5.3 million, or about 7 percent of the $73 million due from the closed agencies.
Legal proceedings. When a provider has filed for bankruptcy, the contractor’s collection activities are subject to review and approval of the bankruptcy court. Whether any of the bankrupt provider’s Medicare overpayments are eventually collected depends on the results of the bankruptcy proceedings. Contractors’ collection abilities are also affected when providers who owe the program money are under investigation by the HHS OIG or involved in litigation with the Department of Justice. In such cases, contractors must suspend their collection efforts until resolution occurs. A substantial amount of overpayments cannot be collected because the debtors are involved in bankruptcy proceedings or litigation. According to HCFA, $845 million in overpayments cannot be collected because they are protected under bankruptcy proceedings. In addition, $147 million cannot be collected because of litigation. Debt that the contractors cannot collect is referred to HCFA for collection. However, we found that contractors do not always refer debt in a timely way. Even when debt is referred appropriately, we found that the age and type of debt HCFA receives makes it difficult to collect. In addition, HCFA’s recording and tracking systems are unreliable, further complicating collection efforts. HCFA’s low rate of collection for transferred debt is attributed in large part to the age and type of debt HCFA receives. HCFA regional office staff informed us that this debt is difficult to collect due to such factors as provider bankruptcy or closure. In addition, it is generally recognized by HCFA and contractor representatives that the longer an overpayment is outstanding, the less likely it is that it will be collected. At one of the intermediaries in our study, accounts receivable transferred to HCFA in fiscal year 1998 totaled $59.8 million; of that amount, HCFA collected $2.1 million as of June 2000—a 3.5 percent collection rate. 
When contractors are unable to offset payment or otherwise recover an overpayment from a provider, they are supposed to refer the receivable to their respective HCFA regional offices for review. If regional office staff find that the contractor has taken all appropriate collection actions, the contractor may transfer the receivable to HCFA, which then assumes responsibility for collection through its regional offices. We found that contractors were not always referring receivables appropriately to HCFA regional offices, thus preventing it from initiating its own collection activities. Although contractors are supposed to refer uncollected debt to HCFA according to time frames set out by HCFA’s regional offices, we found that this does not always happen. For example, one contractor we visited told us that referring receivables to its HCFA regional office was a relatively low priority because of the unlikelihood that HCFA would be able to collect them. As a result of untimely referral, the chances that HCFA could collect the debt are substantially reduced because the debt becomes increasingly delinquent. Another problem related to contractor referrals is inconsistent regional office guidance to contractors. We found that in one HCFA region, contractors were asked to refer receivables of less than $1,000 so that they could be written off. In another region, the referral threshold was $50, and the region made the decision whether to write off the debt or pursue collection. Within HCFA, only the regional offices have the authority to authorize the writeoff of government debt. They are allowed to exercise this authority consistent with their judgment regarding the cost-effectiveness and potential for recovery of these debts. Finally, we found that the various systems HCFA regional offices used to track and report transferred receivables were neither consistent nor reliable.
For example, one region provided us with current information on the status of each receivable; however, because many of the overpayments did not exceed $600, they were not included in HCFA’s automated tracking system and, as a result, the information had to be compiled for us manually. Another region generated detailed computer data for its Part A receivables, including information on outstanding balances, collection activities, and notes about their collection status. However, this same region could not provide any information on the status of Part B accounts receivable due to problems with its computer files. Because of problems like these, HCFA does not have reliable data on how well it collects debt transferred to it by contractors. Ineffective tracking and reporting systems may also result in receivables being recorded in error. For example, in our report on the identification and collection of overpayments from closed HHAs, we noted that a contractor made a $76.9 million keypunch error when entering overpayment information into one of HCFA’s central overpayment recording systems. Further, we noted that ineffective management of Medicare accounts receivable was found to be a consistent problem in HCFA’s financial statement audits for fiscal years 1996 through 1999. The fiscal year 1998 audit, for example, disclosed deficiencies in nearly all aspects of HCFA’s accounts receivable activity including the lack of an integrated financial management system to track overpayments and their collections, as well as inadequate procedures for ensuring that receivables were valid. HCFA’s fiscal year 1999 financial statement audit report noted that despite significant improvements, controls over accounts receivable continued to be a material weakness. HCFA has plans to replace its fragmented accounts receivable tracking and reporting systems with a single integrated one, but this will not be implemented until September 2001 at the earliest.
In March 2000, we made a number of recommendations to the HCFA Administrator to improve financial management and accountability of the Medicare program, including that HCFA develop a comprehensive financial management improvement strategy. HCFA agreed with our recommendations and committed itself to aggressively address shortcomings in its financial management of Medicare. For example, in an effort to clean up its financial records in preparation for its fiscal year 1999 financial statement audit, HCFA initiated a one-time project to write down its oldest delinquent receivables. This effort resulted in HCFA’s and its contractors’ writing off delinquent receivables that were at least 6 years old. Over $3 billion was written off by HCFA and its contractors in fiscal year 1999. Receivables less than 6 years old, however, remain on the financial records; whether they are collectable remains in question. DCIA mandates that HCFA and other federal agencies refer all eligible debt over 180 days delinquent to the Treasury or a Treasury-designated debt collection center for collection activities. In 1999, Treasury granted a waiver for HHS to continue to service certain debts, including MSP debt and debt related to unfiled cost reports, and to be designated as a debt collection center for that purpose. Within HHS, the Program Support Center is the Treasury-designated debt collection center. As such, it is responsible for attempting to collect referred debt related to MSP or unfiled cost reports, and for referring all other types of Medicare debt to the Treasury for collection.
At Treasury, collection can be attempted through Treasury’s offset program and various other debt collection tools, such as referral to private collection agencies. Although the act is now over 4 years old, HCFA has not fully implemented DCIA and will not be referring all eligible receivables until the end of fiscal year 2002, at the earliest, because of the work involved in certifying that the debt amount is correct and still outstanding. To improve management of its accounts receivable, HCFA analyzed its debt, performed a one-time writeoff of very old debt, and established two pilot projects to refer eligible debt to the Program Support Center. To address the issue of aging receivables, HCFA wrote off all debt that it determined could not be offset and that was more than 6 years old, and referred the remainder of the 6-year-old (or older) debt to the Program Support Center for collection. This was done in preparation for its annual financial statement audit. As mentioned earlier, HCFA’s objective for its two pilot projects was to develop a process for contractors to expedite the transfer of eligible Part A debt to the Program Support Center. HCFA concentrated its pilots on older, high-value Part A debt rather than newly eligible delinquent debt. One pilot deals with MSP debt valued at $5,000 or more that is up to 6 years old, while the other pilot deals with non-MSP debt, primarily related to cost report audits, of $100,000 or more. Under the pilots, contractors record eligible delinquent debt in a HCFA central office database that is used to transmit the debt to the Program Support Center for collection. Before they refer this debt to the Program Support Center, contractors must validate it by reviewing records and ensuring that the debt is still uncollected and that the debt balance is correct. Validation is necessary because HCFA’s systems for recording and tracking overpayments are unreliable.
In addition, contractors must seek to collect the receivable by issuing a demand letter that indicates that nonpayment will result in referral to the Program Support Center. HCFA plans to expand its non-MSP pilot to all of its intermediaries and carriers by October 2000; the time frame for expanding the other pilot is uncertain. However, even under its planned efforts, certain types of eligible debt will be excluded. For example, HCFA has established a $600 threshold for transferring non-MSP debt and tentatively plans to continue with its $5,000 threshold for MSP debt, even though the Program Support Center will accept any eligible debt over $25. This leaves a large amount of debt (almost half of all Part B delinquent debt, for example) with no avenue for collection beyond the contractors’ current techniques. This also means that delinquent debt newly eligible for transfer that falls below these thresholds will not qualify for referral. HCFA officials told us that they selected the $600 and $5,000 thresholds in part because they do not have the resources necessary to validate the large volume of aged, delinquent debt below these amounts. However, collection industry statistics as well as Treasury’s collection experience to date have shown that collection rates are generally higher on debts with smaller dollar balances and debts that are less delinquent. The Program Support Center and Treasury use many standard collection techniques in their debt recovery efforts, such as attempting to locate debtors that have ceased operations, issuing demand letters, and reporting information to credit bureaus. They also refer debt to the Department of Justice for litigation, and to Treasury’s offset program, where certain federal agency payments can be offset to satisfy claims. To assist them, both the Program Support Center and Treasury contract with private collection agencies that are paid contingency fees for their collection efforts. 
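The threshold policy described above amounts to a simple decision rule. The sketch below is purely illustrative; it does not represent any actual HCFA or Program Support Center system, and the function and variable names are invented. Only the dollar figures ($25, $600, and $5,000) come from the text.

```python
# Illustrative decision rule for the referral thresholds discussed above.
# Hypothetical sketch, not an actual HCFA or Program Support Center system;
# only the dollar figures come from the report.

PSC_MINIMUM = 25                                   # PSC accepts eligible debt over $25
HCFA_THRESHOLDS = {"MSP": 5000, "non-MSP": 600}    # HCFA's pilot referral floors

def referral_status(debt_type: str, amount: float) -> str:
    """Classify a delinquent debt under the thresholds described in the text."""
    if amount <= PSC_MINIMUM:
        return "below PSC minimum"
    if amount < HCFA_THRESHOLDS[debt_type]:
        # Eligible for the PSC, but excluded by HCFA's higher threshold.
        return "excluded by HCFA threshold"
    return "referred"

# A $400 non-MSP debt is PSC-eligible but falls below HCFA's $600 floor,
# so it has no avenue for collection beyond the contractor's own techniques.
print(referral_status("non-MSP", 400))   # excluded by HCFA threshold
print(referral_status("MSP", 7500))      # referred
```

The gap between the two floors ($600 or $5,000 versus the PSC's $25 minimum) is what leaves nearly half of Part B delinquent debt with no referral path.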
For example, for their collection services, the Program Support Center retains 15 percent of the amount collected, and the remainder is returned to the Medicare Trust Funds. HCFA is not charged for debt that has been transferred but cannot be collected. It is too early to evaluate the effectiveness of Treasury and the Program Support Center in collecting Medicare debt transferred to them. However, they have been able to collect some aged, delinquent debt for which HCFA and its contractors had terminated active collection action. In fiscal year 1999, HCFA transferred $341 million in delinquent debt to the Program Support Center, of which $1.8 million has been collected. We did not find that the use of recovery auditors offers potential for HCFA to improve collections of Medicare overpayments. Both the claims administration contractors and the recovery auditors we spoke with use the same basic technique of initiating collections by issuing a demand letter to providers. Demand letters provide details regarding the overpayment and request prompt payment from the provider. For providers still in the Medicare program, claims administration contractors can simply withhold funds from future payments. Other collections by both the contractors and recovery auditors depend on the debtor’s sending a check. The techniques that recovery auditors use would not provide HCFA with collection techniques that differ from those provided by the Treasury and the Program Support Center. Both already use private collection agencies to assist them in collecting delinquent debt. In addition, the Treasury can offset providers’ overpayments against certain other federal agency payments—a process not available to recovery auditors. As previously mentioned, recovery audit techniques are, for the most part, no different from the techniques currently being used in Medicare’s program safeguard activities. 
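The contingency-fee arrangement described above implies a simple split of any amount collected. The arithmetic below, using the collection figure from the text, is illustrative only:

```python
# Worked example of the Program Support Center's 15 percent contingency fee,
# applied to the collection figure cited above. Illustrative arithmetic only.

collected = 1_800_000                  # collected to date from the $341 million transferred
psc_fee = collected * 0.15             # retained by the Program Support Center
to_trust_funds = collected - psc_fee   # remainder returned to the Medicare Trust Funds

print(f"PSC fee retained:       ${psc_fee:,.0f}")        # $270,000
print(f"Returned to the funds:  ${to_trust_funds:,.0f}") # $1,530,000
```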
However, there is evidence that HCFA could do more—either in-house, or through its contractors—to identify overpayments, if it had additional resources. The concept of using recovery auditors to help HCFA achieve its program integrity goals has been the subject of controversy. Specific concerns relate to how to compensate recovery auditors, the possible administrative burden that would be placed on the current claims administration contractors and providers, responsibilities to coordinate with law enforcement agencies and to ensure beneficiary privacy, and HCFA’s ability to effectively manage this new set of contractors. These concerns are discussed below. We found that HCFA is already addressing most of these same challenges as it implements its PSC activities under MIP. These contractors, like recovery auditors, are intended to provide HCFA with new tools and capabilities to protect Medicare from overpayments. Arguably the most contentious issue regarding Medicare’s use of recovery auditing services relates to compensation. Recovery auditors are typically paid a contingency fee by private-sector clients, based on a percentage of the identified overpayments collected. One of the advantages of using recovery auditing services on a contingency fee basis is that additional appropriations are not needed to pay these organizations because the payment comes out of recoveries. However, HCFA and medical associations believe that providers view contingency fees as a “bounty” system that can damage the constructive partnership between Medicare and its providers. The contingency fee is a strong incentive to identify and collect overpayments and may lead to inappropriate identification and collection efforts by recovery auditors. A HCFA official noted that providers have raised a similar concern about the very modest reward available under HIPAA to beneficiaries who uncover fraud. 
Instead of paying recovery auditors on a contingency fee basis, HCFA could compensate them in ways similar to the ways it pays PSCs, including firm-fixed-price or cost-plus-award-fee contracts. A firm-fixed-price contract provides for a predetermined payment to the contractor. Payments are not subject to adjustment based on the contractor’s costs; in fact, there are strong incentives for the contractor to control costs. A cost-plus-award-fee contract provides for reimbursement of actual costs and can include incentives for efficiency or performance. HCFA is using each type of contract in its PSC task orders. Several of the recovery auditors we met with noted that they would accept other methods of payment besides contingency fees and thus may be agreeable to providing their services under one or more of these different types of contracts. The potential administrative burden that recovery auditors would place on current claims administration contractors and providers has also been raised as a concern, similar to the situation HCFA faces as it integrates the PSCs into Medicare’s integrity activities. HCFA and claims administration contractor representatives believe recovery auditors would likely generate additional inquiries and appeals due to providers and beneficiaries challenging overpayment decisions, thereby increasing the workload and costs of the claims administration contractors. They also expressed concern that the claims administration contractors would need to divert staff from their normal activities to provide recovery audit staff with information about the claims administration contractors’ operations, including local medical policies that define the conditions under which the contractor will pay for certain services. Recovery auditors would need to understand the local medical policies when making determinations on whether the claims were paid properly. 
Furthermore, the claims administration contractors and recovery auditors would need to share data, requiring development of coordination procedures. Providers, too, are concerned about the potential administrative burden. Representatives from two medical associations said that they were concerned that recovery auditors working in Medicare would request excessive numbers of medical records from providers, thereby adversely affecting providers’ office operations. Recovery auditor representatives told us that they do not believe they place excessive demands on providers for medical records and other information and that they are sensitive to this concern. Representatives from one recovery auditing organization noted that initial identification of physician overpayments often involves computer analysis of claims data, not a review of medical records. As for institutional providers such as hospitals, representatives of a recovery audit firm and a recovery auditor client said these providers routinely allow auditors from private insurance companies to review medical records at their facilities. Two other issues associated with hiring recovery auditors are coordination with law enforcement agencies and maintenance of patient information confidentiality. Representatives from some claims administration contractors said that their fraud units closely coordinate their investigations with the HHS OIG and Justice, and that they were concerned that recovery auditors would inadvertently hinder their investigations. Several medical specialty group representatives also raised the issue of confidentiality of patient information, and questioned how HCFA would ensure that recovery auditors did not release such information or use it for unauthorized purposes. HCFA has made some provisions for both of these concerns as it implements its program safeguard contracts. 
PSCs are required to provide the OIG, Justice, and the Federal Bureau of Investigation with information related to potential fraud cases and may also periodically meet with law enforcement agencies to coordinate ongoing work. In regard to privacy concerns, the Statement of Work requires the PSCs to comply with the Privacy Act of 1974 and applicable HHS regulations relating to information security. A final issue regarding Medicare’s use of recovery auditors concerns HCFA’s ability to effectively manage and oversee these organizations, given its other oversight responsibilities. We have reported numerous problems with HCFA’s ability to effectively manage and oversee its current claims administration contractors. Medical association representatives note that HCFA has a number of other program safeguard projects under way and have suggested that HCFA evaluate these results before undertaking any new initiatives. HCFA acknowledges that its oversight of the claims administration contractors can be improved and is taking various steps toward that goal. To ensure continual oversight of its PSCs, HCFA has already established a performance evaluation program. Because the PSCs have been operating for less than a year, HCFA has not formally evaluated any of them, but plans to assess the PSCs’ performance annually against the statement of work and applicable task order requirements. Instead of establishing a new contracting structure for recovery auditors, if their expertise is needed, HCFA could contract with them in the context of its PSC initiative. In fact, a PSC teamed with a recovery auditor we met with to bid on one of HCFA’s task orders; HCFA, however, selected another PSC to perform this work. Although Medicare’s claims administration contractors already have extensive prepayment safeguards in place to help ensure that claims are paid correctly, Medicare continues to make overpayments totaling billions of dollars annually. 
With its recent emphasis on prepayment review, HCFA is justifiably spending much of its limited program safeguard funds on identifying erroneous or improper claims before they are paid. However, thousands of these claims are paid nonetheless, and if not identified during postpayment reviews, they ultimately are lost in a program that can ill afford the financial drain on the trust funds. In an effort to tackle its outstanding overpayment problem, some have suggested that HCFA use the services of recovery auditors to supplement its program integrity activities. We attempted to identify any techniques or tools used by recovery auditors that could also benefit HCFA. While recovery audit firms we met with have achieved results on behalf of their private insurance and Medicaid clients, their techniques are similar to HCFA’s current postpayment tools. That is not to say that Medicare could not benefit from a stronger focus on postpayment activities to identify additional overpayments. While we believe that HCFA’s effort to balance prepayment and postpayment activities is sound, more could be done to safeguard Medicare if additional resources were available. Since its enactment, HIPAA has provided HCFA with increased and assured funding for program safeguards, but funding is still less than what was available in 1989 on a per-claim basis. While this funding is due to increase in the next several years, it will only keep pace with expected growth in program expenditures and amounts to little more than one-quarter of 1 percent of program expenditures. According to HCFA estimates, more than $17 is saved for every $1 invested in safeguarding Medicare through MIP. We believe that HCFA must target any new resources to the activities most likely to result in the greatest payoff. 
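To put the funding and savings figures above in rough perspective, one can combine the quarter-of-one-percent funding level with the $167 billion fee-for-service program size cited elsewhere in this report and HCFA’s greater-than-17-to-1 savings estimate. The derived totals below are a back-of-the-envelope illustration, not figures from the report:

```python
# Back-of-the-envelope illustration combining figures cited in this report:
# program safeguard funding of roughly one-quarter of 1 percent of the
# $167 billion fee-for-service program, and HCFA's estimate of more than
# $17 saved per $1 invested. The derived totals are illustrative only.

program_expenditures = 167e9                       # fee-for-service Medicare outlays
safeguard_funding = program_expenditures * 0.0025  # ~one-quarter of 1 percent
implied_savings = safeguard_funding * 17           # at HCFA's >17:1 estimate

print(f"Safeguard funding: about ${safeguard_funding / 1e6:,.1f} million")
print(f"Implied savings:   about ${implied_savings / 1e9:,.1f} billion")
```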
We understand that HCFA is developing a process to determine the return on its prepayment and postpayment activities, broken down by contractor and activity, which, when completed, should give it greater ability to perform this targeting. For its MSP activities, HCFA could more effectively identify beneficiaries who have other primary insurance coverage if insurers were required to share information on their enrollees. HCFA has had difficulty gaining insurers’ cooperation and has had to sue a number of them to enforce MSP requirements. If insurers were required to share enrollee information with HCFA, HCFA could match more current data, and in a more efficient manner, than its present data matches allow. However, such a step would require congressional action. In regard to collecting overpayments once they have been identified, we found that HCFA’s claims administration contractors do a fairly good job when providers’ current payments can be offset to collect previous overpayments. However, HCFA’s delinquent receivables are a continuing problem. The longer debt ages, the more difficult it is to collect. HCFA’s practice of offsetting overpayments with future payments gives it leverage that accounts for much of its collection success, but this option is typically not available on older debt, because the provider may no longer be in business or participating in Medicare, or may not even be locatable. We believe HCFA could collect more of its older debt if it fully implemented the DCIA. HCFA is not transferring all its eligible debt to HHS’ Program Support Center in a timely manner. Under its two pilot projects, HCFA is focusing on transferring some Part A debt of significant value that may be as old as 6 years; its plans to expand the pilots to all contractors will still exclude debt that falls below HCFA’s minimum thresholds. 
While in some cases the cost of validating debt may exceed the amount collected, this may be more applicable to very old debt that requires more extensive validation efforts than to newly eligible debt. It does not seem reasonable to wait until the end of fiscal year 2002 at the earliest to transfer all eligible debt, as HCFA plans. Rather, transferring delinquent debt as soon as it becomes eligible could result in a greater payback to Medicare; the sooner it is transferred, the more likely that Treasury or the Program Support Center will be able to collect. Treasury and the Program Support Center use collection techniques that are similar to, and may be better than, those available from recovery auditors. As a result, we did not identify any additional role for recovery auditors in Medicare’s overpayment collection activities. We do not believe that recovery auditors offer unique benefits to either identify or collect overpayments that are not available from HCFA’s current contractors and contracting arrangements. Recovery audit firms whose postpayment expertise coincides with HCFA’s needs presumably could perform their work on specific PSC task orders or for a specified contract. HCFA could gain the benefit of their expertise without changing its existing and accepted payment safeguard contracting structure. In regard to contracting with recovery auditors, providers and others have raised concerns about issues such as compensation, administrative burden, coordination with law enforcement, privacy protection, and contractor oversight. HCFA is working on addressing these same issues as it implements its PSC initiative. The Congress should consider increasing HCFA’s MIP funds to allow an expansion of postpayment and other effective program safeguard activities, and require HCFA to report on the financial returns from these and other program safeguard investments. 
Because HCFA has had difficulties gaining the cooperation of health insurers in identifying beneficiaries covered by other insurance under the Medicare Secondary Payer Program, the Congress should consider requiring all private health insurers to comply with HCFA requests for the names and identifying information of their enrolled beneficiaries. To improve overpayment identification and collection, we recommend that the Administrator of HCFA require that the effectiveness of prepayment and postpayment activities be evaluated to determine the relative benefits of various prepayment and postpayment safeguards. In addition, the Administrator should require that all debt be transferred to HHS’ Program Support Center for collection or referral to Treasury for collection as soon as it becomes delinquent and is determined to be eligible for transfer. For its current backlog of debt that is determined to be eligible, HCFA should validate and refer such debt to HHS’ Program Support Center as quickly as possible. In commenting on this report, HCFA agreed with our matters for consideration by the Congress and our recommendations to the agency (see app. I). HCFA noted that it has taken many actions to address program safeguard and management weaknesses reported by GAO and the HHS OIG. It outlined a number of activities under way to better prevent overpayments from occurring and to identify those that have occurred. In response to our matters for consideration of the Congress, HCFA stated that having additional funding under MIP would allow HCFA and its contractors to expand their range of program safeguard activities. In addition, HCFA stated that requiring insurers to comply with its requests for names and identifying information on enrolled beneficiaries would help the agency more effectively identify beneficiaries with other primary insurance coverage. 
In regard to our recommendations to HCFA, the agency agreed with our recommendation that it should evaluate the relative benefits and effectiveness of specific prepayment and postpayment tools, and indicated that a PSC contractor will begin to assist HCFA with this activity soon. Further, HCFA agreed with our recommendation to transfer all debt to the HHS Program Support Center for collection or referral to Treasury for collection as soon as it becomes delinquent and is determined to be eligible. The agency noted that it had begun the process of transferring its debt nearly 18 months ago and has hired additional staff in both its central and regional offices to focus on debt referral activities. We acknowledge HCFA’s efforts, but believe that it should broaden its focus to transferring debt as it becomes delinquent, rather than solely clearing out its backlog of aged receivables, because debt that has only recently become delinquent should be simpler to validate and is generally easier to collect. HCFA also suggested technical changes to the report, which we incorporated where appropriate. We are sending copies of this report to the Honorable Nancy-Ann Min DeParle, Administrator of the Health Care Financing Administration, appropriate congressional committees and subcommittees, and other interested parties. We will also make copies available to others on request. If you have any questions regarding this report, or if we can be of further assistance, please call me at (312) 220-7600 or Sheila Avruch at (202) 512-7277. Other major contributors to this report are listed in app. II. In addition to those listed above, Kay Daly, Robert Dee, Anna Kelley, James Kernen, Wayne Marsh, Frank Putallaz, and Suzanne Rubins made key contributions to this report. 
Pursuant to a congressional request, GAO provided information on efforts to recover Medicare's overpayments, focusing on: (1) how the Health Care Financing Administration (HCFA) and its contractors identify potential overpayments, and whether techniques used by recovery auditors would improve overpayment identification; (2) how well HCFA and its contractors collect overpayments once they are identified, and whether the services of recovery auditors would improve HCFA collection efforts; and (3) what challenges HCFA would face if it were required to hire recovery auditors to augment its overpayment identification and collection activities. GAO noted that: (1) despite HCFA's efforts to pay claims correctly in its $167 billion fee-for-service Medicare program, several billions of dollars in Medicare overpayments occur each year; (2) it is therefore critical that HCFA undertake effective postpayment activities to identify overpayments expeditiously; (3) HCFA's claims administration contractors use several postpayment techniques to identify overpayments; (4) these include medical review of claims, audits of cost reports for providers that are paid on the basis of their costs, and reviews to determine if another entity besides Medicare has primary payment responsibility; (5) the contractors identify and collect billions of dollars through these activities, but how well each contractor performs them is not clear because HCFA lacks the information it needs to measure the effectiveness of contractors' overpayment identification activities; (6) while recovery auditors may also save money for clients, such as state Medicaid agencies, by identifying overpayments, the identification techniques they use are generally similar to those already used by HCFA and its contractors; (7) this does not mean that HCFA could not benefit from a stronger focus on specific postpayment activities; (8) however, doing so may require additional program safeguard funding so as not to shift funds away from HCFA's 
other efforts, such as prepayment review to prevent overpayments; (9) Congress has given HCFA assured funding for program safeguard activities; (10) however, the funding level is about one-third less than it was in 1989 and, although it will increase until 2003, it will only keep pace with expected growth in Medicare expenditures; (11) for fiscal year 1999, based on HCFA estimates, the Medicare Integrity Program saved the Medicare program more than $17 for each dollar spent--about 55 percent from prepayment activities and the rest from postpayment activities; (12) because these activities can bring a positive return, GAO suggests that Congress consider increasing HCFA's funding to bolster its postpayment review program; (13) HCFA plans to expand its pilot projects from some to all of its claims administration contractors; and (14) however, it has established minimum thresholds for referrals for collection that are higher than the Department of the Treasury and debt collection center will accept because HCFA says that it does not have the resources needed to pursue collection on the large volume of debt below its thresholds.
RNs, along with other nursing support staff, provide care to patients on inpatient units at VAMCs. VA RNs are responsible for assessing and providing care to patients, administering medications, documenting patients’ medical conditions, analyzing test results, and operating medical equipment. To obtain an RN license, an individual must complete a nursing education program, meet state licensing requirements, and pass a nursing licensing examination. Several types of clinical and ancillary support staff assist RNs in caring for patients on inpatient units. Nursing support staff— such as licensed practical nurses (LPN) and nursing assistants (NA)— perform nursing duties such as recording patient vital signs and assisting with bathing, dressing, and personal hygiene. In addition, ancillary support staff perform housekeeping, patient transport, and food service duties. Clinical support staff—such as lab technicians—also assist RNs in their patient care duties, for example, by drawing and testing blood or performing electrocardiograms (EKG). Recognizing that developing and maintaining a strong cadre of RNs at VAMCs is vital to providing high-quality care to our nation’s veterans, the Congress and VA have both made efforts to better ensure that RN staffing levels at VAMCs are appropriate and to enhance recruiting and retention of RNs. These efforts include the following. To improve RN staffing, retention and job satisfaction, and patient outcomes, VA is actively encouraging its medical centers to take part in a nationwide program called the Magnet Recognition Program®. This program was developed by the American Nurses Credentialing Center (ANCC) to recognize health care organizations that provide nursing excellence and quality patient care. In order to attain Magnet™ status, hospitals must meet certain requirements, including requirements related to staffing practices and quality monitoring. 
Some research indicates that facilities that have attained Magnet™ status have better patient outcomes, significantly higher percentages of baccalaureate-prepared nurses, and higher nurse job satisfaction rates. As of 2008, three VAMCs have attained Magnet™ status, four VAMCs have completed the application process, and 22 VAMCs are in the process of applying for Magnet™ status. With respect to recruiting and hiring, VA has taken several actions. For example, in 2007, VA launched the VA Nursing Academy, a program designed to develop a pool of RN candidates for employment in VAMCs. Similarly, VA’s Learning Opportunities Residency program is designed to attract baccalaureate nursing students to work as RNs at VAMCs upon graduation. VA also has several other initiatives to enhance the educational preparation of its health care staff. These initiatives, which serve as recruitment and retention tools, include an education loan repayment program and scholarships for current employees pursuing degrees in nursing or other health care careers. In August 2007, a VA Recruitment Process Redesign Workgroup made recommendations to redesign the recruiting and hiring of health care practitioners within VA, including RNs. The work group analyzed VA’s hiring process and identified barriers and delays in hiring. The work group’s findings and recommendations included a timeline for nurse hiring. As discussed later in this report, VA is in the process of implementing actions recommended by this work group. To enhance recruiting and retention of RNs at VAMCs, the Congress passed legislation in 2004 authorizing two alternate work schedules for RNs employed by the VA. One of these alternate schedules allows RNs to work three 12-hour shifts that are considered a 40-hour work week for pay and benefits purposes. 
The other alternate work schedule allows RNs to work full-time for 9 months with 3 months off duty within a fiscal year and be paid 75 percent of the full-time work rate for each pay period of that fiscal year. In addition to these alternate work schedules, executive branch government agencies—including the VA—are authorized by OPM to offer flexible work schedules. A flexible work schedule is an 80-hour biweekly basic work requirement that allows an employee to determine his or her own schedule—arrival and departure times—within the limits set by the agency. The standard work schedule for full-time VA employees is ten 8-hour work days within a 2-week period. VAMC nursing officials reported that although VA RNs are required to input patient data into VA’s patient classification system (PCS), many said they cannot rely on the information generated by PCS because the system is outdated and inaccurate. Because of the shortcomings of VA’s PCS, nurse managers use various other data to help set RN staffing levels for their inpatient units, such as historical staffing levels and benchmarking RN staffing levels against inpatient units in hospitals with similar characteristics. VA is proposing action to develop a new nurse staffing system but did not provide a detailed action plan and milestones for building and implementing such a system. VAMC nursing officials we interviewed told us that VA’s PCS does not generate reliable information that would allow them to better predict the RN staffing levels required for their inpatient units. These nursing officials cited two key limitations of this information—it is outdated and inaccurate. VA headquarters officials in the Office of Nursing Services (ONS) and VA’s OIG also reported that PCS has significant limitations. In 2004, VA’s OIG also recommended that VA develop a new standardized nurse staffing methodology capable of accurate staffing estimates, and VA concurred with the OIG’s recommendation. 
According to nursing officials we interviewed, VA’s PCS was developed in the 1980s based on time and motion studies of RNs conducted over 20 years ago; as a result, the information the system produces does not account for all the tasks currently performed by RNs on inpatient units. For example, VA’s PCS does not account for certain recent RN tasks—such as the administration of certain intravenous medications or monitoring of a patient’s abnormal heart activity—that were once limited largely to intensive care units that care for sicker patients but are now performed on other inpatient units. Similarly, VA’s PCS generates estimates that do not reflect tasks associated with VA’s computerized bar code medication administration (BCMA) system that was fully implemented in 2003, more than a decade after the development of VA’s PCS. These RN tasks include tracking, monitoring, and reporting medication administration performed using the BCMA on an inpatient unit. VAMC nursing officials also told us that VA’s PCS produces inaccurate data with respect to patient acuity levels, which in turn can generate erroneous hours per patient day (HPPD) estimates. Specifically, a key piece of data VA nurses enter into the VA PCS is the acuity level for each patient on an inpatient unit. To do this, RNs use one of five PCS categories, with category 1 representing patients requiring the lowest level of care and category 5 the highest level of care. Nursing officials we interviewed at VAMCs we visited and officials with VA’s ONS reported that VA’s PCS does not accurately capture the actual acuity level of patients on inpatient units. According to VAMC nursing officials we interviewed, nursing staff at VAMCs are required to classify patients by acuity level on a daily basis using VA’s PCS. However, nursing officials reported that classifying patients by acuity level using the PCS is not a productive use of their time because the information output from the PCS is not useful for RN staffing purposes. 
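The report does not describe VA’s actual PCS algorithm. The sketch below is a hypothetical illustration of how a patient classification system of this kind can convert acuity categories into hours per patient day (HPPD) and a staffing estimate; the hour weights and function names are invented, and only the five-category scale comes from the text.

```python
# Hypothetical illustration of a patient-classification calculation. VA's
# actual PCS logic is not described in this report; the hour weights below
# are invented, and only the five acuity categories come from the text.

ACUITY_HPPD = {1: 3.0, 2: 4.5, 3: 6.0, 4: 8.0, 5: 12.0}  # invented nursing hours

def total_nursing_hours(patient_acuities):
    """Total daily nursing hours implied by a unit's census."""
    return sum(ACUITY_HPPD[a] for a in patient_acuities)

def unit_hppd(patient_acuities):
    """Average hours per patient day (HPPD) for the unit."""
    return total_nursing_hours(patient_acuities) / len(patient_acuities)

def rn_shifts_needed(patient_acuities, hours_per_shift=8):
    """Nursing shifts needed to cover one day's estimated hours."""
    return total_nursing_hours(patient_acuities) / hours_per_shift

census = [2, 2, 3, 3, 4, 5]           # acuity category for each patient on the unit
print(round(unit_hppd(census), 2))    # 6.83
print(rn_shifts_needed(census))       # 5.125
```

A system of this kind is only as good as its task weights: as the officials quoted above note, weights derived from 20-year-old time and motion studies will understate the hours that tasks such as BCMA documentation now require.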
In addition, officials with the VA OIG and ONS told us that the information contained in VA’s Computerized Patient Record System (CPRS)—concerning a patient’s illness, medical condition, and treatments—is not integrated with, or available within VA’s PCS when nurses assess and assign patients to one of the five acuity levels. VA’s ONS is proposing to convene an interdisciplinary team—consisting of headquarters and field staff—to develop a more effective RN staffing system for VA by 2012, according to VA’s Chief of Nursing Services. The Chief told us that the new RN staffing system will include a database that reflects up-to-date nursing tasks as well as information from patients’ computerized medical records, and that this database will also be used to evaluate the effectiveness of nursing care at VAMCs. The Chief also reported that as part of the new RN staffing system, VA may upgrade or replace its PCS. In developing a new or upgraded PCS, VA needs to ensure that all current nursing tasks and patient acuity are accurately captured. However, ONS did not provide a VA charter for the interdisciplinary team or a detailed action plan with specific timelines for the building, testing, and implementation of an updated system for staffing RNs on inpatient units in its VAMCs. Instead of relying on the information generated by VA’s PCS, VAMCs use various other data to determine RN staffing for inpatient units. Our survey of nurse executives coupled with our visits to VAMCs provides insights into the types of data used and shows that information used for staffing RNs varies considerably among VAMCs. Results from the survey show that nurse managers typically consider a combination of data to estimate both the number of RNs and the RN skill levels their inpatient units require. 
The survey results also show that the types of data most commonly used by inpatient unit nurse managers across VAMCs are the average number of patients typically cared for on the unit, HPPDs, the acuity level of patients on the unit, the number of RN staff historically assigned to the unit, and the ratio of RNs to patients on the unit. Nurse managers at the eight VAMCs we visited told us how they use various data to help set RN staffing levels for their inpatient units. At four VAMCs we visited, nurse managers told us that they set RN staffing levels for their inpatient units by adhering to the historical staffing levels that had been established for the units. According to these nurse managers, they inherited their RN staffing levels when they assumed their position as manager of the unit. These nurse managers told us that the staffing levels for their inpatient units were established more than a decade ago. Nurse managers at another facility we visited told us that they consider data on the number of patients on the unit, HPPDs, nurse-to-patient ratios, and historic staffing levels to estimate the RN staffing needs on their units. Nurse managers using historical RN staffing levels to set current RN levels told us that this method does not adequately match RN staffing levels to the needs of inpatient units. Nurse managers at the four VAMCs that use such historical data said that historical RN staffing levels had not matched the acuity levels of their patients, which have increased over time. The other three VAMCs we visited have attained Magnet™ status and are required to set their RN staffing levels by benchmarking them against data on RN staffing levels found in non-VA facilities that have attained Magnet™ status. VA and non-VA Magnet™ facilities are grouped for benchmarking based on inpatient units with similar characteristics. Magnet™ facility RN-staffing data are available to facilities participating in the Magnet Recognition Program®.
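The data points nurse managers described—average census, HPPD, and nurse-to-patient ratios—relate through simple arithmetic. A minimal sketch, with a census and HPPD target assumed only for illustration:

```python
import math

def rns_per_shift(census, hppd, shift_hours=8.0):
    """Estimate the RNs needed on one shift from the unit's average
    census and its target hours per patient day (HPPD)."""
    daily_hours = census * hppd                        # total nursing hours per day
    shift_share = daily_hours * (shift_hours / 24.0)   # hours falling in this shift
    return math.ceil(shift_share / shift_hours)

# A 20-patient unit targeting 6.0 HPPD needs 120 nursing hours per day,
# or 40 hours per 8-hour shift -> 5 RNs, an implied 4-to-1 patient-to-RN ratio.
staff = rns_per_shift(20, 6.0)  # 5
```

Historical staffing levels and Magnet™ benchmarking effectively substitute inherited or observed values for the HPPD target in a calculation like this one.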
The nurse executive at one Magnet™ VAMC that benchmarks told us that it had not experienced RN staffing problems, and unit nurse managers at this VAMC expressed general satisfaction with RN staffing levels. VAMC nursing officials and inpatient RNs reported that two main factors adversely impact RNs' job satisfaction and ultimately could impact VA's ability to retain RNs. First, according to these groups, some inpatient RNs are dissatisfied about spending too much time performing non-nursing duties, such as cleaning beds after a patient is discharged or answering unit telephones. Second, even though VAMCs were authorized in 2004 to offer RNs two alternate work schedules, few nurse executives reported offering these schedules; as a consequence, few RNs work these schedules. Both nursing officials and inpatient RNs working on inpatient units told us that the limited availability of flexible and alternate work schedules affects the ability of RNs to balance work and personal commitments. In addition to these two main factors, inpatient RNs cited other factors affecting retention, such as reliance on supplemental staffing strategies (for example, RN overtime) and insufficient professional development opportunities. VAMC nursing officials at five VAMCs we visited reported that RNs on inpatient units routinely perform non-nursing tasks, such as housekeeping tasks and transporting patients to other areas of the hospital for tests, because inpatient units often lack ancillary support staff or nursing support staff to perform these tasks. In 2001, we reported that job dissatisfaction because of inadequate support staff was reported to be a major contributor to retention problems in the nursing workforce. According to VAMC nurse executives we surveyed, many VAMCs did not have access to ancillary support services around the clock.
For example, as table 1 shows, the percentage of nurse executives at VAMCs who had support staff available around the clock ranged from 13 percent—for staff available to administer electrocardiograms—to 53 percent, for staff available to draw and test samples. Nursing officials at VAMCs we visited indicated that having RNs perform non-nursing tasks reduces RN job satisfaction and has caused some RNs to leave VA to accept jobs at other hospitals where RNs are required to perform fewer non-nursing tasks. Nursing officials also reported that RNs prefer to focus on providing nursing care to patients and that RNs performing non-nursing tasks could adversely affect the retention of RNs on inpatient units. Nursing officials from VAMCs and a representative of a state hospital association we interviewed cited three main factors—budgetary constraints, institutional practices, and recruiting and retention difficulties—that contributed to insufficient ancillary and nursing support staff to assist RNs on inpatient units. Budgetary constraints can delay hiring ancillary and nursing support staff mainly through hiring freezes or lags. A VA official told us that hiring freezes and hiring lags are used for budgetary reasons or to manage personnel costs. According to nursing officials, a hiring freeze may be initiated by the regional network or imposed by a single VA medical center. During a hiring freeze, inpatient units typically require authority from a VAMC resource board to fill a vacant position. Institutional practices at some VAMCs lead to some categories of ancillary and support staff being unavailable during evening or weekend shifts, resulting in the need for RNs to perform additional tasks during these shifts. For example, housekeeping staff and laboratory staff who draw blood samples are not always available. In other cases, support staff do not perform all of the tasks associated with a certain function, resulting in the need for RNs to perform the tasks.
For example, patient escort staff do not always assist in getting patients onto a stretcher or into a wheelchair for transport. Recruiting and retaining ancillary and support staff can be difficult because a limited supply of support staff can lead to competition among local hospitals. According to a representative of a state hospital association we interviewed, there is a national shortage of allied health professionals—such as NAs, clinical laboratory technicians, radiology technicians, and physical therapists—in the hospital setting that can affect the workload of RNs. VAMC nursing officials we interviewed and inpatient RNs who attended our focus groups at VAMCs reported that the limited availability and use of alternate and flexible work schedules at VAMCs limit the ability of RNs to balance their work and personal life needs and could adversely impact the retention of RNs. In 2008, we reported on the importance of making flexible work schedules available to older employees who are nearing or at retirement age, as an incentive for them to remain in the workforce. While VA received legal authority in 2004 to offer alternate work schedules to RNs, these schedules are rarely used at VAMCs. Available 2007 data from VA show that less than half of one percent of the approximately 43,000 RNs employed by VA use alternate work schedules. There is low usage mainly because inpatient units at VAMCs do not usually offer such alternate work schedules. According to our survey of nurse executives, one alternate work schedule—36 hours per week—was reported by only 1 percent of surgical, mental health, medical, polytrauma, and intensive care units, while the second alternate work schedule—working full time for 9 months with 3 months off duty—was not offered at all. Several nursing officials we interviewed noted that not offering alternate work schedules can be a deterrent to retaining RNs.
Half of all nurse executives reported that the lack of alternate and flexible schedules at their VAMC was one of the primary reasons for difficulty competing with local hospitals in recruiting and retaining RNs. VAMC nursing officials noted, however, that the ability to implement alternate work schedules at their VAMCs may be constrained by factors such as limited RN staffing. Flexible work schedules are more widely available than alternate work schedules at VAMCs. VAMCs offer several types of flexible work schedules—such as 10- and 12-hour schedules—and the availability of these flexible work schedules varies by the type of inpatient unit. Nurse executives we surveyed reported that the most frequently used flexible schedule was the 12-hour schedule, which was reported for 68 percent of medical intensive care and critical care units and 30 percent of nursing home units. Other flexible work schedules were used less frequently: for example, according to nurse executives we surveyed, the use of the 10-hour schedule was reported by 13 percent of medical units and only 1 percent of spinal cord injury units. As was the case with alternate work schedules, several nursing officials we interviewed noted that the ability to implement flexible work schedules at their VAMCs was constrained by the number of RNs available to cover the various shifts. According to VAMC nursing officials, offering flexible work schedules is an important factor in recruiting and retaining RNs. Half of VA nurse executives we surveyed reported that one of the primary reasons for the difficulty in competing with local hospitals in retaining inpatient RNs was that flexible work schedules were not offered on some units at their medical center. We were told that many private hospitals use flexible work schedules as a way to improve nurse retention.
A nursing official reported that a survey by the American Organization of Nurse Executives—a professional organization for nurse leaders and executives—found that after salary, the top benefit desired by nurses was flexible work schedules. One state hospital association representative we interviewed reported that 42 percent of the hospitals surveyed in their state offered flexible work schedules. VAMC nursing officials and inpatient RNs cited other factors that could adversely impact job satisfaction and, ultimately, the retention of RNs at VAMCs, including reliance on supplemental staffing strategies and insufficient professional development opportunities. Reliance on supplemental staffing strategies, such as RN overtime resulting from inadequate RN staffing levels on a unit, is a factor that could adversely impact RN job satisfaction and ultimately retention. For example, when there is an unplanned absence, nurse managers use supplemental staffing strategies or operate the units short-staffed. Forty-eight percent of nurse executives reported that inpatient units worked short-staffed at some point, and 41 percent of nurse executives reported that mandatory RN overtime was used as a supplemental staffing strategy. Nurse managers reported that in some instances, they get an RN to work a part of the next shift or the entire next shift, or float staff as a result of a staff vacancy, staffing shortages, or an increase in the number of inpatients on the unit. Moreover, one nurse manager reported that “burnout” can stem from a reliance on supplemental staffing strategies. According to RNs who attended our focus groups, insufficient professional development and training opportunities for inpatient RNs are RN-retention issues. For example, inpatient RNs noted that access to training and professional development activities for RNs can be limited.
For example, RNs on the evening and night shifts sometimes find it difficult to participate in professional development activities and education programs. VAMC nursing officials we surveyed and interviewed reported that delays resulting from limitations in VA’s hiring process and hiring freezes and lags at VAMCs can often discourage prospective RN candidates from seeking or following through on applications for employment at VAMCs. Although VA has recently taken steps to address some of the factors that contribute to RN hiring delays, it is too early to determine the extent to which these steps have been effective in reducing hiring delays. VAMC nurse executives we surveyed and nursing officials we interviewed identified limitations in VA’s hiring process. Nursing officials identified three areas of limitations—delays in securing necessary approvals from medical center resource boards to fill RN vacancies; poor coordination between nursing and HR officials involved in hiring; and a shortage of experienced and well-trained HR officials. Collectively, these factors result in significant delays in filling RN vacancies. We surveyed VAMC nurse executives to estimate the time it typically takes VAMCs to fill RN vacancies and found that 44 percent reported it took 45 to 80 days to fill inpatient RN vacancies at VAMCs in 2007 compared to the 24- to 45-day target timelines that VA set in 2007. One-third of nurse executives we surveyed reported that it took more than 80 days to fill RN vacancies at their VAMCs. In contrast, local hospitals usually hired RNs in less than 21 days, according to nursing officials we interviewed. Nursing officials during our site visits reported that these delays contribute to VAMCs’ losing applicants to local hospitals as well as reliance on supplemental staffing strategies to maintain RN staffing levels on inpatient units. 
Delays in gaining approval to fill RN vacancies: One factor VAMC nursing officials identified as contributing to hiring delays is the period of time during which officials wait to get approval from a VA medical center resource board—an internal board that controls the medical center’s budget and the number of authorized staff positions—to fill an RN vacancy. Poor coordination between nursing and HR officials: VAMC nursing officials identified poor coordination between nursing and HR officials as another factor that contributed to delays in filling RN vacancies. HR officials are involved in handling application paperwork, interviewing RN applicants, scheduling screening activities such as physical examinations and background checks, and verifying employment references. VA’s Recruitment Process Redesign Workgroup recently issued several recommendations aimed at improving coordination in filling RN vacancies. Poor coordination can occur when nursing officials must wait for information from HR officials before a job offer can be made to an applicant. For example, VAMC nursing officials we interviewed stated that they may have to wait a few weeks for HR officials to determine an appropriate salary estimate based on an applicant’s educational qualifications and experience, a process that must take place before a job offer can be made to an RN applicant. In our survey, about 65 percent of nurse executives cited the inability to provide a salary estimate promptly to an applicant as one of the primary reasons they lost RN applicants to competing, non-VA hospitals. Poor coordination can also occur during the pre-employment process. According to nursing officials, RN applicants may make multiple visits to the medical center for pre-employment physicals and verification of state licenses because these activities have not been coordinated into one visit for the RN applicant. 
Shortage of experienced HR officials: VA headquarters and VAMC nursing officials identified the shortage of experienced and well-trained HR officials who process RN employment applications and hiring paperwork as a factor that contributes to RN hiring delays. VAMC nursing officials we interviewed reported that VAMCs have suffered a “brain drain” of experienced HR officials through retirement or attrition. VA noted in its recent workforce succession strategic planning report that it faces a challenge caused by a “lack of trained HR staff and expertise in the area of human resources.” VA’s ability to address delays in filling RN vacancies depends, in part, on its ability to retain experienced HR officials and to recruit and train new ones. According to VA, new HR recruits must acquire a good grasp of the breadth and complexity of HR knowledge and skills required by the federal government and VA. For example, VA noted that an effective HR official must possess specific knowledge of the complex laws, rules, and regulations for more than 300 VA occupations. VAMC nurse executives we surveyed reported that hiring freezes and lags delayed the initiation of the hiring process to fill RN vacancies. Forty-four percent of VA’s nurse executives we surveyed reported that they experienced a medical center hiring freeze between 2002 and 2007. On the average, nurse executives we surveyed reported experiencing two hiring freezes during this period, and 45 percent of nurse executives reported that the hiring freezes they experienced lasted on average from 7 to 12 months. Furthermore, 67 percent of nurse executives we surveyed reported that they experienced a hiring lag—that is, a temporary delay in hiring or a recurring process intended to control expenditures by limiting hiring to a certain number of new employees in a given pay period. 
About one-third of the nurse executives we surveyed indicated that these hiring freezes contributed to delays in the hiring process, and nearly half of nurse executives reported that a lag in hiring also contributed to delays that may dissuade potential applicants. Some nursing officials we interviewed told us that once the word has spread in the local community that the VAMC has imposed a hiring freeze, the medical center has difficulty recovering from the effects of the hiring freeze. In some cases, nursing officials reported that it took up to 2 years for RNs to reapply for VA employment because some applicants were not aware that VA’s hiring freeze had ended. VA has a number of efforts under way at both the national level and at individual VAMCs to reduce shortages in its healthcare workforce, including RNs. On a nationwide basis, VA has authorized its medical centers to implement several changes recommended by the Recruitment Process Redesign Workgroup that studied recruitment and hiring at VA. These recommendations may address some of the factors that contribute to delays in filling RN vacancies. According to VA officials, these changes are designed to increase flexibility and efficiency without weakening the process of screening candidates’ professional credentials. Collectively, VA’s recent changes consist of ways to (1) complete applicant interviews and physical examinations on the same day, (2) make a job offer to an RN applicant within 30 days, (3) allow use of electronic education transcripts in lieu of paper transcripts sent through the mail, and (4) create additional HR positions to help meet VA’s future needs as its experienced HR officials retire. In addition to implementing VA’s nationwide efforts, nursing officials at eight VAMCs we visited cited a number of steps that have been taken at individual VAMCs to increase efficiency and reduce hiring delays. 
These steps include the following:

Improved communication and coordination between HR and nursing officials during the hiring process. For example, seven of the eight VAMCs we visited reported that they improved coordination and communication between nursing and HR officials involved in hiring RNs by better tracking an applicant's paperwork and coordinating other pre-employment activities, including scheduling interviews and physical examinations on the same day when possible. In addition, two of the eight VAMCs have increased their interactions through regular meetings. Nursing officials at two VAMCs we visited told us that they have implemented a program called "On the Floor in 24," an effort that allows the VAMC to bring an RN on board within 24 days by better coordinating and expediting steps in the hiring process.

Hiring RNs under temporary appointments until screening activities such as physical examinations, drug tests, and background checks are completed. Nursing officials at five VAMCs we visited told us that they have hired some RNs on a temporary appointment status until screening activities are completed. Moreover, another VAMC we visited implemented a program called "On-Demand Hiring," established by nursing and HR officials, which involves hiring an RN with the aid of a nurse recruiter. VAMC nursing officials reported that the nurse recruiter may then provide the applicant a salary range; afterwards, HR officials become involved by making a job offer to the RN applicant and proceeding with screening activities such as arranging to have fingerprints taken for a background check and scheduling a physical examination and drug tests. Seven VAMCs we visited also implemented a computerized process to expedite the verification of RNs' professional credentials.

Hiring a nurse recruiter as a contact point between RN applicants and HR officials to handle application paperwork.
Nursing officials at four VAMCs we visited reported that they have hired a nurse recruiter who will act as the focal point for coordinating with HR in posting RN vacancies and various steps in the RN hiring process. Implementing new procedures to make the hiring process more efficient. These procedures include delegating authority to sign nursing personnel actions, creating an application tracking database, and delegating authority from HR to nursing officials to give provisional salary quotes to applicants. Nursing officials at one VAMC we visited told us that before these recent changes to the hiring process, they usually had to wait for HR to provide an estimated starting salary to interested applicants who may have other job offers to consider. Table 2 summarizes the actions taken by VAMCs we visited to reduce delays in filling RN vacancies. While VA’s national and local efforts to reduce hiring delays are commendable, most are relatively recent, and it is too early to determine the extent to which these efforts will reduce RN hiring delays or whether they are sustainable. To better ensure that RN staffing levels on inpatient units in its VAMCs are adequate to provide quality care to its patients, VA must be vigilant on two fronts. First, it is important that the department expeditiously proceed with planning and implementing a nurse staffing system that accurately reflects patient needs and RN workload requirements. The inaccuracy of VA’s current PCS limits its usefulness in helping to establish adequate RN staffing levels and reveals a larger problem—VA does not have a viable system to accurately determine the RN staffing needs for inpatient units. VA recognizes the need to develop a new standardized nurse staffing system capable of accurate staffing estimates. 
Without an accurate nurse staffing system, VAMCs and nursing officials do not have good information as a basis for making sound judgments about the RN staffing needs of inpatient units and often must use supplemental RN staffing strategies to match RN staffing levels with patient care needs. Excessive use of supplemental RN staffing strategies can in turn adversely impact RN morale and job satisfaction and may lead to RN retention problems. Second, as the number of VA RNs who may consider retirement increases and the nation continues to face an RN shortage, it is important that VA maximize its ability to hire new RNs, as well as to retain RNs that it currently employs. To help address RN retention at VAMCs, VA can use OPM-authorized flexible work schedules and congressionally authorized alternate work schedules; however, VAMC nursing officials reported limited availability and use of alternate and flexible schedules. VA’s ability to maintain its RN workforce could be enhanced if it can expeditiously hire qualified applicants and offer more flexible and alternate work schedules for RNs. Further complicating VAMC RN staffing problems is that RNs often perform non-nursing tasks that lead to job dissatisfaction. Where RNs must perform non-nursing tasks because ancillary and nursing support staff are not available, it is important that nurse managers have the ability to adjust unit RN staffing levels so that nurses have adequate time to perform these duties. Developing a staffing approach that can accurately determine adequate RN staffing levels may also support VA’s ability to help RNs balance their work and personal commitments through the offering of alternate work schedules to RNs. 
To improve the ability of VAMCs to determine RN staffing levels needed for inpatient units and to recruit and retain inpatient RNs, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to implement the following three recommendations: (1) develop a detailed action plan that includes a timetable for building, testing, and implementing the new nurse staffing system; (2) ensure that the new nurse staffing system provides RN staffing estimates that accurately account for both the actual inpatient acuity levels and current nursing tasks performed on inpatient units and adequately take into account the level of ancillary and nursing support that is available on VAMC inpatient units; and (3) assess the barriers to wider availability of alternate and flexible work schedules for RNs at VAMCs and explore ways to overcome these barriers. In commenting on a draft of this report, VA concurred with our findings and recommendations and provided a description of actions that it plans to take to address our recommendations. Regarding our first recommendation—that VA develop a detailed action plan for building, testing, and implementing a new nurse staffing system—VA stated that it has long recognized the need for an automated and data-driven nurse staffing methodology and noted some of the challenges in developing and implementing such a system. VA reported that the pilot implementation of the proposed nurse staffing system is expected to be completed next year and that VA plans to implement the new system on all inpatient units by 2012. VA provided a copy of a three-phase plan for the creation, testing, and implementation of a new staffing methodology for nursing personnel on inpatient units.
While the first phase of VA's plan appears to be in process and the second phase addresses staffing in areas other than inpatient units, the third phase of VA's plan—which includes the development of an automated scheduler application and a patient acuity application—is pending approval by VA's Office of Information. This third phase in VA's plan is critical to developing an automated and data-driven nurse staffing methodology, and we would encourage VA to approve this phase of the plan as soon as possible. To address our second recommendation—that VA ensure its new nurse staffing system accurately account for factors including available ancillary and nursing support—VA stated that its new nurse staffing system will include indicators for ancillary and nursing support and that nurse staffing projections will be determined based on nurse responsibilities. Concerning our third recommendation—that VA assess the barriers to wider availability of alternate and flexible work schedules and explore ways to overcome these barriers—VA stated that it plans to convene a task force to fully assess barriers to the effective use of alternate and flexible work schedules for RNs and to identify potential solutions for overcoming these barriers. The task force will present its findings by June 2009. VA's written comments are reprinted in appendix IV. We are sending copies of this report to the Secretary of Veterans Affairs, appropriate congressional committees, and other interested parties. We will also provide copies to others upon request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs can be found on the last page of this report. GAO staff members who made major contributions to this report are listed in appendix V.
We examined the Department of Veterans Affairs' (VA) inpatient registered nurse (RN) staffing practices at VA medical centers (VAMC) and the challenges VA faces in hiring and retaining RNs. Specifically, we identified (1) how useful the information generated by VA's patient classification system (PCS) is for determining RN staffing levels on VA inpatient units, (2) key factors that nursing officials and RNs identify that adversely affect RN retention on inpatient units, and (3) factors that nursing officials identify as contributing to delays in hiring RNs to fill vacant positions. To examine how VAMCs determine the RN staffing levels needed for inpatient units, we conducted a Web-based survey of all VA nurse executives at VAMCs. A nurse executive is a member of the executive leadership team at a VAMC and is responsible for all nursing care delivered at the VAMC. The survey was sent to 140 VA nurse executives and obtained a 63 percent response rate, which allows us to generalize the results to all VA nurse executives at VAMCs. To field the survey, we contacted a VA headquarters official in the Office of Nursing Services (ONS) to obtain a list of VAMC nurse executives, and the official provided email addresses for the nurse executives. See appendix II for the results of our VA nurse executive survey. To gain further information on RN staffing, we interviewed VAMC nurse executives and VAMC inpatient unit nurse managers responsible for determining inpatient RN staffing levels at eight VAMCs we visited, located in Denver, Colorado; Houston, Texas; Minneapolis, Minnesota; New York, New York; Portland, Oregon; Seattle, Washington; Tampa, Florida; and Togus, Maine. We selected these VAMCs because of their geographic variation and to capture various types of inpatient units including medical intensive care, critical care, surgical intensive care, surgery, medicine, behavioral health, nursing home, and spinal cord injury.
The findings from these eight VAMCs we visited cannot be generalized to all VAMCs. To assess VA's PCS and the information reported by nursing officials, we reviewed the literature to identify relevant best practices in nurse staffing and the design of systems used to classify patients and interviewed representatives from state hospital associations in states where we conducted visits to VAMCs about the use of staffing methodologies and supplemental staffing strategies. To identify the factors that nursing officials and RNs identify as adversely affecting RN retention, we obtained April 2008 VA data on the number of VA RNs who use alternate work schedules. In addition, we interviewed nurse managers responsible for supervising RNs on inpatient units at the eight VAMCs we visited and conducted focus groups with inpatient RNs to get their perspectives on retention issues. The inpatient RNs in our focus groups typically deliver care at the patient bedside on inpatient units. The 219 RNs who participated in our focus groups were from three shifts (day, evening, and night) at the eight VAMCs we visited. Attendees at our focus groups included RNs of different ages, nurse experience levels, and lengths of tenure at VA. (See table 3 for a demographic profile of VA RNs who attended our focus groups.) During each focus group session, we provided RNs an opportunity to offer their opinions on a variety of issues related to their experience working at VA. For each focus group we used a series of structured questions to gain RNs' opinions on nurse staffing, recruitment, and retention issues. A summary of the responses from our focus groups is provided in appendix III. We also interviewed representatives from state hospital associations in states where we conducted visits to VAMCs about the local retention challenges affecting RNs.
To identify the factors that contribute to delays in hiring RNs to fill vacant positions, we used results from our Web-based survey of nurse executives and interviewed VA headquarters officials, human resources (HR) officials, and nurse managers who recruit RNs at the eight VAMCs we visited. In addition, we reviewed VA policies, guidance, and reports related to RN staffing, retention, and hiring issues and obtained, from VA headquarters officials, work schedule data on VA RNs who use alternate work schedules contained in VA's Personnel Accounting Integrated Data (PAID) System, which houses VA's payroll and human resources information. We also interviewed representatives from state hospital associations in states where we visited VAMCs about RN recruitment challenges. We assessed the reliability of the data obtained from our Web-based survey of VA nurse executives and from VA headquarters officials. We performed a systematic review of the completed surveys by checking each survey for problems such as key questions left unanswered, patterns of skipped questions, unclear written responses, and out-of-scope entries. The information presented in our focus group summaries accurately captures the opinions provided by inpatient RNs who attended the focus groups at the eight VAMCs we visited. However, these opinions cannot be generalized to all inpatient RNs at the eight VAMCs we visited, or to all inpatient RNs at VAMCs. We contacted VA headquarters officials responsible for VA's PAID system to gain an understanding of the completeness and accuracy of the data and whether quality checks were performed on these data. Based on this assessment, we determined that these data were adequate for our purposes. We conducted this performance audit from May 2006 through September 2008 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Selected results from our survey and a summary of responses from our focus groups are provided in appendix II and appendix III. To obtain the views of VA nurse executives on various staffing, recruitment, and retention issues, we conducted a Web-based survey of VA nurse executives employed at VAMCs. The survey contained questions on topics such as nurse executives' views on the use of supplemental staffing strategies at the VA medical center where the executives work, RN vacancies, recruitment and retention challenges, and the use of hiring freezes or lags in hiring. Some of these questions are listed below. Not all column totals add to 100 percent because of rounding, multiple answers to some questions that ask respondents to check all that apply, or no response checked by VA nurse executives for some questions.

Q1: As of March 31, 2007, which of the following services or departments at this facility employed support staff for all shifts (day, evening, and night)? Checked (percentage)

Q2: What were the effects on inpatient units of having bedside non-management RN vacancies? (percentage)
1. RN overtime increased
2. RN floated to the unit with vacancies
3. Staff on units worked with fewer staff (short staffed)
4. Number of patient beds were capped
5. Used contract/agency RNs
6. Used fee-basis RNs
7. Patients were diverted to non-VA facilities 34 (percentage)
8. RN turnover on the unit
9.

Q3: What strategies did this facility use to supplement bedside non-management RN staffing?
(percentage)

Q4: Based on your experience recruiting for bedside non-management RNs, what has been the average length of time it has taken to fill a position from the time you were authorized to recruit and fill the vacancy by an approved form SF 52 to the time an RN comes on board? Checked (percentage)

Q5: What steps did this facility take in the last 5 years to simplify or shorten the hiring process for new RNs? (percentage)
1. Improved communication to keep applicant informed about steps and paperwork required in hiring process
2. Brought in RNs under temporary appointments
3. Used VA's expedited VetPro credentialing
4. Hired a nurse recruiter
5.

Q6: Why did this facility hire bedside non-management RNs on temporary appointments? (percentage)

Q7: Did this facility experience a Veterans Integrated Service Network (VISN)-imposed or facility-imposed hiring freeze that affected its ability to hire bedside non-management RNs in the last 5 years? (During a hiring freeze units typically require authority from a resource board or other similar entity within the facility or VISN to fill a vacant RN position.) (percentage)

Q8: What was the average length of the VISN or facility imposed hiring freezes that affected your ability to hire bedside non-management RNs? Checked (percentage)

Q9: Did this facility experience a lag in hiring that affected its ability to hire bedside non-management RNs in the last 5 years? (A lag in hiring is a temporary delay in hiring RNs or can be a recurring process that delays hiring, i.e., only allowing a certain number of RNs to be brought on board per pay period or requiring vacant positions to be approved by an entity such as a resource board.) Checked (percentage)

Q10: What type of restrictions imposed during a hiring freeze or lag affected your ability to fill bedside non-management RN positions? VISN freeze (percentage), center freeze (percentage), hiring lag (percentage)
1. Recruitment for vacant RN positions needed approval by the equivalent of a resource board: 26 (16), 51 (31), 62 (38)
2. Recruitment for each RN position needed approval by the equivalent of a resource board: 31 (19), 49 (30), 61 (37)
3. Hiring for vacant RN positions deferred for a period of time: 25 (15), 38 (23), 46 (28)
4. Overtime for RNs increased: 21 (13), 38 (23), 59 (36)
5. Limits placed on recruitment for a: 18 (11), 33 (20), 36 (22)
6. Limited number of RN positions filled: 15 (9), 31 (19), 38 (23)
7. Did not recruit for certain RN: 15 (9), 28 (17), 33 (20)
8. Limits were placed on number of RNs that could be hired in a pay period: 11 (7), 23 (14), 31 (19)
9. Use of contract/agency RNs: 10 (6), 15 (9), 34 (21)
10. RNs hired under temporary: 5 (3), 13 (8), 13 (8)
11. No vacant RN positions were filled: 8 (5), 7 (4), 7 (4)
2 (1), 3 (2), 8 (5)

Q11: How difficult has it been to compete with local health care establishments in recruiting bedside non-management RNs? Checked (percentage)

Q12: What are the primary reasons for the difficulty in competing with local health care establishments in recruiting bedside non-management RNs? New Graduates (percentage), (percentage)
1. Hiring process was lengthy: 76 (57), 89 (67)
2. RNs were required to rotate shifts: 65 (49), 68 (51)
3. Unable to make position offers promptly: 65 (49), 71 (53)
4. Shortage of RNs in the local market area: 48 (36), 67 (50)
5. Salary level low compared to locality pay area: 48 (36), 63 (47)
6. Alternate or flexible work schedule was not offered on: 51 (38), 57 (43)
7. Tuition reimbursement not available at time of: 36 (27), 32 (24)
8. Recruitment incentives were not sufficient: 29 (22), 36 (27)
9. Difficult to recover from hiring freeze: 28 (21), 35 (26)
10. Reimbursement for continuing education not sufficient: 25 (19), 28 (21)
11. Determining salary was delayed by professional: 21 (16), 24 (18)
12. Recruitment incentives were not available: 27 (20), 25 (19)
13. Tuition reimbursement not sufficient: 19 (14), 17 (13)
14. Reimbursement for continuing education not available: 16 (12), 17 (13)
15. VA benefit package was not as attractive as local: 13 (10), 13 (10)
13 (10), 15 (11)

Q13: How difficult has it been to compete with local health care establishments in retaining bedside non-management RNs? Checked (percentage)

Q14: What are the primary reasons for the difficulty in competing with local health care establishments in retaining bedside non-management RNs? (percentage)

Q15: As of March 31, 2007, which of the following staffing methodologies were used to determine staffing levels for inpatient care for inpatient units at this facility? (percentage)

The table lists the questions we asked RNs at the eight VAMCs we visited and their five most frequent responses.
What attracted you to work at VA?
What keeps you working at VA?
What is not at your facility now that would keep you working here?
What contributes to the RN shortage? Lack of support for new hires; lack of support staff (i.e., RNs performing non-nursing tasks)
How would you improve recruitment and hiring? Shorten VA's hiring process; better advertisement for RN positions; increase outreach (i.e., to nursing schools)
How is staffing determined? Available full-time equivalent employees (FTEE)
What types of supplemental staffing strategies are used?

Randall B. Williamson at (202) 512-7114 or [email protected]. In addition to the contact named above, Marcia A. Mann, Assistant Director; N. Rotimi Adebonojo; Mary Ann Curran; Linda Diggs; Martha A. Fisher; Krister Friday; Susannah Bloch; and Suzanne Worth made major contributions to this report.

VA Health Care: Recruitment and Retention Challenges and Efforts to Make Salaries Competitive for Nurse Anesthetists. GAO-08-647T. Washington, D.C.: April 9, 2008.
VA Health Care: Many Medical Facilities Have Challenges in Recruiting and Retaining Nurse Anesthetists. GAO-08-56. Washington, D.C.: December 13, 2007.
Nursing Workforce: HHS Needs Methodology to Identify Facilities with a Critical Shortage of Nurses. GAO-07-492R. Washington, D.C.: April 30, 2007.
Nursing Workforce: Emerging Nurse Shortages Due to Multiple Factors. GAO-01-944. Washington, D.C.: July 10, 2001.
Registered nurses (RNs) are the largest group of health care providers employed by VA's health care system. RNs are relied on to deliver inpatient care, but VA medical centers (VAMC) face RN recruitment and retention challenges. VAMCs use a patient classification system (PCS) to determine RN staffing on inpatient units by classifying inpatients according to severity of illness to determine the amount of RN care needed. GAO reviewed VAMC inpatient units for (1) the usefulness of information generated by VA's PCS; (2) key factors that affect RN retention; and (3) factors that contribute to delays in hiring RNs. GAO performed a Web-based survey of all VAMC nurse executives; interviewed VA headquarters officials and VAMC nursing officials; and conducted RN focus groups at eight VAMCs visited by GAO. The findings of GAO's survey are generalizable to all nurse executives; however, findings from the focus groups at the eight VAMCs are not generalizable. VAMC nursing officials--nurse executives who are responsible for all nursing care at VAMCs and nurse managers who are responsible for supervising RNs on VAMC inpatient units--whom GAO interviewed reported that although VA inpatient RNs are required to input patient data into VA's PCS, they do not rely on the information generated by PCS because it is outdated and inaccurate. These nursing officials noted that VA's PCS does not accurately capture the severity of patients' illnesses or account for all the nursing tasks currently performed on inpatient units. Because of the shortcomings of VA's PCS, nurse managers use data from a variety of sources to help set RN staffing levels for their inpatient units. At four of the eight VAMCs GAO visited, nurse managers told GAO that they set RN staffing levels for their inpatient units by adhering to the historical staffing levels that had been established for the units.
Three VAMCs GAO visited set their RN staffing levels using data on the RN staffing levels found in inpatient units in other hospitals with similar characteristics. VA reported it is proposing to develop a new RN staffing system. However, VA has not developed a detailed action plan that includes a timetable for building, testing, and implementing the new nurse staffing system. VA nursing officials reported that VA's ability to retain its RNs is adversely affected by two main factors. First, inpatient RNs reported that they spend too much time performing non-nursing duties such as housekeeping and clerical tasks. Second, even though VAMCs were authorized in 2004 to offer RNs two alternate work schedules that are generally desired by nurses--such as working three 12-hour shifts within a week that would be considered full-time for pay and benefits purposes--few nurse executives reported offering these schedules; therefore, few RNs work these schedules. Specifically, according to nurse executives GAO surveyed, only about 1 percent of inpatient units offered alternate schedules and less than 1 percent of RNs actually worked these schedules. Flexible work schedules, for example, working eight 10-hour shifts over a 2-week period, are more widely available among VAMCs but are still limited, according to GAO's survey of nurse executives. Nursing officials and RNs noted other factors affecting retention, such as reliance on supplemental staffing strategies--for example, RN overtime--and insufficient professional development opportunities. Both VA nurse executives and nursing officials identified limitations in VA's process for hiring RNs and VA-imposed hiring freezes and lags as major contributing factors causing delays in hiring RNs to fill inpatient vacancies at VAMCs.
VA nursing officials reported that hiring freezes and lags at VAMCs and delays resulting from limitations in VA's hiring process can discourage prospective candidates from seeking or following through on applications for employment at these facilities. Although VA has recently taken steps to address some of the factors that are reported to contribute to RN hiring delays, it is too early to determine the extent to which these steps have been effective in reducing hiring delays.
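To illustrate the kind of acuity-based calculation a patient classification system is meant to support, the sketch below converts a unit's patient census into the number of RN shifts needed. The acuity categories, hours-per-patient-day values, and shift length are hypothetical illustrations, not VA's actual PCS parameters.

```python
from typing import Dict

# Hypothetical mapping of patient acuity level to required RN hours
# per patient day; these values are illustrative, not VA's PCS data.
HOURS_PER_PATIENT_DAY = {1: 4.0, 2: 6.5, 3: 9.0, 4: 12.0}

SHIFT_HOURS = 8.0  # assumed shift length


def rns_needed(census_by_acuity: Dict[int, int]) -> float:
    """Total RN hours needed for one day, expressed as 8-hour shifts."""
    total_hours = sum(HOURS_PER_PATIENT_DAY[level] * count
                      for level, count in census_by_acuity.items())
    return total_hours / SHIFT_HOURS


# A unit with 10 level-1, 6 level-2, and 2 level-3 patients needs
# 10*4.0 + 6*6.5 + 2*9.0 = 97 RN hours, or about 12 shifts per day.
print(rns_needed({1: 10, 2: 6, 3: 2}))
```

In practice, as the report notes, such a calculation is only as good as the acuity data entered into it, which is why nurse managers supplemented or bypassed the PCS output.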
For federal tax purposes, non-U.S. citizens are categorized as either resident or nonresident aliens and are subject to different tax and filing requirements. Generally, a nonresident alien is an individual who (1) does not possess a permanent resident card, known as a green card, and (2) has not established a substantial presence in the U.S., which is generally determined by the number of days an individual spends in the U.S. over a 3-year period, though other considerations apply. Resident aliens are generally subject to the same federal tax requirements as U.S. citizens, which include paying U.S. taxes on worldwide income and filing an individual tax return (Form 1040). On the other hand, nonresident aliens generally pay U.S. taxes only on income derived from U.S. sources and may be required to report income on the nonresident alien individual tax return (Form 1040NR). Nonresident aliens cannot take some credits and deductions available to residents and citizens. However, nonresidents may qualify for reduced tax rates or exemptions as a result of tax treaties between the U.S. and their countries of residence. Generally, the tax rate nonresident aliens are to pay varies by both the types of income earned and the individuals' countries of residence. Nonresident aliens earning income effectively connected to a U.S. trade or business, such as employee wages, are generally taxed at the same graduated rates as U.S. citizens and residents, though some tax treaties offer certain exemptions on this type of income. The U.S.'s tax treaty with China, for example, exempts from taxation certain income earned from the performance of personal services if the nonresident alien is in the country for no more than 183 days. Income not effectively connected to U.S. trade or business, such as certain types of investment income, is generally taxed at 30 percent. However, nonresidents with income such as interest payments on deposits with a U.S.
bank, or who are covered by a tax treaty may qualify for income exemptions or lower tax treaty rates. For example, residents of Mexico earning dividends from U.S. companies may qualify for a 10 percent or lower tax rate on this income instead of the flat 30 percent rate. Nonresidents with income effectively connected to a U.S. trade or business are generally required to file Form 1040NR even if they owe no taxes because of a tax treaty or deductions. Conversely, nonresident aliens not engaged in a U.S. trade or business and whose tax liability was satisfied by the withholding of tax at the source do not have to file. A filing exemption also holds for nonresidents meeting certain other criteria, such as the following. Nonresidents whose only U.S.-source income is wages in an amount less than the personal exemption amount—$3,650 for tax year 2009— do not have to file if they have no other need to file, such as to claim tax treaty benefits or a refund. Income of $3,000 or less paid by foreign employers for personal services performed in the U.S. is not considered to be from U.S. sources for nonresidents in the country for 90 days or less. An individual with only this type of U.S.-source income would not need to file a return. This $3,000 threshold has not changed since its inception in 1936 and would equate to over $46,000 in 2009 dollars if adjusted for inflation. Table 1 lists examples of nonresident aliens earning U.S.-source income and the potential tax treatment in each scenario. As with U.S. citizens and residents, nonresidents must have a taxpayer identification number in order to file a tax return. Foreign individuals authorized to work in the U.S., such as individuals traveling on a nonimmigrant temporary worker visa, must apply for a Social Security number (SSN). Individuals who do not qualify for a SSN but have a valid filing requirement under the Internal Revenue Code may apply to IRS for an individual tax identification number (ITIN). 
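The general residency determination described earlier can be sketched in code. This is a simplified version of the weighted day-count ("substantial presence") formula in IRS Publication 519: all current-year days, one-third of prior-year days, and one-sixth of second-prior-year days are compared against a 183-day threshold, with a 31-day current-year minimum. Real determinations involve exceptions, such as exempt individuals and closer-connection claims, that are not modeled here.

```python
# Simplified substantial presence test (per the general rule in IRS
# Publication 519); exceptions and special categories are not modeled.

def meets_substantial_presence(days_current: int, days_prior: int,
                               days_second_prior: int) -> bool:
    """Return True if the weighted 3-year day count meets the test."""
    if days_current < 31:  # minimum current-year presence required
        return False
    weighted = days_current + days_prior / 3 + days_second_prior / 6
    return weighted >= 183


# A visitor present 120 days in each of the last three years has
# 120 + 40 + 20 = 180 weighted days, just short of the 183-day mark.
print(meets_substantial_presence(120, 120, 120))  # False
```

An individual who fails this test and holds no green card would generally be a nonresident alien, subject to the filing rules described above.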
For example, certain short-term foreign business visitors earning wages from foreign employers while in the U.S. and foreign investors would generally apply for an ITIN. Tax law also requires that both resident and nonresident aliens obtain a certificate of compliance, known as a sailing permit, to ensure that their outstanding U.S. tax obligations have been satisfied prior to departing the country. First enacted in 1921, the requirement stipulates that most aliens permitted to work in the U.S. must visit an IRS office 2 weeks to 30 days prior to departing the country, provide documentation to support any claims of taxable income and prior tax payments made, and complete either a Form 1040-C (U.S. Departing Alien Income Tax Return) or Form 2063 (U.S. Departing Alien Income Tax Statement). An alien is to file Form 1040-C to report all income received or expected to be received during the tax year and generally is to pay any outstanding U.S. tax liability at the time the form is filed. Form 2063 is to be filed when the departing alien has no taxable income for the tax year or when tax collection will not be hindered by the alien's departure from the country. Certain frequent travelers between the U.S. and Mexico or Canada, alien students and exchange visitors, and visitors for business admitted on a class B-1 or B-1/B-2 visa with no taxable income and in the country for no more than 90 days are generally exempted from the sailing permit requirement. Finally, entities making income payments to nonresidents are required to withhold taxes at either graduated or fixed rates, depending on the type of income earned, except when the payer can verify the individuals are entitled to an exemption. For example, a nonresident alien earning wages from a U.S. employer would generally be subject to graduated withholding in a manner similar to that of U.S. citizens and residents.
On the other hand, a financial institution disbursing U.S.-source investment income to a foreign-based individual would generally withhold at a fixed 30 percent rate, unless the entity could verify that the nonresident was entitled to a reduced treaty rate. In both of these examples, the employer and financial institution are required to report income payments and withholding to IRS on information returns, such as Form W-2 (Wage and Tax Statement) or Form 1042-S (Foreign Person’s U.S.-source Income Subject to Withholding). In certain circumstances with nonresident alien athletes and entertainers, IRS enters into arrangements that set withholding rates for income earned from specific events, often at less than the 30 percent otherwise required. These arrangements, called Central Withholding Agreements, specify the amount and timing of U.S. tax payments and take into account expenses associated with the income earnings. According to IRS data, nonresident alien individuals filed about 634,000 Forms 1040NR for tax year 2007, a small number compared to the 143 million Forms 1040 other individual taxpayers filed for that year. These nonresident filers reported $12.8 billion in income, resulting in a $2.5 billion tax liability. The number of Form 1040NR filers varied little from 2003 to 2007, the latest years for which data were available. However, total income and total tax liability reported increased during this period, as shown in table 2. Total income and tax liability reported on Form 1040NR increased by 64 percent and 71 percent, respectively, compared to increases in reported income (40 percent) and tax liability (48 percent) reported on Form 1040 from tax year 2003 to tax year 2007. 
The $5 billion increase in total income reported on Forms 1040NR for this period is largely due to increases among higher earners, since the total income that nonresidents with $100,000 or more in income reported on Form 1040NR increased from $3.8 billion to $8.1 billion (111 percent). Form 1040NR filing data do not represent the full population of nonresident alien taxpayers, however. Certain foreign investors earning U.S.-source investment income with sufficient taxes withheld at the source, for example, are not required to file Form 1040NR. Also, nonresidents married to U.S. citizens or residents can choose to be treated as residents and jointly file Form 1040 with their spouses. Other nonresident aliens may incorrectly file Form 1040, meaning their tax return information is not reflected in the Form 1040NR data. IRS data also allow for comparison of nonresident alien filing characteristics to those of U.S. citizen and resident filers, as shown in table 3. As shown in table 3, 53 percent of Form 1040NR filers reported no tax liability for tax year 2007, in contrast to an estimated 25 percent of Form 1040 filers. Some nonresidents qualify for tax treaty income exemptions which may contribute to the higher proportion of Form 1040NR filers with no tax liability. Requiring a nonresident with no tax liability to file a U.S. return creates some burden on the taxpayer, yet there are reasons why it may be beneficial. For example, for individuals filing exclusively to claim a treaty exemption, IRS may use that information to review and potentially dispute claims. Additionally, some nonresidents may not know if they have a tax liability until they go through the process of preparing a tax return. Also as shown in table 3, a smaller percentage of Forms 1040NR than Forms 1040 reported a tax balance or refund due for tax year 2007. These differences could be due to various factors, such as some nonresidents having no tax liability as a result of tax treaties. 
Also, a greater proportion of Forms 1040NR than Forms 1040 were prepared by a paid tax return preparer, a disparity which may be due to several factors, such as the complexity of nonresident tax law and that some employers with employees traveling internationally may hire tax professionals to assist in preparing employees’ returns. Figure 1 below shows that a small proportion of filers accounted for the majority of reported tax liability. For example, about 20,000 filers (3 percent of all Form 1040NR filers) reported over $100,000 in total income, yet this population contributed 76 percent ($1.9 billion of $2.5 billion) of reported tax liability reported for tax year 2007. Conversely, 72 percent of nonresidents reported $10,000 or less in income, with these returns accounting for 1 percent of all reported tax liability ($29 million out of $2.5 billion). One reason why most nonresidents reported low income amounts might be that some are in the country for only part of the tax year. IRS has not developed estimates for three types of nonresident alien tax noncompliance: (1) failing to file a tax return, known as nonfiling, (2) underreporting income on filed returns, and (3) filing Form 1040 instead of Form 1040NR. IRS has developed an estimate of overall individual taxpayer nonfiling by comparing general population information from the U.S. Census Bureau’s Current Population Survey to individual income tax filing data, and matching data taxpayers report on tax returns to that which third parties report on information returns, such as wage and tax statements from employers. However, according to IRS research officials, it is not possible for IRS to parse out the nonresident portion of its nonfiling estimate because the agency lacks the information necessary to distinguish between nonresident alien and other nonfilers. Also, census data exclude many short-term nonresident visitors. 
IRS has excluded Form 1040NR returns from its studies of individual taxpayer underreporting, which it uses to estimate the tax gap. Those studies rely partially on face-to-face examinations with individual taxpayers. Sampling Form 1040NR filers in these studies would have been costly and difficult since many nonresident aliens would have departed the country by the time IRS examined the returns, according to IRS research officials. Given limited agency resources, IRS has focused its compliance measurement efforts on types of taxpayers that may represent greater compliance risks. Total lost tax revenues associated with nonresident noncompliance, for example, may be modest when compared with underreporting for other areas, such as individual income tax, employment taxes, or entities such as S corporations. Additionally, IRS has not estimated the extent to which nonresidents improperly file Form 1040 instead of Form 1040NR. This is partly because sampling and examining Form 1040 filers to identify nonresidents would be time-consuming and costly, given the large number of Form 1040 filers and the likelihood that nonresidents will have already departed the country. Generating a rough estimate of the number of nonresident aliens who may have a filing requirement using data from other federal agencies would be challenging. The Department of Homeland Security (DHS) reported admitting 9.7 million visitors for purposes other than pleasure to the U.S. in 2007, while the Department of State reported issuing 6.4 million nonimmigrant visas in the same year. Yet neither figure serves as a reliable proxy for the number of nonresident aliens entering the country for employment or business purposes, much less incurring a filing obligation. DHS's data reflect the number of entries into the U.S. rather than the number of individuals, thus overcounting individuals making multiple trips to the U.S.
State data reflect the number of visas issued, but some were issued for strictly leisure purposes, some visa recipients never enter the U.S., and others may enter the U.S. and stay for a period of time sufficient to establish tax residency. Even with an estimate of the number of nonresident aliens entering the U.S. each year, it would be difficult to further determine the number incurring a tax liability. Some individuals may not earn sufficient income to prompt a filing requirement and others may be noncompliant with the filing requirement but not owe U.S. taxes because of tax treaty benefits. IRS’s outreach efforts have focused on presenting information on nonresident tax issues to a variety of audiences. In 2009, IRS began conducting seminars and workshops for tax practitioners on nonresident alien tax issues and Form 1040NR at its Nationwide Tax Forums. IRS also conducted two phone forums in 2008 on federal tax withholding, for nonresident alien athletes and entertainers, and Central Withholding Agreements. IRS has also presented annually to groups such as the American Payroll Association and National Association of College and University Business Officers and has presented periodically to the American Bar Association, Tax Executives Institute, and local attorney and certified public accountant groups. Regarding nonresident aliens, these presentations covered a wide array of topics, including tax residency rules, income sourcing rules, tax treaty issues, descriptions of which forms to file, and guidance on withholding on payments to foreign individuals. Additionally, IRS employees at foreign posts are available to provide guidance to nonresidents, although these posts generally are staffed by few employees, making outreach difficult. IRS has held preliminary discussions with the Department of State and the U.S. 
Citizenship and Immigration Service about having links to information on IRS's Web site on nonresident alien tax requirements included on sections of those agencies' Web sites that cover visa applications and requirements. IRS and the Department of State have discussed incorporating tax information within visa application materials. However, according to an IRS official involved with this effort, State was not inclined to produce this material because of the cost involved and because the agency did not want to be perceived as providing guidance on tax matters. According to IRS compliance officials, IRS does not engage in outreach to tax software providers on nonresident alien tax issues, primarily because Form 1040NR currently cannot be filed electronically, as discussed later. Software providers could conceivably insert a question in their Form 1040 preparation programs inquiring if the user is a citizen, resident, or nonresident alien. According to IRS officials, IRS is assessing the feasibility and cost-effectiveness of setting up a toll-free number that individuals can call from outside of the U.S. to receive tax assistance. Currently, IRS tax assistance toll-free numbers cannot be called from outside of the U.S. IRS also produces various publications containing information relevant to nonresident aliens and includes information on nonresident alien tax issues on its Web site. According to representatives from groups that work with employers and nonresidents to assist them in fulfilling their tax obligations, nonresident aliens face challenges in fulfilling their tax filing obligations. For example, despite IRS's outreach and education efforts, some nonresidents and their employers may not be aware of the nonresident alien tax rules. Although nonresidents earning wages from U.S. employers would likely know that they had taxes withheld from their wages, they may not know they also have to file a tax return or which return to file.
Likewise, foreign individuals in the U.S. for short-term business trips may be unaware that they have a filing requirement given that comparable requirements may not exist in their countries of residence. For example, in Canada, nonresidents generally do not have to file a tax return if they owe no Canadian tax. Also, some paid tax return preparers may not be familiar with nonresident alien tax rules. Representatives from groups we spoke with thought that unlicensed preparers in particular might not be familiar with the nonresident alien tax rules. Likewise, aspects of nonresident alien taxation, such as tax residency rules, determining whether income is effectively connected to a U.S. trade or business or is U.S.- or foreign-source, and applying tax treaty provisions, can be difficult for nonresidents to understand. For example, it can be challenging to answer the basic question of whether or not a foreign person is a nonresident or resident alien. Beyond the green card and substantial presence tests, noncitizen taxpayers or their practitioners need to consider various scenarios in making residency determinations. For example, individuals who would otherwise be treated as residents can file as nonresidents if they have a closer connection to a foreign country. It is also possible to be both a nonresident alien and a resident alien in the same tax year, and different rules apply for the part of the year an individual is a nonresident alien and the part of the year the individual is a resident alien. Although no single rule may be difficult to apply, the fact that numerous rules need to be considered can make the residency determination a difficult and time-consuming one, according to representatives from groups that work with employers and nonresidents to assist them in fulfilling their tax obligations. The inability of nonresidents to file Form 1040NR electronically is another challenge the groups we interviewed mentioned.
Currently, IRS does not allow for electronic filing of Form 1040NR because it contains fields that cannot easily be transcribed into an electronic format. IRS redesigned Form 1040NR for tax year 2009, in part to address this problem. However, it does not plan to accommodate electronic filing of the form until at least 2014. Another set of challenges that groups we interviewed identified concerned obtaining ITINs, as discussed below. ITIN applicants need to submit large amounts of documentation to IRS, some of which must be certified by going to a U.S. embassy, which can be time-consuming. Some applicants need to prove that they cannot obtain a SSN before they can be assigned an ITIN and some nonresidents apply for a SSN just to get the rejection letter so they can then apply for an ITIN. Some nonresidents unable to obtain an ITIN prior to departing the country may end up not filing a return, even if owed a refund. Whether or not they persist in the process to obtain an ITIN may depend on whether or not the individuals anticipate subsequent U.S. taxable activities. Finally, some groups noted it is a burden for nonresidents paid by foreign employers who take short business trips to the U.S. to file the required Form 1040NR. As previously discussed, personal service income of $3,000 or less paid to a nonresident by a foreign employer is not considered to be from U.S. sources for individuals in the U.S. 90 days or less who have no other compensation for services within the U.S. Congress established this threshold in 1936 to permit foreign residents to visit the U.S. for business purposes without being subject to taxes on the compensation they earn while in the U.S. In discussing this exemption, the Senate noted that the lack of a threshold had created ill will disproportionate to the small amount of revenue raised by taxing foreign residents making short business trips to the U.S.
Because the $3,000 threshold has not increased since 1936, it is likely that a greater proportion of nonresident aliens have a filing requirement today than when the threshold was established. For example, in 1936, $3,000 was 559 percent of the U.S. per capita personal income amount of $537. In 2008, $3,000 represented 8 percent of the U.S. per capita personal income amount of $39,751. Likewise, a nonresident would need to earn an annual salary of $12,133 to exceed the $3,000 threshold during a 90-day period, assuming the individual had no other U.S.-source income. A salary of $12,133 in 1936 is equivalent to $187,938 in 2008 dollars. A nonresident earning $187,938 in 2008 would need to be in the U.S. for only 5 days for business purposes to trigger a filing requirement, if the individual earned no other U.S.-source income. This increased reach of the filing requirement is underscored by the advance of economic globalization and increase in business travel since the threshold was established. Some groups we spoke with suggested raising the $3,000 threshold to reduce the burden of filing tax returns on nonresidents who make short business trips to the U.S. and are paid by foreign employers. In evaluating whether to increase the threshold, either by the level of inflation since 1936 or another amount, various issues warrant consideration. For example, although the current filing requirement may be applicable to a broader population of nonresident aliens than in 1936, many nonresidents who are required to file may ultimately owe reduced or no taxes because of the tax treaties the U.S. has adopted. According to DHS data, at least 78 percent of admissions to the U.S. in fiscal year 2007 were of individuals residing in countries with which the U.S. has tax treaties. Also, raising the exemption amount could negatively affect U.S. residents if they do not receive reciprocal exemptions on income otherwise subject to tax in countries with which the U.S. has tax treaties. 
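The erosion of the threshold described above is simple to verify arithmetically, using the per capita personal income figures cited in the text:

```python
# Arithmetic behind the threshold comparison, using the per capita
# personal income figures cited in the text.
threshold = 3_000

per_capita_1936 = 537     # U.S. per capita personal income, 1936
per_capita_2008 = 39_751  # U.S. per capita personal income, 2008

pct_1936 = threshold / per_capita_1936 * 100
pct_2008 = threshold / per_capita_2008 * 100

print(round(pct_1936), round(pct_2008))  # 559 8
```

The fixed dollar amount that once exceeded five times per capita income now represents well under a tenth of it, which is the sense in which the filing requirement reaches a far broader population than in 1936.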
Raising the threshold amount could result in lost tax revenue. For example, IRS calculated that Form 1040NR filers who had income from personal services in an amount of $40,000 or less reported $222.1 million in tax liability for tax year 2007. This amount represented about 9 percent of the total tax liability reported on Form 1040NR for that year. If the threshold had been set at $40,000 for tax year 2007, which is slightly less than the value of $3,000 in 1936 dollars inflated to 2007 dollars, the $222.1 million could have been exempt and not paid. However, it is not likely that all of that amount would have been exempt because some of the nonresidents with personal services income of $40,000 or less could have been paid by a U.S. employer or could have been in the U.S. for more than 90 days, and therefore would not have been entitled to the exemption. Also, some of that tax amount may be attributed to other types of income. Although increasing the exemption threshold would likely result in reduced tax revenue, it would also likely result in reduced burden and cost savings for some nonresidents and IRS. Some taxpayers would no longer bear the burden or cost of obtaining an ITIN and filing a return. IRS would likely realize cost savings from having to process fewer ITIN applications and Forms 1040NR. IRS has expanded its enforcement efforts over the past decade. In 2001, IRS had two examiners who covered nonresident alien compliance issues. Currently, IRS’s Large and Mid-sized Business division’s International Compliance Strategy and Policy group (LMSB International) has 261 examiners dedicated to international compliance issues, including nonresident alien tax compliance. LMSB International plans to hire an additional 202 examiners during fiscal year 2010. LMSB International has generally conducted face-to-face examinations of nonresident aliens through special projects that focus on particular types of taxpayers. 
For example, LMSB International has examined individuals employed by foreign embassies or consulates and international organizations in the U.S. Although U.S.-source income paid to nonresident employees of foreign governments and international organizations may be exempt from federal income tax, the exemption depends on the tax treaty or consular convention between the U.S. and the relevant foreign governments or other U.S. tax laws. Also, employees of foreign governments and international organizations are generally considered nonresidents regardless of how long they are in the U.S. LMSB International found that some individuals were claiming income exemptions to which they were not entitled or filed Form 1040 instead of Form 1040NR. For this project, IRS first contacted potentially noncompliant individuals and allowed them to voluntarily correct any noncompliance. IRS assessed $32.0 million in taxes for 4,540 taxpayers who voluntarily settled with IRS from fiscal year 2007 through the end of January 2010, for an average of $7,049 per settlement. IRS then examined 3,720 taxpayers who did not voluntarily settle with IRS, assessing $21.8 million in taxes, for an average of $5,851 per examination. LMSB International is continuing these examinations. Building face-to-face examination cases for nonresidents is resource intensive. For example, preparing for and conducting the examinations of employees of foreign embassies and consulates and international organizations took up nearly all of LMSB International’s resources that were dedicated to nonresident alien enforcement. LMSB International used State Department visa information to identify the nonresidents it contacted and examined. However, it is difficult and time consuming for IRS to use visa information to identify corresponding tax returns because visas do not include SSNs or ITINs, which are the unique identifiers included on tax returns that IRS uses to build examination cases. 
LMSB International is planning on using examiners it expects to hire in fiscal year 2010 to conduct additional enforcement actions against nonresidents that would be less time consuming and complex than face-to-face examinations. For example, IRS may examine potentially noncompliant nonresidents through correspondence, according to an LMSB International official. Likewise, through its Automated Underreporter program (AUR), IRS has begun to match information taxpayers report on Forms 1040NR to information third parties report to IRS to identify nonresident alien taxpayers who may have underreported their income. IRS previously concluded, through a test, that matching income items from Form 1040NR, such as wages, was not a prudent use of resources. IRS found that many of the tax returns it studied claimed tax treaty benefits, which can be time consuming to verify and can require expertise to evaluate that IRS AUR staff generally did not possess. However, given that LMSB International is planning to hire additional staff, it may be able to examine nonresident alien taxpayers whom it identifies as potentially noncompliant through AUR, according to the official. IRS has a broad program to identify taxpayers who failed to file a required tax return, including those who should have filed Form 1040NR. The program only identifies whether individuals may have failed to file a tax return and cannot easily identify which form they should have filed (i.e., Form 1040 versus Form 1040NR). IRS may be able to identify during an examination that a nonfiler should have filed Form 1040NR. IRS does not have a program to automatically identify taxpayers who may have improperly filed Form 1040 instead of Form 1040NR. According to an LMSB International official familiar with examinations of nonresidents, IRS has found that some nonresidents improperly file Form 1040 instead of Form 1040NR.
The official told us that nonresidents filing the wrong tax return presents a greater compliance risk than nonresidents failing to file a tax return altogether because withholding is required for most nonresidents earning U.S.-source income regardless of whether they file a tax return. Also, other nonresidents who do not file and for whom taxes are not withheld, such as those working for foreign employers, may not have tax liabilities because of tax treaty benefits. On the other hand, nonresidents who file Form 1040 instead of Form 1040NR may claim credits or take deductions to which they are not entitled, which may lead to reduced tax revenue. IRS may be able to systematically identify nonresidents who improperly file Form 1040 instead of 1040NR. As previously discussed, nonresidents must obtain an ITIN to file a tax return if they do not meet the requirements to obtain a SSN. IRS can identify Forms 1040 filed using ITINs. IRS can also identify Forms 1040 filed jointly by married individuals that included both an ITIN and a SSN, as nonresidents married to U.S. citizens or residents can choose to be treated as residents and file Form 1040 jointly with their spouses. IRS may also be able to use information from ITIN applications (Form W-7) to further refine the identification of taxpayers who may have filed the wrong tax return because ITIN applicants indicate if they are resident or nonresident aliens, or a spouse or dependent of either, on that form. LMSB International officials told us that IRS may be able to effectively use such a filtering process in its enforcement efforts. As previously discussed, LMSB International is planning on initiating additional enforcement actions against nonresidents that would be less time consuming and complex than the face-to-face examinations it has traditionally conducted. 
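One way the identification process just described could work in practice is sketched below. This is a hypothetical illustration only: the record layout and field names (`tin_type`, `joint_with_ssn`, the Form W-7 status field) are invented for this sketch and do not describe any actual IRS system.

```python
# Hypothetical sketch of filtering Forms 1040 for possible misfiling by
# nonresident aliens. Record fields and sample data are invented.
def flag_possible_misfilers(returns, w7_applications):
    """Flag Forms 1040 filed under an ITIN whose Form W-7 application
    declared nonresident alien status (a Form 1040NR may have been required)."""
    # Map each ITIN to the status declared on its Form W-7 application.
    declared_status = {app["itin"]: app["status"] for app in w7_applications}
    flagged = []
    for ret in returns:
        if ret["form"] != "1040":
            continue  # only Forms 1040 are candidates for this filter
        if ret["tin_type"] != "ITIN":
            continue  # SSN filers are outside the scope of this check
        # A joint return pairing an ITIN with a spouse's SSN may be proper:
        # a nonresident married to a U.S. citizen or resident can elect to
        # file Form 1040 jointly, so exclude those from the flag.
        if ret.get("joint_with_ssn"):
            continue
        if declared_status.get(ret["tin"]) == "nonresident alien":
            flagged.append(ret["tin"])
    return flagged

returns = [
    {"tin": "9XX-70-0001", "tin_type": "ITIN", "form": "1040"},
    {"tin": "9XX-70-0002", "tin_type": "ITIN", "form": "1040",
     "joint_with_ssn": True},
]
w7 = [{"itin": "9XX-70-0001", "status": "nonresident alien"},
      {"itin": "9XX-70-0002", "status": "nonresident alien"}]
print(flag_possible_misfilers(returns, w7))  # ['9XX-70-0001']
```

The point of the sketch is that the filter runs entirely on data IRS already holds, which is why an automated screen could be far cheaper than building cases from visa records.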
The officials told us that if IRS were able to identify nonresidents who may have improperly filed Form 1040 instead of Form 1040NR, IRS could examine some of those individuals through correspondence, for example those who took large deductions that would not be allowed when filing Form 1040NR. Likewise, IRS could review Forms 1040 filed jointly by a married couple where one filer used an ITIN to ensure that the return included the couple's worldwide income, and not just their U.S.-source income, as is required by U.S. tax law. The officials told us that it would be worthwhile to test the identification process to determine the size of the potential examination inventory and the cost-effectiveness of working on these examination cases. In 2008, LMSB designated reporting and withholding on U.S. income paid to foreign individuals as a high-priority issue. U.S. persons or entities who make payments of certain types of U.S.-source income to nonresidents generally must withhold tax at a rate of 30 percent on such payments, unless there are applicable tax treaty provisions allowing for a reduced rate. Such payments are generally subject to reporting on Form 1042 (Annual Withholding Tax Return for U.S. Source Income of Foreign Persons) and Form 1042-S (Foreign Person's U.S. Source Income Subject to Withholding). The person or entity making these payments—generally referred to as a U.S. withholding agent—is responsible for the withholding and reporting. IRS's focus for this issue is on the compliance of U.S. withholding agents with regard to these reporting and withholding responsibilities. The impetus behind designating U.S.-source income reporting and withholding as a priority issue was two-fold, according to an LMSB International official. First, in September 2008, the Permanent Subcommittee on Investigations of the Senate Committee on Homeland Security and Government Affairs issued a report on actions foreign individuals take to avoid payment of taxes on U.S.
stock dividends. The report brought attention to the problem of withholding agents not reporting and withholding proper amounts of tax. Second, IRS historically had not taken actions to enforce compliance with the requirements for reporting and withholding on payments to nonresidents. The U.S.-source income reporting and withholding initiative is made up of three components, according to LMSB International officials. First, IRS is attempting to address intermediaries' (e.g., hedge funds and other financial institutions) marketing of aggressive tax positions, such as through instruments like total return swaps, which may allow taxpayers to avoid taxation on income that would otherwise be taxed at 30 percent. Second, IRS has begun to match filed Forms 1042-S to Forms 1040 or 1040NR to determine if taxpayers are underreporting income. Third, IRS has initiated a number of compliance projects. LMSB International has started a marketing campaign within IRS to increase focus on withholding agent compliance by, for example, encouraging examiners to look for taxpayers with foreign addresses when reviewing businesses' payroll information. LMSB International has also begun testing whether it can identify entities that filed Forms 5471 (Information Return of U.S. Persons With Respect To Certain Foreign Corporations) or 5472 (Information Return of a 25 Percent Foreign-Owned U.S. Corporation or a Foreign Corporation Engaged in a U.S. Trade or Business) but failed to report payments to nonresidents on Form 1042-S. IRS found that, for a test group of 10 corporations, 9 failed to report some payments. However, it was unclear if these payments were exempt because of tax treaties and thus appropriate to exclude from reporting. IRS is continuing this test. IRS uses Central Withholding Agreements to minimize tax compliance risk for athletes and entertainers, who often are high earners relative to other nonresident aliens.
As shown in table 4, the number of agreements and the amounts withheld have increased over the past 3 fiscal years, although the average amounts withheld have fluctuated. The number of sailing permits filed annually has decreased substantially over past decades. As we reported in 1988, the number of Form 1040-C sailing permits filed dropped from about 176,000 in calendar year 1960 to 1,245 in fiscal year 1986. According to an LMSB International official, about 1,000 Forms 1040-C were filed for tax year 2006. Likewise, neither IRS nor the U.S. Customs and Immigration Service has enforced the sailing permit requirement for departing aliens for decades, according to LMSB International officials. These officials told us that IRS cannot realistically enforce the sailing permit requirement given the volume of foreign individuals who depart the U.S. daily. Enforcing the requirement would be particularly burdensome, as IRS would have to check all aliens for sailing permits even though the requirement is only applicable to some. For example, only a portion of foreign individuals enter the U.S. for business purposes. According to DHS data, about 74 percent of visitor admissions were for pleasure rather than for business or other purposes in fiscal year 2007. The fact that few individuals file sailing permits and that IRS does not enforce the filing requirement may not represent a significant compliance risk. Tax withholding is generally required on payments of U.S.-source income to nonresident aliens. Such withholding reduces the chance that nonresidents will depart the country without paying taxes owed. Furthermore, although foreign employers may not withhold U.S. taxes on U.S.-source income payments made to nonresidents, those individuals may not have substantial tax liabilities because of tax treaties. As previously discussed, at least 78 percent of admissions to the U.S. in fiscal year 2007 were of individuals residing in countries with which the U.S. had a tax treaty.
On the other hand, there may be a downside to having a requirement that is not enforced. Nonresidents who recognize that IRS does not enforce the sailing permit requirement may assume that IRS will not enforce other requirements, which could lead to broader noncompliance. Representatives from groups that work with employers and nonresidents to assist them in fulfilling their tax obligations told us that they were aware that IRS has not enforced the sailing permit requirement in decades. Also, according to an LMSB International official, the existence of the requirement could even negatively affect overall tax compliance in that some foreign individuals who file the Form 1040-C version of the sailing permit may not realize that they have to file a tax return after the year's end and pay any additional tax that was not paid in conjunction with filing Form 1040-C. Finally, although few aliens file sailing permits, IRS incurs at least some cost to process filed permits and maintain guidance concerning the requirement. Much has changed since Congress developed the tax rules for nonresident aliens. The world economy is increasingly interconnected and the number of aliens entering the U.S. for business purposes has increased accordingly. Congress passed legislation in 1936 to lessen the tax compliance burden for nonresidents paid by foreign employers in the U.S. for short periods of time. However, inflation has eroded the effect of the dollar threshold Congress established and nonresidents increasingly may have to file tax returns if they are in the U.S. for business for only a few days. Another requirement that has been effectively eroded by the increase in travel to the U.S. and by other tax laws is the requirement that aliens obtain certificates of compliance, otherwise known as sailing permits. For nonresidents working for U.S. employers, withholding has supplanted sailing permits as the primary way to minimize compliance risk.
Nonresidents working for foreign employers may not have substantial tax liabilities because of tax treaty benefits. Further, few nonresidents obtain sailing permits. IRS does not enforce the requirement, and it likely could not effectively enforce the requirement given the volume of foreign individuals departing the country daily. A lack of enforcement may also lead taxpayers to conclude that IRS does not enforce other filing requirements. Taken together, these conditions call into question whether the sailing permit requirement is still necessary to ensure compliance. Despite an increased focus on nonresident alien tax enforcement, IRS may be missing an opportunity to identify more potentially noncompliant taxpayers because it does not systematically identify nonresidents filing the incorrect type of tax return. If IRS were able to identify taxpayers who should have filed Form 1040NR instead of Form 1040 by using information reported on tax returns or ITIN applications, it may be able to cost-effectively address this form of noncompliance for some taxpayers. Without further study, IRS cannot know if this type of enforcement action would be cost-effective. Given the increasing extent of business travel to the U.S. and the eroding effect of inflation, Congress should consider raising the amount of U.S. income paid by a foreign employer that is exempt from tax for nonresidents who meet the other conditions of the exemption. Also, given the difficulty of enforcing the requirement for aliens to obtain certificates of compliance—sailing permits—before departing the country and the existence of withholding requirements and tax treaties, Congress should consider eliminating the sailing permit requirement.
We recommend that the Commissioner of Internal Revenue determine if creating an automated program to identify nonresident aliens who may have improperly filed Form 1040 instead of Form 1040NR by using ITIN information would be a cost-effective means to improve compliance. In a March 31, 2010, letter responding to a draft of this report, IRS’s Deputy Commissioner for Services and Enforcement stated that IRS agreed to study the feasibility of an automated system to identify nonresident aliens who improperly file Form 1040 instead of Form 1040NR, including whether information from ITIN applications can be effectively analyzed with such an automated system. The letter also stated that IRS would continue to look for ways to improve nonresident alien tax compliance through enforcement and outreach. For the full text of IRS’s comments, see appendix II. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this report. At that time, we will send copies of the report to the Commissioner of Internal Revenue and other interested parties. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. To provide data on nonresident alien tax filing, we obtained and reviewed statistics from the Internal Revenue Service’s (IRS) compliance data warehouse (CDW) on Form 1040NR, the U.S. Nonresident Alien Income Tax Return, for tax year 2003 to tax year 2007, the last 5 years for which complete filing data were available. 
To provide context on these statistics, we reviewed published data on other individual taxpayers from IRS’s Statistics of Income program, which draws from a widely used database composed of a sample of unexamined income tax returns. We determined that the estimates provided had sampling errors of less than 1 percent. We then assessed these two data sources for reliability purposes. To do this, we interviewed IRS research officials, conducted logic testing, and compared certain CDW data elements received by IRS to publicly available data on Form 1040NR filings. On the basis of our assessment, we determined that both sources used were sufficiently reliable for the purposes of our review. To identify the availability of compliance data, we reviewed IRS documentation on the National Research Program and interviewed IRS research and compliance officials. We also examined documentation on tax treaties, visa issuance data from the Department of State, and the number of annual admissions of foreign visitors from the Department of Homeland Security (DHS), in order to provide context as to the potential number of nonresident aliens with a filing requirement or incurring a tax liability each year. To provide information on guidance IRS provides to nonresident aliens and associated third parties on tax and filing requirements and any burdens and challenges associated with filing, we reviewed IRS tax forms, guidance, and outreach materials. We also interviewed IRS officials responsible for conducting outreach efforts and groups that work with employers and nonresidents to assist them in fulfilling their tax obligations. More specifically, we conducted group interviews with members of the American Institute of Certified Public Accountants, the National Association of Enrolled Agents, and the National Association of College and University Business Officers, and spoke with staff from accounting and law firms that have nonresident aliens or their employers as clients. 
To assess actions that IRS takes to enforce nonresident alien tax compliance, we used IRS’s goal in its 2009-2013 strategic plan of increasing resource allocation to priority areas as criteria. We reviewed data from IRS’s enforcement programs and interviewed IRS enforcement officials to determine whether resources were increased for nonresident alien compliance efforts and what results IRS had achieved. Specifically, we reviewed IRS data on examinations and Central Withholding Agreements and various IRS tax forms, and interviewed IRS officials to discuss potential opportunities to expand enforcement efforts. We then assessed these IRS sources for reliability by reviewing IRS documentation and interviewing agency officials and determined that these sources were sufficiently reliable for the purposes of our review. We conducted this performance audit from July 2009 through April 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Joanna Stamatiades, Assistant Director; Jeff Arkin; Amy Bowser; Karen O’Conor; Amy Radovich; Cynthia Saunders; and John Zombro made key contributions to this report.
Every year, the U.S. receives millions of legal visits by foreign individuals. Nonresident aliens--who are neither U.S. citizens nor residents--may be required to file a federal tax return if they earn U.S.-source income, and their noncompliance can contribute to the tax gap. As with U.S. citizens and residents, the Internal Revenue Service (IRS) is responsible for ensuring that nonresident aliens fulfill their tax obligations. GAO was asked to (1) identify what data are available on nonresident alien tax filing and compliance, (2) provide information on guidance IRS provides to nonresident aliens and third parties on tax requirements and any challenges associated with filing, and (3) assess actions IRS takes to enforce nonresident alien tax compliance. To meet its objectives, GAO examined IRS and other federal agency documentation, reviewed tax filing and other data, and interviewed IRS officials and other third parties. For tax year 2007, nonresident alien individuals filed about 634,000 Forms 1040NR, the U.S. Nonresident Alien Income Tax Return. IRS has not developed estimates for the extent of nonresident alien tax noncompliance because it often lacks information to distinguish between nonresident aliens and other filers, and examinations can be costly and difficult since many nonresident aliens would depart the country before IRS could examine their returns. IRS's outreach and education efforts have focused on presenting information on nonresident tax issues to a variety of audiences and making information available on its Web site and in its publications. Nevertheless, some nonresidents, their employers, and paid preparers may not be aware of nonresident alien tax rules, according to representatives of groups that work with employers and nonresidents to assist them in fulfilling their tax obligations. Other filing challenges exist. For example, individuals filing Forms 1040NR cannot file electronically. Also, nonresidents in the U.S.
for less than 90 days who earn over $3,000 in compensation for services paid for by a foreign employer will likely have to file Form 1040NR, even if they owe no tax. The $3,000 exemption threshold, enacted by Congress in 1936 to lessen the tax compliance burden on nonresident aliens and never adjusted for inflation or other purposes, likely results in a greater proportion of nonresident aliens having a filing requirement today than in 1936. IRS has expanded its nonresident alien enforcement efforts over the past decade. However, IRS does not have a program to automatically identify nonresident aliens who improperly file Form 1040 instead of Form 1040NR, which can result in lost tax revenue when these taxpayers take unallowed deductions. IRS may be able to use taxpayer information to identify this type of noncompliance systematically. Finally, some nonresidents must file a certificate of compliance, referred to as a sailing permit, before departing the U.S. to ensure that tax obligations have been satisfied. The requirement is difficult to enforce and few nonresidents fulfill it, potentially leading to broader noncompliance if individuals assume the lack of enforcement extends to other tax rules. GAO suggests that Congress consider raising the exemption threshold for income paid by a foreign employer and eliminating the certificate of compliance, or sailing permit, requirement. GAO also recommends that IRS determine if creating an automated program to identify improper filing of Form 1040 by nonresident aliens would be a cost-effective means of improving compliance. In commenting on a draft of this report, IRS agreed with our recommendation.
Title XIX of the Social Security Act authorizes federal funding to states for Medicaid, which finances health care for certain low-income children, families, and individuals who are aged or disabled. Although states have considerable flexibility in designing and operating their Medicaid programs, they must comply with federal requirements specified in Medicaid statute and regulation. For example, states must provide methods to ensure that payments for services are consistent with economy, efficiency, and quality of care. Medicaid is an entitlement program: states are generally obligated to pay for covered services provided to eligible individuals, and the federal government is obligated to pay its share of a state’s expenditures under a CMS-approved state Medicaid plan. Our prior and current work addresses five categories of Medicaid claims where we are aware that states have reimbursement-maximizing strategies. Our current work in particular concentrated on these five categories because—on the basis of factors such as nationwide growth in dollars claimed, the results of our past reviews, and work by HHS’s Office of Inspector General (OIG) to assess the appropriateness of claims in these categories—we judged them to be of particularly high risk. Over the past few years, states’ claims in some of these categories have grown significantly in dollar amounts. The five categories of claims we examined, and recent trends in claimed expenditures, are described in table 1. For many years, states have used varied financing schemes, sometimes involving IGTs, to inappropriately increase federal Medicaid reimbursements. Some states, for example, have made large Medicaid payments to certain providers, such as nursing homes operated by local governments, which have greatly exceeded the established Medicaid payment rate. 
These transactions create the illusion of valid expenditures for services delivered by local-government providers to Medicaid-eligible individuals and enable states to claim large federal reimbursements. In reality, the spending is often only temporary because states require the local governments to return all or most of the money to the states through IGTs. Once states receive the returned funds, they can use them to supplant the states’ own share of future Medicaid spending or even for non-Medicaid purposes. As various schemes involving IGTs have come to light, Congress and CMS have taken actions to curtail them, but as one approach has been restricted, others have often emerged. Table 2 describes some of the states’ financing schemes over the years and how Congress and CMS have responded to them. A leading variant of these illusory financing arrangements today involves states’ taking advantage of Medicaid’s upper payment limit (UPL) provisions. Although states are allowed, under law and CMS policy, to claim federal reimbursements for supplemental payments they make to providers up to the UPL ceilings, we have reported earlier that payments in excess of the provider’s costs that are not retained by the provider as reimbursement for services actually provided are inconsistent with Medicaid’s federal-state partnership and fiscal integrity. For example, we have reported that by paying nursing homes and hospitals owned by local governments much more than the established Medicaid payment rate and requiring the providers to return, through IGTs, the excess state and federal payments to the state, states obtain excessive federal Medicaid reimbursements while their own state expenditures remain unchanged or even decrease. Such round-trip payment arrangements can be accomplished via electronic wire transfer in less than an hour. States have then used the returned funds to pay their own share of future Medicaid spending or to fund non-Medicaid programs. 
Problems with excessive supplemental payment arrangements remain, despite congressional and CMS action to curtail financing schemes. For example, in our current review of states’ use of contingency-fee consultants, we found an example in Georgia that illustrates how current law and policy continue to allow states to generate excessive federal reimbursements beyond established Medicaid provider payments for covered services. Georgia and its consultant developed five UPL arrangements using IGTs—one each for local-government-operated inpatient hospitals, outpatient hospitals, and nursing homes, and for state-owned hospitals and nursing homes. Over the 3-year period of state fiscal years 2001 through 2003, the state made supplemental payments totaling $2.0 billion to nursing homes and hospitals operated by local governments (see fig. 1). A sizable share of the $2.0 billion in payments was illusory, however. In reality, the nursing homes and hospitals netted only $357 million because they had initially transferred $1.7 billion to the state Medicaid agency, through IGTs, under an agreement with that agency. The state combined this $1.7 billion with $1.2 billion in federal funds, which represented the estimated federal share of its supplemental payments to local-government facilities of $2.0 billion. The state thus had a funding pool of $2.9 billion at its disposal. From this pool, the state made the $2.0 billion in supplemental payments to local-government providers and retained $844 million to offset its other Medicaid expenditures. In our view, the inappropriate use of IGTs in schemes such as UPL financing arrangements violates the fiscal integrity of Medicaid’s federal-state partnership in at least three ways. The schemes effectively increase the federal matching rate established under federal law by increasing federal expenditures while state contributions remain unchanged or even decrease.
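The round-trip arithmetic in the Georgia example above can be tallied in a few lines. This is an illustrative sketch using the rounded dollar figures reported in the testimony (amounts in millions); the reported exact amounts ($844 million retained by the state, $357 million netted by providers) differ slightly from the rounded results here because the payments and federal share are rounded to $2.0 billion and $1.2 billion.

```python
# Illustrative sketch of Georgia's UPL round-trip arithmetic, state fiscal
# years 2001-2003, using the rounded figures reported above (in millions of
# dollars). The reported $844M and $357M differ slightly from these results
# because the inputs here are rounded.
igt_from_providers = 1_700     # IGTs from local-government facilities to the state
federal_share = 1_200          # estimated federal match on the supplemental payments
supplemental_payments = 2_000  # supplemental payments back to the facilities

funding_pool = igt_from_providers + federal_share              # pool at the state's disposal
state_retained = funding_pool - supplemental_payments          # kept by the state
providers_netted = supplemental_payments - igt_from_providers  # kept by the providers

print(f"funding pool:     ${funding_pool:,}M")     # $2,900M, about $2.9 billion
print(f"state retained:   ${state_retained:,}M")   # $900M rounded (reported: $844M)
print(f"providers netted: ${providers_netted:,}M")  # $300M rounded (reported: $357M)
```

The arithmetic makes the key point concrete: the $1.2 billion federal share is effectively split between the state and the providers, while the state’s own net outlay is roughly zero because most of the “payments” to providers are returned funds.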
We previously estimated that one state effectively increased the federal share of its total Medicaid expenditures from 59 percent to 68 percent in state fiscal year 2001, by obtaining excessive federal funds and using these as the state’s share of other Medicaid expenditures. There is no assurance that these increased federal reimbursements are used for Medicaid services, since states use funds returned to them via these schemes at their own discretion. In examining how six states with large schemes used the federal funds they generated, we previously found that one state used the funds to help finance its education programs, and others deposited the funds into state general funds or other special state accounts that could be used for non-Medicaid purposes or to supplant the states’ share of other Medicaid expenditures. The schemes enable states to pay a few public providers amounts that well exceed the costs of services provided, which is inconsistent with the statutory requirement that states provide for methods that ensure that Medicaid payments are consistent with economy and efficiency. We previously reported that, in one state, the state’s proposed scheme increased the daily federal payment per Medicaid resident from $53 to $670 in six local-government-operated nursing homes. Another category of claims where states have used questionable practices to maximize federal reimbursements is services provided to children in schools and associated administrative costs. Medicaid is authorized to cover services to, for example, Medicaid-eligible children with disabilities who may need diagnostic, preventive, and rehabilitative services; speech, physical and occupational therapies; and transportation. School districts may also receive Medicaid reimbursement for the administrative costs of providing school-based Medicaid services. Our work in this area has addressed claims for Medicaid school-based health services and administration. 
In 1999, we found a need for federal oversight of growing Medicaid reimbursements to states for Medicaid school-based administrative services, including outreach activities to enroll children in Medicaid. In April 2000, we reported that Medicaid expenditures for school-based health services totaled about $1.6 billion for services provided by schools in 45 states and the District of Columbia, while Medicaid administrative expenditures were about $712 million for costs billed by schools in 17 states. We found that some of the methods used by school districts and states to claim reimbursement for school-based health services did not ensure that the services paid for were provided: some claims, for example, were made solely on the basis of at least one day’s attendance in school, rather than on documentation of any actual service delivery. Methods used by school districts to claim Medicaid reimbursement failed in some cases to take into account variations in service needs among children. With regard to Medicaid school-based administrative costs, we found that some methods used by school districts and states did not ensure that administrative activities were properly identified and reimbursed. Poor controls resulted in improper payments in at least two states, and there were indications that improprieties could have been occurring in several other states. We further found that, in some states, funding arrangements among schools, states, and private consulting firms created adverse incentives for program oversight and caused schools to receive a small portion—as little as $7.50 for every $100 in Medicaid claims—of Medicaid reimbursement for school-based administrative and service claims. 
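The “$7.50 for every $100” figure is consistent with simple arithmetic under two assumptions that are hypothetical here: a 50 percent federal matching rate on the claimed amount, and a state retaining 85 percent of the federal reimbursement (the high end of the retention range we found). A minimal sketch:

```python
# Hypothetical illustration of how a school can end up with $7.50 of a $100
# Medicaid claim. The 50 percent matching rate and the 85 percent state
# retention rate are assumptions for illustration, not figures reported for
# any specific state.
claim = 100.00
federal_match_rate = 0.50    # assumed federal share of the claimed amount
state_retention_rate = 0.85  # assumed share of federal funds kept by the state

federal_reimbursement = claim * federal_match_rate               # $50.00 federal payment
school_receives = federal_reimbursement * (1 - state_retention_rate)
print(f"school receives ${school_receives:.2f} of a ${claim:.0f} claim")
```

Any contingency fee paid to a consultant would reduce the school’s net amount further still.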
We reported that 18 states retained a total of $324 million, or 34 percent, of federal funds intended to reimburse schools for their Medicaid administrative and service claims; for 7 of the states, this amounted to 50 to 85 percent of federal Medicaid reimbursement for school-based health services claims. In addition, contingency fees, which some school districts paid to private consultants for their assistance in preparing and submitting Medicaid claims, ranged from 3 to 25 percent of the federal reimbursement, further reducing the net amount that schools received. In response to recommendations we made to the Administrator of CMS, CMS has clarified guidance for states on submitting claims for school-based administrative activities. Subsequent to our work, HHS OIG conducted reviews of school-based claims in 18 states from November 2001 through June 2005, several of which have identified issues with the appropriateness of claims related to consultants’ projects. In our own most recent work, we determined that Georgia was retaining a share of the additional federal reimbursements gained from its claims for Medicaid school-based services. Georgia’s contingency-fee consultant assisted the state with its Medicaid claims for school-based services in a project that generated about $54 million in federal Medicaid reimbursements over the 3 years the consultant was paid and that, on the basis of state data, we estimate continues to generate about $25 million annually. As before, we found that the school districts were not receiving all of the federal Medicaid reimbursements that were generated on their behalf.
According to a state official and documents provided by the state, the state retained $3.9 million, or 16 percent, of federal reimbursements that were claimed on behalf of the school districts for state fiscal year 2003, most of which was used to pay its contingency-fee consultant and about $1 million of which was used to cover the salaries and administrative costs of the five state employees who administered school-based claims in Georgia. A growing number of states are using consultants on a contingency-fee basis to maximize federal Medicaid reimbursements. CMS reported that, according to a survey it conducted in 2004, 34 states had used consultants on a contingency-fee basis for this purpose, an increase from 10 states reported to have such arrangements in 2002. In the 2 states where we examined selected projects that involved the assistance of contingency-fee consultants, Georgia and Massachusetts, we found that the projects generated a significant amount of additional federal reimbursements for the states: from fiscal year 2000 through 2004, an estimated $1.5 billion in Georgia and nearly $570 million in Massachusetts. For those additional reimbursements, Georgia paid its consultant about $82 million in contingency fees, and Massachusetts paid its consultants about $11 million in contingency fees. We identified claims from contingency-fee consultant projects that appear to be inconsistent with current CMS policy and claims that are inconsistent with federal law; we also identified claims from projects that undermine Medicaid’s fiscal integrity. Such projects and resulting problematic claims arose in each of the five categories of claims that we reviewed in Georgia, in Massachusetts, or, for some categories, in both states. We observed two factors common to many projects that we believe increase their risk.
First, many projects were in categories of Medicaid claims where federal requirements for the services have been inconsistently applied, are evolving, or were not specific. Second, many projects involved states’ shifting costs to the federal government through Medicaid reimbursements to other state or local-government entities. For the five categories of claims we reviewed where states frequently used contingency-fee consultants to maximize their federal Medicaid reimbursements, we identified problematic claims in each category in either Georgia or Massachusetts or in both states. These projects resulted in claims that appear to be inconsistent with current CMS policy and that, for one project, were inconsistent with federal law. We also identified claims that were inconsistent with the fiscal integrity of the Medicaid program. I have already discussed our current findings regarding Georgia’s use of IGTs in UPL supplemental payment arrangements and its project to increase claims for school-based Medicaid services and administrative costs. We also reviewed Georgia’s and Massachusetts’s use of contingency-fee consultants to increase federal reimbursements for targeted case management services, rehabilitation services for mental or physical disabilities, and states’ claims for administering their Medicaid programs. In these two states, our findings were most significant in the areas of targeted case management and rehabilitation services. Georgia and Massachusetts—with the help of their contingency-fee consultants—developed approaches to maximize federal Medicaid reimbursements by claiming costs for targeted case management (TCM) services under state plan amendments that CMS had approved prior to 2002. Georgia’s consultant assisted the state in increasing federal Medicaid reimbursement for TCM services provided by two state agencies: the Department of Juvenile Justice and the Division of Family and Children’s Services.
In Massachusetts, contingency-fee consultants helped the state increase federal reimbursement for TCM services provided by three state agencies: the Departments of Social Services, Youth Services, and Mental Health. These case management services in Georgia and Massachusetts appear integral to the states’ own programs; the states’ laws, regulations, or policies called for case management services in these programs, and the case management services were provided to all Medicaid- and non-Medicaid-eligible children served by the programs. More recently, CMS has denied coverage for comparable services by other states because CMS determined that the services are an integral component of the state programs providing the services. For example, in fiscal year 2002, CMS denied a state plan amendment proposal to cover TCM services in Illinois, and in fiscal year 2004 it found TCM claims in Texas unallowable, in part because the TCM services claimed for reimbursement were considered integral to other state programs. As in Georgia and Massachusetts, the TCM services in Illinois were for children served by the state’s juvenile justice system. In Texas, such children were served by the state’s child welfare and foster care system. In fiscal year 2003, we estimate that Georgia received $17 million in federal reimbursements for claims for TCM services provided by its two state agencies, of which about $12 million was for services that appear to be integral to non-Medicaid programs. In fiscal year 2004, Massachusetts received an estimated $68 million in federal reimbursements for services that appear to be integral to non-Medicaid programs in the three state agencies whose TCM projects were developed by consultants. CMS officials agreed with our assessment that the claims for TCM services in these two states were problematic. Our review of projects involving rehabilitation services found concerns with methods and claims in Georgia.
Georgia’s consultant helped the state increase federal Medicaid reimbursements for rehabilitation services provided through two state agencies by $58 million during state fiscal years 2001 through 2003. The consultant suggested that state agencies—which pay private facilities under a per diem rate for providing room and board, rehabilitation counseling and therapy, and educational and other services to children in state custody—base their claims for Medicaid reimbursement on the private facilities’ estimated costs, instead of on what the state agencies actually paid those facilities. The state agencies increased their claims for Medicaid reimbursement without increasing their payments to the facilities. In some cases, the state agencies’ Medicaid claims for rehabilitation services alone exceeded the amount paid by the agencies for all the services the facilities provided to children. Specifically, for 82 of the residential facilities (about 43 percent), the amount the state Medicaid agency reimbursed the two agencies in state fiscal year 2004 exceeded the total amount these agencies actually paid the residential facilities for all services, not just rehabilitation services. One facility, for example, was paid $37 per day per eligible child by the Division of Family and Children’s Services for all services covered by the per diem payment, but the state agency billed the Medicaid program $62 per day for rehabilitation services alone. CMS officials agreed with our conclusion that claims from this contingency-fee project were not in accord with the statutory requirement that payments be efficient and economical. During our work we observed two factors that appear to increase the risk of problematic claims. One factor involved federal requirements that were inconsistently applied, evolving, or not specific; the second involved states’ claiming Medicaid reimbursement for services provided by other state or local-government agencies.
Despite CMS’s long-standing concern about state financing arrangements for both TCM and supplemental payments, for example, the agency has not issued adequate guidance to clarify expenditures allowable for federal reimbursement. Federal TCM and supplemental payment policy for allowable claims in these categories has evolved over time, and the criteria that CMS applies to determine whether claims are allowable have been communicated to states primarily through state-specific state plan amendment reviews or claims disallowances, rather than through formal guidance or regulation. Inconsistently applied policy for allowable TCM services. In 2002, CMS began to deny proposed state plan amendments that sought approval for Medicaid coverage of TCM services that were the responsibility of other state agencies. CMS had determined that such arrangements were not eligible for federal Medicaid reimbursement for several reasons: (1) the services were typically integral to existing state programs, (2) the services were provided to beneficiaries at no charge, and (3) beneficiaries’ choice of providers was improperly limited. However, CMS approved Georgia’s and Massachusetts’s state plan amendments for TCM services before 2002. Although CMS has been applying these criteria to deny new TCM arrangements—for example, in Maryland, Illinois, and Texas—it has not yet sought to address similar, previously approved TCM arrangements that are inconsistent with these criteria. CMS regional officials told us they could not reconsider the TCM claims from two agencies in Georgia and four in Massachusetts because they were waiting for new guidance that the agency was preparing. CMS has been working on new TCM guidance for more than 2 years, according to agency officials. As of May 2005, however, this guidance had not been issued. 
CMS’s fiscal year 2006 budget submission identifies savings that could be achieved by clarifying allowable TCM services, but CMS had not published a specific proposal at the time we completed our work. Evolving policy for allowable supplemental payment arrangements. For several years, we and others have reported on state financing schemes that allow states to inappropriately generate federal Medicaid reimbursement without the state’s paying its full share. Although Congress and CMS have taken steps to curb these abuses, states can still develop arrangements enabling them to make illusory payments to gain federal reimbursements for their own purposes. Recognizing that states can unduly gain from supplemental payment arrangements, such as UPL payment arrangements that use IGTs, since fiscal year 2003 CMS has worked with individual states to address such arrangements. At the same time, the agency has not issued guidance stating its policy on acceptable approaches for UPL payment arrangements, specifically the use of IGTs and the relationship to state share of spending. CMS’s budget for fiscal year 2006 proposes to achieve federal Medicaid savings by curbing financing arrangements that have been used by a number of states to inappropriately obtain federal reimbursements. The specific proposal, however, had not been published at the time we completed our review. Unspecified policy on allowable Medicaid rehabilitation payments to other state agencies. CMS has not issued policy guidance that addresses situations where Medicaid payments are made by a state’s Medicaid agency to other state agencies for rehabilitation services. CMS financial management officials told us that states’ claims for rehabilitation services posed an increasing concern, in part because officials believed that states were inappropriately filing claims for services that were the responsibility of other state programs. 
CMS does not specify whether claims for the cost of rehabilitation services that are the responsibility of non-Medicaid state agencies are allowable. CMS’s fiscal year 2006 budget submission identifies savings that could be achieved by clarifying appropriate methods for claiming rehabilitation services. CMS had not published a specific proposal at the time we completed our review. The second factor we observed that increased the financial risk to the federal government of reimbursement-maximizing projects was that the projects shifted state costs to the federal government by claiming Medicaid reimbursement for services provided by other non-Medicaid state or local government agencies. Medicaid reimbursement to government agencies serving Medicaid beneficiaries is allowable in cases where the claims apply to covered services and the amounts paid are consistent with economy and efficiency. However, the projects and associated claims we reviewed showed that reimbursement-maximizing projects often involved services and circumstances that Medicaid should not pay for—such as illusory payments to government providers. As we describe in the report issued today, the problems we identified with states’ Medicaid claims stemming from contingency-fee projects illustrate the urgent need to address certain issues in CMS’s overall financial management of the Medicaid program. These issues, however, are not limited to situations that involve contingency-fee consultants. We have identified problems with claims in states other than Georgia and Massachusetts that have undertaken reimbursement-maximizing activities, without employing consultants, in categories of long-standing concern, such as supplemental payment arrangements. CMS relies on its standard financial management controls to identify any unallowable Medicaid claims that states may submit, including those that might be associated with reimbursement-maximizing contingency-fee projects. 
However, CMS lacks clear, consistent policies to guide the states’ and its own financial oversight activities. Furthermore, in our previous work on CMS’s financial management, we found that the agency did not have a strategy for focusing its resources most effectively on areas of high risk. In our current work, we found that CMS has known for some time that two high-risk categories we identified—claims generated from consultants paid on a contingency-fee basis to maximize reimbursements and claims generated from arrangements where state Medicaid programs are paying other state agencies or government providers—were problematic. For example, CMS had listed these two categories on a financial tracking sheet of high-risk areas as of 2000. At an October 2003 congressional hearing, the CMS Administrator expressed concern that the Medicaid program was understaffed and that consultants in the states were “way ahead of” CMS in helping states take advantage of the Medicaid system. CMS has undertaken important steps to improve its financial management of the Medicaid program. A major component of the agency’s initiative is hiring, training, and deploying approximately 100 new financial analysts, mainly to regional offices. These analysts are responsible for identifying state sources of Medicaid funding and contributing to the review of state budget estimates and expenditure reports. Expectations for CMS’s new Division of Reimbursement and State Financing and for the 100 new financial analysts are high and their responsibilities broad. It is too soon, however, to assess their accomplishments. For more than a decade, we and others have reported on the methods states have used to inappropriately maximize federal Medicaid reimbursement and have made recommendations to end financing schemes. CMS has taken important steps in recent years to improve its financial management. Yet more can be done.
Many of the problematic methods we examined involved categories of claims where CMS policy has been inconsistently applied, evolving, or unspecified. They have also involved increasing payments to units of state and local government—which states have long used to maximize federal Medicaid funding, in part because IGTs can help facilitate illusory payments—suggesting that greater CMS attention is needed to payments among levels of government, regardless of whether consultants are involved. We believe that it is important to act promptly to curb opportunistic financing schemes before they become a staple of state financing and further erode the integrity of the federal-state Medicaid partnership. Addressing recommendations that remain open from our prior work on state financing schemes and on CMS’s financial management could help resolve some of these issues. In addition, in the report being issued today, we are making new recommendations to the Administrator of CMS to improve the agency’s oversight of states’ use of contingency-fee consultants and to strengthen certain of the agency’s overall financial management procedures. These recommendations address developing guidance to clarify CMS policy on TCM, supplemental payment arrangements, rehabilitation services, and Medicaid administrative costs; ensuring that such guidance is applied consistently among states; and collecting and scrutinizing information from states about payments made to units of state and local governments. Understandably, states that have relied on certain practices to increase federal funds as a staple for the state share of Medicaid spending are concerned about the potential loss of these funds. The continuing challenge remains to find the proper balance between states’ flexibility to administer their Medicaid programs and the shared federal-state fiduciary responsibility to manage program finances efficiently and economically in a way that ensures the fiscal integrity of the program. 
States should not be held solely responsible for developing arrangements that inappropriately maximize federal reimbursements where policies have not been clear or clearly communicated or where CMS has known of risks for some time and has not acted to mitigate them. Without clear and consistent communication of policies regarding allowable claims in high-risk areas, such as those for TCM and UPL where billions of dollars are claimed each year, CMS is at risk of treating states inconsistently and of placing undue burdens on states to understand federal policy and comply with it. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions that you or Members of the Committee may have. For future contacts regarding this testimony, please call Kathryn G. Allen at (202) 512-7118. Katherine Iritani, Ellen M. Smith, Helen Desaulniers, and Kevin Milne also made key contributions to this testimony. Medicaid Financing: States’ Use of Contingency-Fee Consultants to Maximize Federal Reimbursements Highlights Need for Improved Federal Oversight. GAO-05-748. Washington, D.C.: June 28, 2005. High-Risk Series: An Update. GAO-05-207. Washington, D.C.: January 2005. Medicaid Program Integrity: State and Federal Efforts to Prevent and Detect Improper Payments. GAO-04-707. Washington, D.C.: July 16, 2004. Medicaid: Intergovernmental Transfers Have Facilitated State Financing Schemes. GAO-04-574T. Washington, D.C.: March 18, 2004. Medicaid: Improved Federal Oversight of State Financing Schemes Is Needed. GAO-04-228. Washington, D.C.: February 13, 2004. Medicaid Financial Management: Better Oversight of State Claims for Federal Reimbursement Needed. GAO-02-300. Washington, D.C.: February 28, 2002. Medicaid: HCFA Reversed Its Position and Approved Additional State Financing Schemes. GAO-02-147. Washington, D.C.: October 30, 2001. Medicaid: State Financing Schemes Again Drive Up Federal Payments. GAO/T-HEHS-00-193. Washington, D.C.: September 6, 2000.
Medicaid in Schools: Improper Payments Demand Improvements in HCFA Oversight. GAO/HEHS/OSI-00-69. Washington, D.C.: April 5, 2000. Medicaid in Schools: Poor Oversight and Improper Payments Compromise Potential Benefit. GAO/T-HEHS/OSI-00-87. Washington, D.C.: April 5, 2000. Medicaid: Questionable Practices Boost Federal Payments for School-Based Services. GAO/T-HEHS-99-148. Washington, D.C.: June 17, 1999. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Medicaid—the federal-state health care financing program covering almost 54 million low-income people at a cost of $276 billion in fiscal year 2003—is by its size and structure at significant risk of waste and exploitation. Because of challenges inherent in overseeing the program, which is administered federally by the Centers for Medicare & Medicaid Services (CMS), GAO added Medicaid to its list of high-risk federal programs in 2003. Over the years, states have found various ways to maximize federal Medicaid reimbursements, sometimes using consultants paid a contingency fee to help them do so. From earlier work and a report issued today (GAO-05-748), GAO’s testimony addresses (1) how some states have inappropriately increased federal reimbursements; (2) some ways states have increased federal reimbursements for school-based Medicaid services and administrative costs; and (3) how states are using contingency-fee consultants to maximize federal Medicaid reimbursements and how CMS is overseeing states’ efforts. For many years, GAO has reported on varied financing schemes and questionable methods used by states to increase the federal reimbursements they receive for operating their state Medicaid programs. These schemes and methods can undermine Medicaid’s federal-state partnership and threaten its fiscal integrity. For example, some states make large supplemental payments to government-owned or government-operated entities for delivery of Medicaid services while requiring these entities to return the payments to the state. This process creates the illusion of valid expenditures in order to obtain federal reimbursement, effectively shifting a portion of the state’s share of program expenditures to the federal government and increasing the federal share beyond that established by formula under law. Medicaid funding is available for local school districts for certain health services for eligible children and for administrative costs.
To claim increased federal Medicaid reimbursement, however, some states and school districts have used methods lacking sufficient controls to ensure that claims were legitimate. GAO also found funding arrangements among schools, states, and private consulting firms where some states retained up to 85 percent of reimbursements for administrative costs. In some cases, school districts paid contingency fees to consultants. A growing number of states are using consultants on a contingency-fee basis to maximize federal Medicaid reimbursements. As of 2004, 34 states—up from 10 states in 2002—used contingency-fee consultants for this purpose. In each of five categories of claims, GAO identified claims from contingency-fee projects that appeared to be inconsistent with current CMS policy or with federal law, or that undermined the fiscal integrity of the Medicaid program. Problematic projects often were in categories where federal requirements were inconsistently applied, evolving, or not specific. CMS has taken steps to improve its fiscal management of Medicaid, but a lack of oversight and clear guidance from CMS has allowed states to develop new financing methods or continue existing ones that take advantage of ambiguity and generate considerable additional federal costs.
A critical component of high-performing organizations, as envisioned by the Results Act, is the dynamic and complementary process of setting a strategic direction, defining annual goals and measures, and reporting on performance. As required by the Results Act, agencies are to prepare annual performance plans that establish the connections between the long-term goals outlined in their strategic plans and the day-to-day activities of managers and staff. To be useful, annual performance plans should answer three core questions: To what extent does the agency have a clear picture of intended performance? Does the agency have the right mix of strategies and resources needed to achieve its goals? Will the agency’s performance information be credible? At the request of Congress and to assist agencies in their efforts to produce useful performance plans, we issued guides on assessing annual plans. We subsequently reviewed the fiscal year 1999 performance plans for the CFO Act agencies. We also issued reports on fiscal year 1999 plans that identified practices that can improve the usefulness of plans and approaches used to connect budget requests with anticipated results. A majority of agencies’ fiscal year 2000 plans give general pictures of intended performance across the agencies, with the plans of the Department of Labor, Department of Transportation (DOT), the General Services Administration (GSA), and the Social Security Administration (SSA) providing the clearest overall pictures.
To assess the degree to which an agency’s plan provides a clear picture of intended performance across the agency, we examined whether it includes (1) sets of performance goals and measures that address program results; (2) baseline and trend data for past performance; (3) performance goals or strategies to resolve mission-critical management problems; and (4) identification of crosscutting programs (i.e., those programs that contribute to the same or similar results), complementary performance goals and common or complementary performance measures to show how differing program strategies are mutually reinforcing, and planned coordination strategies. Figure 3 shows the results of our assessment of the 24 agencies. We categorized each agency’s plan based on the degree to which it collectively addressed the four practices presented above. All of the fiscal year 2000 plans we reviewed contain at least some goals and measures that address program results. In our assessment of the fiscal year 1999 plans, we identified the lack of comprehensive sets of goals that focused on results as one of the central weaknesses that limited the usefulness of the plans for congressional and other decisionmakers. While this improvement is still not evident across all agencies, some plans incorporate sets of performance goals and measures that depict the complexity of the results federal agencies seek to achieve. For example, to help achieve improved public health and safety on the highway, DOT has performance goals and measures to reduce the rates of alcohol-related and large truck-related fatalities and injuries and to increase seat belt use, in addition to its goals related to highway fatality and injury rates. The DOT plan also provides helpful information that explains the importance of each goal, the relationship of annual goals to DOT strategic goals, and the relationship of the performance measures to annual goals. 
Similarly, the Department of Education’s plan contains a set of goals and measures related to a vital issue of growing national concern—that schools should be strong, safe, disciplined, and drug-free. Specifically, Education has performance goals and measures to reduce the prevalence of alcohol and drugs in schools, decrease criminal and violent incidents committed by students, and increase the percentage of teachers who are trained to deal with discipline problems in the classrooms. The plan includes explanatory information for each goal and measure. For instance, Education explains that it changed its target level for the percentage of students using marijuana at school because of better than expected reductions in 1998. However, we still found cases where program results were not clearly defined. For example, the Small Business Administration’s (SBA) performance plan’s goals and measures continue to generally focus on outputs rather than results. To assess progress in its goal to “increase opportunities for small business success,” SBA relies on measures such as an increase in the number of loans made by SBA, the number of clients served, the number of bonds issued, and the amount of dollars invested in small businesses. This is important information, but the plan does not show how these measures are related to increasing opportunities for small businesses to be successful—the key result SBA hopes to achieve. Sets of performance goals and measures also should provide balanced perspectives on performance that cover the variety of results agencies are expected to achieve. Federal programs are designed and implemented in dynamic environments where mission requirements may be in conflict, such as ensuring enforcement while promoting related services, or priorities may be different, such as those to improve service quality while limiting program cost. 
Consequently, mission requirements and priorities must be weighed against each other to avoid distorting program performance. The Department of Veterans Affairs’ (VA) Veterans Health Administration (VHA) provides an illustration of an agency that is using a range of goals to reflect the variety of results it seeks to achieve. VHA recognizes that, as it seeks to improve the health status of veterans, it must provide care efficiently. VHA’s primary healthcare strategy has three performance goals to be achieved by fiscal year 2002, referred to as the 30-20-10 strategy. With fiscal year 1997 as the baseline, VHA has separate goals that focus on (1) reducing the cost per patient by 30 percent, (2) increasing the number of patients served by 20 percent, and (3) increasing to 10 percent the portion of the medical care budget derived from alternative revenue sources. VHA’s ability to fund the costs associated with serving 20 percent more patients than in the past will depend in large part on VHA’s success in meeting its goals to decrease the cost per patient and increase revenues from alternative sources. In reviewing the fiscal year 1999 plans, we said that setting multiyear and intermediate goals is particularly useful when it may take years before results are achieved and in isolating an agency’s discrete contribution to a specific result. In examining the fiscal year 2000 plans, we found that some agencies have started to incorporate these practices into their performance plans. For example, the Office of Personnel Management’s (OPM) plan includes multiyear goals that provide valuable perspective on its plans over several years. In particular, the plan has an objective for fiscal year 2002 to simplify and automate the current General Schedule position classification system by reducing the number of position classification standards from more than 400 to fewer than 100. 
The plan shows that OPM projects that it will reduce the number of classification standards to 320 by the end of fiscal year 1999 and further reduce the number to 216 by the end of fiscal year 2000. Reducing the number of classification standards is seen by OPM as important because it will provide federal agencies with added flexibility to better acquire and deploy their human capital. The Department of Commerce’s National Oceanic and Atmospheric Administration (NOAA) also includes projected target levels of performance for multiyear goals in its plan. As part of its strategic goal to sustain healthy coasts, NOAA set a target for fiscal year 2002 to increase to 75 the percentage of the U.S. coastline where threats to the habitat have been assessed and ranked. NOAA set a target level of 20 percent in fiscal year 2000 from a baseline of 0 percent in fiscal year 1998. In contrast, the Department of the Treasury’s Internal Revenue Service (IRS) provides an example of where multiyear goals could be included in the plan but are not. The plan states that the IRS Restructuring and Reform Act of 1998 requires that 80 percent of all tax and information returns that IRS processes be electronically filed by year 2007. IRS’ plan would be more useful if it discussed this mandate along with target levels to show how it plans to achieve this goal over the next 7 years. Congress will likely expect to receive information relating to IRS’ progress in the area, particularly since IRS has requested funding for this goal. Treasury officials said that they recognize the shortcomings in IRS’ performance measures. As part of its restructuring, IRS is undertaking improvements by developing new performance measures. A few agencies have recognized that using intermediate goals and measures, such as outputs or intermediate outcomes, can show interim progress toward intended results. 
For example, the Department of Justice’s Drug Enforcement Administration (DEA) has a goal to disrupt and dismantle drug syndicates, but its plan acknowledges that counting the number of cases, arrests, or seizures does not adequately measure the true impact of enforcement efforts. Therefore, in addition to those measures, DEA is developing other gauges, such as the ratio of the number of targeted organizations disrupted as a result of DEA involvement in foreign investigations to the total number of targeted organizations. Its plan states that DEA will collect data for this goal in fiscal year 1999. Similarly, SSA recognizes in its plan that one change needed for its disability program is that disabled beneficiaries must become self-sufficient to the greatest extent possible. As a first step toward its strategic objective to “shape the disability program in a manner that increases self-sufficiency,” SSA includes an intermediate goal to increase by 10 percent in fiscal year 2000 the number of Disability Insurance beneficiaries transitioning into trial work periods over time. SSA states that it will develop other goals and measures after an analysis of historical data is completed. All of the fiscal year 2000 plans we reviewed include baseline and trend data for at least some of their goals and measures. With baseline and trend data, the performance plans provide a context for drawing conclusions about whether performance goals are reasonable and appropriate. Decisionmakers can use such information to gauge how a program’s anticipated performance level compares with improvements or declines in past performance. For example, the DOT plan includes graphs for nearly all goals and measures that show baseline and trend data as well as the targets for fiscal years 1999 and 2000. The graphs clearly indicate trends and provide a basis for comparing actual program results with the established performance goals. 
The performance goal for hazardous material incidents is typical in that it has a graph that shows the number of serious hazardous materials incidents that occurred in transportation during the period 1988 through 1997. DOT also includes explanatory information that provides a context for past performance and future goals. In cases where baseline and trend data are not yet available, the more informative performance plans include information on what actions agencies are taking to collect appropriate data and when they expect to have them. For example, the Department of Housing and Urban Development (HUD) provides baseline and trend data for many of its goals and measures, if such data are available. If data are not available, the plan discusses when HUD expects to develop the baselines. For example, the performance goal and measure to increase the share of recipients of welfare-to-work vouchers who hold jobs at the time of annual recertification indicates that the baseline for households receiving vouchers in fiscal year 2000 will be determined in fiscal year 2001. The fiscal year 2000 annual performance plans show inconsistent attention to the need to resolve the mission-critical management challenges and program risks that continue to undermine the federal government’s economy, efficiency, and effectiveness. These challenges and risks must be addressed as part of any serious effort to fundamentally improve the performance of federal agencies. In our assessment of the fiscal year 1999 performance plans, we observed that the value of the plans could be augmented if they more fully included goals that addressed mission-critical management issues. We noted that precise and measurable goals for resolving mission-critical management problems are important to ensuring that the agencies have the institutional capacity to achieve their more results-oriented programmatic goals. 
In assessing the fiscal year 2000 plans, we looked at whether the plans address over 300 specific management challenges and program risks identified by us and the agencies’ Inspectors General. Many of these challenges and risks are long-standing, well known, and have been the subject of close congressional scrutiny. They include, most prominently, federal operations that we have identified as being among those at highest risk for waste, fraud, abuse, and mismanagement. We found that agencies do not consistently address management challenges and program risks in their fiscal year 2000 performance plans. In those cases where challenges and risks are addressed, agencies use a variety of approaches, including setting goals and measures directly linked to the management challenges and program risks, establishing goals and measures that are indirectly related to the challenges and risks, or laying out strategies to address them. Figure 4 illustrates the distribution of these various approaches among the management challenges and program risks we identified. Agencies’ fiscal year 2000 plans contain goals and measures that directly address about 40 percent of the identified management challenges and program risks. For example, the Department of Energy’s (DOE) plan contains goals and measures that are designed to address its major management challenges and program risks. DOE’s contract management is one of the areas on our high-risk list, and this is especially important because DOE relies on contractors to perform about 90 percent of its work. Under DOE’s corporate management goal, one objective is to improve the delivery of products and services through contract reform and the use of businesslike practices. The strategies DOE identifies include using prudent contracting and business management approaches that emphasize results, accountability, and competition. DOE’s plan also contains three specific measures addressing contract reform. 
One of these measures is to convert one support services contract at each major site to become a performance-based service contract using government standards. On the other hand, agencies’ plans do not contain goals, measures, or strategies to resolve one-fourth of the management challenges and program risks we identified. In Treasury’s plan, for example, IRS has no goals, measures, or strategies to address several of the high-risk areas we have identified, even though important management reform initiatives are under way across the agency. Specifically, Treasury’s plan does not address internal control weaknesses over unpaid tax assessments (we found that the lack of a subsidiary ledger impairs IRS’ ability to effectively manage its unpaid assessments, a weakness that has resulted in IRS’ inappropriately directing collection efforts against taxpayers after amounts owed have been paid); the need to assess the impact of various efforts IRS has under way; the need to improve security controls over information systems and address weaknesses that place sensitive taxpayer data at risk to both internal and external threats (our high-risk update reported that IRS’ controls do not adequately reduce vulnerability to inappropriate disclosure); and weaknesses in internal controls over taxpayer receipts (specifically, there is no discussion of IRS’ plans to strengthen efforts to ensure that taxpayer receipts are securely transported, such as prohibiting the use of bicycle or other unarmed vehicle couriers; our high-risk update pointed out that IRS’ controls over tax receipts do not adequately reduce their vulnerability to theft). Treasury’s plan would be more informative if it captured IRS’ reform efforts and delineated goals and performance measures and, if necessary, developed interim measures to show IRS’ intended near-term progress toward addressing its high-risk operations. 
For about 18 percent of the over 300 management challenges and program risks we identified, agencies have established annual performance goals that appear to indirectly address these issues. For example, while SSA paid over $73 billion in 1998 in cash benefits to nearly 11 million blind and disabled beneficiaries, we found that SSA’s complex process for determining whether an individual qualifies for disability benefits has been plagued by a number of long-standing weaknesses. SSA’s disability benefit claims process is time-consuming and expensive, and SSA’s disability caseloads have grown significantly in the past decade. On the basis of our ongoing review of SSA’s disability claims process redesign effort, we found that SSA has not been able to keep its redesign activities on schedule or demonstrate that its proposed changes will significantly improve its claims process. Further, we found that few people have left the disability rolls to return to work. Although SSA’s plan does not include any direct goals or measures for its disability redesign efforts, it does include an intermediate goal for fiscal year 2000 to increase the number of Disability Insurance recipients and Supplemental Security Income recipients transitioning into the workforce by 10 percent over fiscal year 1997 levels. Finally, rather than setting goals and measures, some agencies identify strategies in their performance plans to help them meet the challenges and risks they confront. The plans we reviewed contain strategies to address about 18 percent of the identified challenges and risks. For some agencies, these strategies are clearly and directly related to the agency’s efforts to address a specific challenge or risk. For example, DOT’s lack of controls over its financial activities impairs the agency’s ability to manage programs and exposes the department to potential waste, fraud, mismanagement, and abuse. 
DOT’s fiscal year 2000 performance plan identifies financial accounting as a management challenge and addresses key weaknesses that need to be resolved before DOT can obtain an unqualified audit opinion on its fiscal year 2000 financial audit. DOT’s corporate management strategies include efforts to (1) receive an unqualified audit opinion on the department’s fiscal year 2000 consolidated financial statement and stand-alone financial statements, (2) enhance the efficiency of the accounting operation consistent with increased accountability and reliable reporting, and (3) implement a pilot of the improved financial systems environment in at least one operating administration. In other cases, however, it is unclear to what extent the strategies that agencies identify in their fiscal year 2000 annual performance plans will address the management challenges and program risks. Labor’s Inspector General has found, for example, that the department faces serious vulnerabilities within three major worker benefit programs. These program risks include the continued proliferation of unemployment insurance fraud schemes and the escalating indebtedness of the Black Lung Disability Trust Fund. Labor did not develop any performance goals to specifically address these vulnerabilities, and although its plan broadly discusses these concerns, the plan shows that Labor will rely, for example, on the Inspector General’s investigations to help identify and investigate multistate fraud schemes. Labor did not address efforts to reduce the indebtedness of the Black Lung Disability Trust Fund. Similarly, another challenge the Inspector General identified is Labor’s need to ensure that weaknesses, vulnerabilities, and criminal activity are identified and addressed. Here again, Labor’s plan indicates that it will rely on the Inspector General’s investigations to address this challenge. 
Because the Inspector General has already identified these management challenges and program risks, it is unclear whether relying on further Inspector General investigations will be a sufficient strategy to systematically address the vulnerabilities that have been identified across several Labor programs. The fiscal year 2000 performance plans indicate that the federal government continues to make progress in showing that crosscutting efforts are being coordinated to ensure effective and efficient program delivery. Among the improvements in the fiscal year 2000 plans over what we observed in the fiscal year 1999 plans are further identification of crosscutting efforts and more inclusive listings of other agencies with which responsibility for those efforts is shared. However, similar to the situation with the 1999 plans, few agencies have attempted the more challenging task of establishing complementary performance goals, mutually reinforcing strategies, and common performance measures, as appropriate. The effective and efficient coordination of crosscutting programs is important because our work has suggested that mission fragmentation and program overlap are widespread. We have identified opportunities for improving federal program coordination in vital national mission areas covering counterterrorism, agriculture, community and regional development, health, income security, law enforcement, international affairs, and other areas. Our work has found that uncoordinated federal efforts confuse and frustrate program recipients, waste scarce resources, and undermine the overall effectiveness of the federal effort. SSA and VA improved their fiscal year 2000 plans over their fiscal year 1999 plans by linking their performance goals and objectives to crosscutting program efforts. 
SSA, under its goal “to make SSA program management the best-in-business, with zero tolerance for fraud and abuse,” lists 14 crosscutting areas of coordination, including information sharing with the Department of Health and Human Services’ (HHS) Health Care Financing Administration to help SSA determine Medicaid eligibility. Similarly, VA’s fiscal year 2000 plan briefly describes an extensive array of crosscutting activities and explicitly associates applicable crosscutting activities with each key performance goal, whereas the fiscal year 1999 plan was limited to listings of other entities with crosscutting interests. Although most agencies have shown at least some improvement in their identification of crosscutting program efforts, the Department of Defense (DOD) and DOE continue to provide little information about the substantive work of interagency coordination that is taking place. For example, we found that the federal government’s effort to combat terrorism—an effort that cost about $6.7 billion in fiscal year 1997—was among the significant crosscutting programs for which DOD failed to discuss the details of coordination with other involved agencies in both its fiscal years 1999 and 2000 plans. This failure is important because, as we recently testified, opportunities continue to exist to better focus and target the nation’s investments in combating terrorism and better ensure that the United States is prioritizing its funding of the right programs in the right amounts. Similarly, DOE’s fiscal year 2000 plan does not show other agencies’ programs that contribute to results that DOE is also trying to achieve. This plan’s “means and strategies” section, under the business line of Science and Technology, provides one example. In this discussion, DOE does not identify any federal agency, such as the National Science Foundation (NSF), that may contribute to similar science and technology results. 
In contrast, under its goal of “discoveries at and across the frontier of science and engineering,” NSF’s plan identifies research facilities supported by both NSF and DOE, including the Large Hadron Collider in Switzerland. Few agencies have moved beyond identification of crosscutting efforts and strategies to include in their plans complementary performance goals to show how different program strategies are mutually reinforcing. We noted in our assessment of the fiscal year 1999 plans that an agency could increase the usefulness of its performance plan to congressional and other decisionmakers by identifying the results-oriented performance goals that involve other agencies and by showing how the agency contributes to the common result. Although incomplete, the efforts of DOT and HHS show how such an approach can provide valuable perspective to decisionmakers. For example, DOT’s fiscal year 2000 performance plan identifies goals and performance measures that mutually support crosscutting programs. The plan states that the Federal Aviation Administration and the National Aeronautics and Space Administration (NASA) have complementary performance goals to decrease by 80 percent the rate of aviation fatalities by the year 2007. However, the plan could be improved by describing how the strategies of the two agencies are mutually reinforcing. HHS also provides valuable perspective to decisionmakers by linking complementary performance goals of agencies within the department. Those linkages suggest how differing program strategies can be mutually reinforcing. For example, one of HHS’ strategic objectives is to reduce tobacco use, especially among the young. To contribute to this objective, the Centers for Disease Control and Prevention has a performance goal to reduce the percentage of teenagers who smoke by conducting education campaigns, providing funding and technical assistance to state programs, and working with nongovernmental entities. 
The Food and Drug Administration (FDA) has a complementary goal to reduce the easy access to tobacco products and eliminate the strong appeal of these products for children by conducting 400,000 compliance checks and selecting certain sites to target for intensified enforcement efforts to determine the effectiveness of different levels of effort. HHS can build upon intradepartmental efforts by aligning its performance goals with those of other federal agencies, such as the Departments of Justice and Education. While still uncommon, useful performance plans not only identify crosscutting efforts, they also describe how agencies expect to coordinate efforts with other agencies that have similar responsibilities. Plans that more directly explain strategies and tools for interagency coordination will be most helpful to Congress as it assesses the degree to which those strategies and tools are appropriate and effective and seeks best practices for use in other program areas. By way of illustration, FDA has a goal to develop and make available an improved method for the detection of several foodborne pathogens. FDA’s discussion of this goal refers to an interagency research plan that seeks to more effectively coordinate the food safety research activities of FDA and the Department of Agriculture (USDA). FDA’s discussion of joint planning, one approach to interagency coordination, demonstrates how annual performance plans can be used to develop a base of governmentwide information on the strengths and weaknesses of various coordination approaches and tools—as we suggested in our review of the fiscal year 1999 plans. Other plans, such as those of VA, SSA, and the Nuclear Regulatory Commission (NRC), also discuss coordination tools, such as cooperative training, partnerships, memorandums of understanding, bilateral agreements, and interagency task forces. 
Most fiscal year 2000 plans provide a general discussion—with DOT’s being the clearest—of the strategies and resources that the agency will use to achieve results. Thus, similar to other aspects of performance plans, substantial opportunities exist to make continued improvements in presentations of strategies and resources. To assess the degree to which an agency’s plan provides a specific discussion of strategies and resources the agency will use to achieve performance goals, we examined whether it includes (1) budgetary resources related to the achievement of performance goals; (2) strategies and programs linked to specific performance goals and descriptions of how the strategies and programs will contribute to the achievement of those goals; (3) a brief description or reference to a separate document of how the agency plans to build, maintain, and marshal the human capital needed to achieve results; and (4) strategies to leverage or mitigate the effects of external factors on the accomplishment of performance goals. Figure 5 shows the results of our assessment of the 24 agencies. We categorized each agency’s plan based on the degree to which it collectively addressed the four practices presented above. Like the fiscal year 1999 plans, most of the fiscal year 2000 plans do not consistently show how program activity funding would be allocated to agencies’ performance goals. However, individual agencies show progress in making useful linkages between their budget requests and performance goals, as we will detail in a companion letter to this report. Such progress is important because a key objective of the Results Act is to help Congress develop a clearer understanding of what is being achieved in relation to what is being spent. The Act requires that annual performance plans link performance goals to the program activities in agencies’ budget requests. 
The most informative plans would translate these linkages into budgetary terms—that is, they would show how funding is being allocated from program activities to discrete sets of performance goals. For example, SSA’s fiscal year 1999 performance plan noted that the agency’s Limitation on Administrative Expenses (LAE) account supported most of the measures in the plan. However, beyond that acknowledgement, SSA provided few details as to how budget resources would actually be allocated to support its performance goals. As a means of communicating its efforts to link budget resources to stated goals, the fiscal year 2000 plan now includes a matrix of SSA’s fiscal year 2000 administrative budget accounts by related strategic goal. For example, the matrix shows that SSA has determined that it will require $38 million to meet its strategic goal of “promoting responsive programs” and that this amount will come out of SSA’s LAE and Extramural Research accounts. As we noted in reviewing fiscal year 1999 performance plans, agencies used a variety of techniques to show relationships between budgetary resources and performance goals. Plans contain crosswalks to help identify how much funding would be needed to support discrete sets of performance goals and where that funding was included in the agency’s budget request. For example, the U.S. Geological Survey portion of the Department of the Interior’s fiscal year 2000 plan provides crosswalks showing (1) the relationship between funding for its budget program activities and funding for its “GPRA program activities” and (2) how “GPRA program activity” funding would be allocated to performance goals. In contrast, some agencies could have used such crosswalks to make their presentations more relevant for budget decisionmaking. For example, Commerce’s plan identifies requirements of $133.2 million to achieve the International Trade Administration’s (ITA) strategic goal of increasing the number of small business exporters. 
However, it is not clear how this funding level was derived from the budget activities or accounts in ITA’s budget request. In addition to providing crosswalks, some agencies also made performance information useful for resource allocation decisions by including this information in the budget justification of estimates traditionally sent to Congress in support of their requests. For example, NRC integrates its budget justification and performance plan for the first time in fiscal year 2000 as part of a broader initiative to integrate its planning, budgeting, and performance management process. Information traditionally contained in a budget justification, such as descriptions of accounts and their funding, was combined with performance information in such a way that the NRC budget justification and its plan could not be separated. Although no agency made significant changes to its account or program activity structure in fiscal year 2000 in order to clarify or simplify relationships between program activities and performance goals, some agencies mention the possibility of future change. For example, we have previously noted that VA’s program activities do not clearly align with the agency’s performance goals. In its fiscal year 2000 plan, VA states that it is working with OMB to develop a budget account restructuring proposal. Most of the fiscal year 2000 plans we reviewed relate strategies and programs to performance goals. However, few plans indicate how the strategies will contribute to accomplishing the expected level of performance. Discussions of how the strategies will contribute to results are important because they are helpful to congressional and other decisionmakers in assessing the degree to which strategies are appropriate and reasonable. Such discussions also are important in pinpointing opportunities to improve performance and reduce costs. 
As an example, DOT’s performance plan provides a specific discussion of the strategies and resources that the department will use to achieve its performance goals. For each performance goal, the plan lists an overall strategy that often clearly conveys the relationship between the strategy and the goal for achieving it, as well as specific activities and initiatives to be undertaken in fiscal year 2000. For instance, DOT expects to increase transit ridership through investments in transit infrastructure, financial assistance to metropolitan planning organizations and state departments of transportation for planning activities, research on improving train control systems, and fleet management to provide more customer service. NSF’s performance plan also presents strategies that clearly show how NSF plans to achieve its fiscal year 2000 performance goals. Specifically, the plan describes the general strategies that NSF intends to use to achieve its performance goals for the results of scientific research and education and for most of its performance goals for the NSF investment process and management. To illustrate, NSF will use a competitive merit-based review process with peer evaluations to identify the most promising ideas from the strongest researchers and educators. According to its plan, NSF will work toward the outcome goal of “promoting connections between discoveries and their use in service to society” by using the merit review process to make awards for research and education activities that will rapidly and readily feed into education, policy development, or work of other federal agencies or the private sector. On the other hand, some agencies do not adequately discuss how strategies and programs contribute to results. For example, Labor identifies in its plan 112 means and strategies to accomplish its 42 performance goals and links each strategy to a specific performance goal. 
However, in some instances, the strategies do not identify how they would help achieve the stated goals. For example, one performance goal states that 60 percent of local employment and training offices will be part of one-stop career center systems. In a related strategy, Labor states that it will “continue its support of the adoption and implementation of continuous improvement initiatives throughout the workforce development system,” but does not indicate how these efforts will help achieve the performance goal. In some cases, strategies are not provided. For example, HHS’ Administration for Children and Families (ACF) has a goal to provide children permanency and stability in their living situations, and related performance measures, such as increasing the percentage of children who are adopted within 2 years of foster care placement. However, ACF does not identify the strategies that it will rely on to achieve this goal. While agencies’ fiscal year 2000 plans show progress in relating programs and strategies to goals, few relate the use of capital assets and management systems to achieving results. Although a majority of the agencies discuss mission-critical management systems in their fiscal year 2000 performance plans—such as financial management, procurement and grants management, and other systems—few describe how the systems will support the achievement of program results or clearly link initiatives to individual goals or groups of goals. Addressing information technology issues in annual performance plans is important because of technology’s critical role in achieving results, the sizable investment the federal government has made in information technology (about $145 billion between 1992 and 1997), and the long-standing weaknesses in virtually every agency in successfully employing technology to further mission accomplishment. 
The vital role that information technology can play in helping agencies achieve their goals was not clearly described in agency plans. The failure to recognize the central role of technology in achieving results is a cause of significant concern because, under the Paperwork Reduction and Clinger-Cohen Acts, Congress put in place clear statutory requirements for agencies to better link their technology plans and information technology use to their missions and programmatic goals. SSA’s fiscal year 2000 plan provides a series of brief descriptions of key technology initiatives such as its Intelligent Workstation and Local Area Network (IWS/LAN), which is at the center of SSA’s redesign of its core business processes. However, the plan does not clearly link the IWS/LAN initiative to any goals necessary to determine its impact on workload productivity, processing times, or the accuracy rates of decisions. Considering that prior plans have stated that SSA’s strategic goals are essentially unachievable unless SSA invests wisely in information technology, such as IWS/LAN, a clearer, more-direct link between technology initiatives and the program results they are meant to support would enhance the usefulness of the plan. On the other hand, USDA’s performance plan, which is made up of USDA component plans, frequently explains how proposed capital assets and management systems will support the achievement of program results. For example, the plan for the Agricultural Marketing Service, a component of USDA, describes how a proposed funding increase will provide for the modernization and the replacement of its Processed Commodities Inventory Management System. This system supports such activities as planning, procurement, and accounting for more than $1 billion of domestic and $562 million of foreign commodities annually. The plan further notes that studies have indicated that a modernized system will generate significant efficiency improvements and considerable cost savings. 
Most of the fiscal year 2000 annual performance plans do not sufficiently address how the agencies will use their human capital to achieve results. Specifically, few of the plans relate—or reference a separate document that relates—how the agency will build, marshal, and maintain the human capital needed to achieve its performance goals. This suggests that one of the central attributes of high-performing organizations—the systematic integration of mission and program planning with human capital planning—is not being effectively addressed across the federal government. The general lack of attention to human capital issues is a very serious omission because only when the right employees are on board and provided the training, tools, structure, incentives, and accountability to work effectively is organizational success possible. Although the plans often discuss human capital issues in general terms, such as recruitment and training efforts, they do not consistently discuss other key human capital strategies used by high-performing organizations. For example, SBA’s plan discusses its need to “transition” and “reshape” its workforce to become a 21st century leading edge institution and the agency’s intention to spend $3 million to train its staff in the skills needed to meet its mission. However, the plan does not discuss the types of human resources skills needed to achieve SBA’s fiscal year 2000 performance goals or the types of training to be provided to help ensure that SBA’s staff have the needed skills. As another example, NRC’s plan uses a table to show the funds and staff that it requested for the 13 programs that constitute the nuclear reactor safety strategic arena. Although NRC provides some information on the recruitment, training, and use of staff, it does not discuss the knowledge, skills, and abilities needed to achieve results. 
Such a discussion would be particularly helpful since NRC has been downsizing in response to congressional pressure and our prior work has shown several federal agencies’ downsizing efforts were not well-planned and contributed to staff shortages and skills gaps in critical areas. Unlike most plans, VA’s fiscal year 2000 performance plan provides an example of how a human capital initiative is tied to, and necessary for, achieving performance goals. VA’s plan identifies performance goals to increase compensation claim processing accuracy and to reduce claim processing time. VA’s performance plan notes that the Veterans Benefits Administration (VBA) will need to hire and train additional employees to replace a sizable portion of the compensation and pension claims processing workforce who will become eligible for retirement within 5 years. According to its performance plan, to train these new employees as well as existing employees, VBA is developing training packages using instructional systems development methodology and will measure training effectiveness through performance-based testing, which is intended to lead to certification of employees. High-performing organizations seek to align employee performance management with organizational missions and goals. Our prior work looking at early Results Act implementation efforts found that linking employee performance management to results is a substantial and continuing challenge for agencies. The plans for DOT and VA provide valuable discussions of the approaches those agencies are using to “contract” with senior managers for results. Such discussions are informative because they clearly show the agency’s commitment to achieving results and provide a basis for lessons learned and best practices for other agencies to consider. DOT’s plan notes that the department has incorporated all of its fiscal year 1999 performance goals into performance agreements between administrators and the Secretary. 
At monthly meetings with the Deputy Secretary, the administrators are to report progress toward meeting these goals and program adjustments that may be undertaken throughout the year. VHA, a component of VA, also uses a performance contracting process whereby the Under Secretary for Health negotiates performance agreements with all of VHA’s senior executives. These performance agreements focus on 15 quantifiable performance targets. In addition, executives are held accountable for achieving goals pertaining to workforce diversity, labor-management partnerships, and staff education and training. Plans are under way to extend the performance contract approach throughout VHA. Unlike the fiscal year 1999 plans, the majority of the fiscal year 2000 performance plans identify external factors that could affect achievement of strategic and performance goals. However, far fewer agencies discuss the strategies they will use to leverage or mitigate the effects of identified external factors. Such discussions can help congressional and other decisionmakers determine if the agency has the best mix of program strategies in place to achieve its goals or if additional agency or congressional actions are needed to achieve results. For example, Commerce’s plan identifies many of the external factors that could affect the Patent and Trademark Office’s (PTO) ability to achieve its four strategic goals, but the plan does not clearly describe or indicate how PTO will mitigate the effect of these factors. Under PTO’s strategic goal to “grant exclusive rights, for limited times, to inventors for their discoveries,” the plan states that the patent business’ workload is dependent on foreign economies because about 50 percent of patent applications are from overseas. The plan recognizes that changes in foreign economies could impact PTO’s workload and affect its revenue, but it does not indicate how PTO would adjust to any changes in incoming patent applications from these countries. 
An agency that improved in this area over last year is USDA’s Grain Inspection, Packers and Stockyards Administration (GIPSA). In its fiscal year 1999 plan, GIPSA did not identify any external factors; however, in its fiscal year 2000 plan, it identifies several important external factors and provides mitigation strategies to address them. For example, GIPSA plans to increase the efficiency of grain marketing by streamlining grain inspection and weighing processes and by providing objective measures of, among other things, grain quality. The majority of the fiscal year 2000 performance plans we reviewed provide only limited confidence that performance information will be credible, and agencies need to make substantial progress in this area. Only the plans for Education, Justice, DOT, and SSA provide even general confidence that their performance information will be credible. To assess the degree to which an agency’s plan provides confidence that the agency’s performance information will be credible, we examined whether it describes (1) efforts to verify and validate performance data, and (2) data limitations, including actions to compensate for unavailable or low-quality data and the implications of data limitations for assessing performance. Figure 6 shows the results of our assessment of the 24 agencies. We categorized each agency’s plan based on the degree to which it collectively addressed the two practices presented above. Like the fiscal year 1999 performance plans, most of the fiscal year 2000 performance plans lack information on the actual procedures the agencies will use to verify and validate performance information. Congressional and executive branch decisionmakers must have assurance that the program and financial data being used will be sufficiently timely, complete, accurate, useful, and consistent if these data are to inform decisionmaking. Furthermore, in some cases, data sources are not sufficiently identified. 
For example, the Department of State’s performance plan includes data sources that are sometimes vaguely expressed as “X report” or “Bureau X records.” Also, SBA identifies sources and means to validate performance data typically with one or two word descriptors, such as “publications” or “SBA records.” Moreover, few agencies provide explicit discussions of how they intend to verify and validate performance data. For example, some of the verification processes described in HHS’ Substance Abuse and Mental Health Services Administration’s (SAMHSA) performance plan do not provide confidence in the credibility of its performance information. Regarding the validity of data that will be used to measure progress in offering outreach services to homeless and mentally ill persons, SAMHSA states “[s]ince the sources of the data are the local agencies that provide the services, the quality of the data is very good.” SAMHSA appears to be assuming that these data are valid without indicating whether it plans to verify the quality of the data or that it has conducted prior studies that confirm the basis for SAMHSA’s confidence. Similarly, the performance plan of the Rural Utilities Service (RUS), a component of USDA, contains a limited discussion of the verification and validation of data relating to goals and measures for its electric program. The RUS plan states that (1) the relevant data are available in records from RUS’ automated systems, RUS’ borrower-reported statistics, and USDA’s Economic Research Service (ERS); (2) RUS has had long experience with its internal data and is highly confident of its accuracy; and (3) it considers ERS’ data to be very reliable. RUS, however, does not discuss the basis for its confidence in its or ERS’ data accuracy and reliability. On the other hand, a few agencies incorporated in their performance plans a discussion of procedures to verify and validate data. 
These procedures include external reviews, standardization of definitions, statistical sampling, and Inspector General quality audits. For example, VA is taking steps to validate measurement systems; developing processes for staff and independent consultants to examine methodologies; having models reviewed by expert panels; and obtaining independent evaluations from nationally recognized experts to review methods of data collection, statistical analysis, and reporting. The plan states that external reviews are essential in order to help depoliticize issues related to data validity and reliability. Also, Education describes working with the National Postsecondary Education Cooperative to improve the efficiency and usefulness of data reported on postsecondary education by standardizing definitions of key variables, avoiding duplicate data requests, and increasing the level of communications between the major providers and users of postsecondary data. The plan also outlines a 5-year strategy to streamline and benchmark the collection of elementary and secondary program data. The goal of this system is to provide accurate, comparable information about federal program results to all program participants. Education also plans to work with its Inspector General to independently monitor the quality of its data in high-priority areas, such as student financial aid. Similar to our findings with the fiscal year 1999 performance plans, we found that, in general, the fiscal year 2000 performance plans do not include discussions of strategies to address known data limitations. When performance data are unavailable or of low quality, a performance plan would be more useful to decisionmakers if it briefly discussed how the agency plans to deal with such limitations. 
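To illustrate the statistical-sampling technique mentioned above, the sketch below estimates an error rate for agency-reported performance records by drawing a random sample and re-verifying each sampled record against its source value. This is a simplified illustration, not a procedure drawn from any agency's plan; all record names, counts, and discrepancies are hypothetical.

```python
import random

def estimate_error_rate(records, verify, sample_size, seed=0):
    """Estimate the share of reported records that fail re-verification.

    records     -- list of reported performance records (hypothetical)
    verify      -- function returning True if a record matches its source
    sample_size -- number of records to draw at random
    seed        -- fixed seed so the sample (and estimate) is reproducible
    """
    rng = random.Random(seed)
    sample = rng.sample(records, min(sample_size, len(records)))
    errors = sum(1 for r in sample if not verify(r))
    return errors / len(sample)

# Hypothetical reported caseload counts paired with "true" source values.
reported = [{"office": i, "reported": 100 + i, "source": 100 + i}
            for i in range(200)]
reported[5]["reported"] += 3   # plant a discrepancy
reported[42]["reported"] -= 1  # plant another

rate = estimate_error_rate(reported,
                           lambda r: r["reported"] == r["source"],
                           sample_size=50)
print(f"Estimated error rate: {rate:.1%}")
```

In practice, an agency or its Inspector General would typically stratify the sample (for example, by field office or program) and choose the sample size to reach a stated confidence level, rather than use the simple random draw shown here.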
Moreover, discussions of the challenges that an agency faces in obtaining high-quality performance data are helpful to decisionmakers in determining the implications for assessing the subsequent achievement of performance goals. For example, HHS’ ACF performance plan notes that, in the area of child support enforcement, not all states have certified statewide automated systems and some states still maintain their data manually. Additionally, the agency’s Office of Child Support Enforcement has reported that, where these systems are not in place, problems of duplication and missing information could result. Yet, the plan does not discuss the actions ACF will take to compensate for possibly unreliable data. The Environmental Protection Agency’s performance plan describes the databases used for tracking compliance with requirements under the Safe Drinking Water Act and the Clean Water Act, and the quality assurance and quality control programs, to ensure the accuracy and reliability of these databases. Nevertheless, a number of states have challenged the compliance information in the database for Safe Drinking Water. Although the agency has acknowledged the problem and undertaken a major effort to address it, this data limitation was not discussed in the plan. Thus, decisionmakers are not provided with context that would be helpful in considering whether the agency will be able to confidently report on the degree to which it has achieved its goals. On the other hand, DOT’s performance plan provides important context for decisionmakers by including a good discussion of data limitations and, in particular, the implications of those limitations for the quality of the data. For example, the plan defines the performance measure for maritime oil spills—gallons spilled per million gallons shipped—as only counting spills of less than one million gallons from regulated vessels and waterfront facilities and not counting other spills. 
The plan further explains that a limitation of the data is that it may underreport the amount spilled because it excludes nonregulated sources and major oil spills. However, it explains that large oil spills are excluded because they occur rarely and, when they do occur, would have an inordinate influence on statistical trends. The plan also explains that measuring only spills from regulated sources is more meaningful for program management. A few performance plans provide information on how agencies are working to improve the availability and quality of their data. For example, the U.S. Agency for International Development (USAID) indicates that it is seeking ways to improve data quality for some of its performance indicators. For its goal of reducing by 10 percent the number of deaths due to infectious diseases of major public health importance by 2007, USAID reports that no data are available on a country-specific basis and that it will be working with the World Health Organization to collect such data by 2002. In other instances, USAID indicates that it will seek to ensure collection of relevant data by conducting periodic surveys in USAID- assisted countries. Federal decisionmakers must have reliable and timely performance and financial information to ensure adequate accountability, manage for results, and make timely and well-informed judgments. Historically, however, such information has not been available, and agencies’ and Inspector General reports, as well as our own work, have identified a series of persistent limitations in the availability of quality financial data for decisionmaking. Without reliable data on costs, decisionmakers cannot effectively control and reduce costs, assess performance, and evaluate programs. Under the CFO Act, agencies are expected to fill this gap by developing and deploying more modern financial management systems and routinely producing sound cost information. 
Toward that end, the 24 agencies covered by the CFO Act have been required to prepare annual audited financial statements since fiscal year 1996. These audits have shown how far many agencies have to go to generate reliable year-end information. Table 1 shows the status of audit opinions for the 24 CFO Act agencies for fiscal year 1998 as of June 30, 1999. For some agencies, the preparation of financial statements requires considerable reliance on ad hoc programming and analysis of data produced by inadequate financial management systems that are not integrated or reconciled, and that often require significant adjustments. While obtaining unqualified “clean” audit opinions on federal financial statements is an important objective, it is not an end in and of itself. The key is to take steps to continuously improve internal controls and underlying financial and management information systems as a means to ensure accountability, increase the economy, improve the efficiency, and enhance the effectiveness of government. These systems must generate timely, accurate, and useful information on an ongoing basis, not just as of the end of the fiscal year. The overarching challenge in generating timely, reliable data throughout the year is overhauling financial and related management information systems. More fundamentally, the Federal Financial Management Improvement Act of 1996 (FFMIA) requires that agency financial management systems comply with (1) financial systems requirements, (2) federal accounting standards, and (3) the U.S. Government Standard General Ledger at the transaction level. At the time of our report, financial statement audits for fiscal year 1998 had been completed on 20 of the 24 CFO Act agencies. Of those 20, financial management systems for 17 agencies were found by auditors to be in substantial noncompliance with FFMIA’s requirements. The three agencies in compliance were DOE, NASA, and NSF. 
Examples of reported problems at several agencies are discussed below. Financial audits at several Commerce bureaus continue to disclose serious data reliability problems. The performance plan does not acknowledge the performance implications of its financial management and consolidated financial statement problems or delays in implementing its new Consolidated Administrative Management System. However, Commerce’s performance plan discusses a request for a $2.1 million increase in funding to (1) target specific problems, ensure the integrity of the department’s financial statements, and achieve an unqualified financial audit opinion across the department and (2) help provide an integrated financial management system to comply with federal accounting requirements. DOD’s plan acknowledges that data for certain measures and indicators come from financial and accounting systems that have experienced problems. However, as we have reported, long-standing weaknesses in DOD’s financial management operations undermine DOD’s ability to effectively manage its vast operations, limit the reliability of financial information provided to Congress, and continue to result in wasted resources. In addition, we recently reported that USAID’s internal accounting and information systems do not have the capacity to generate reliable data to support its performance plan and to produce credible performance reports. USAID’s financial management system does not meet the federal financial management systems requirements, and material weaknesses in internal controls impair the integrity of its financial information. The agency has indicated that it is committed to developing a financial management system that will meet federal standards, but the USAID Inspector General recently reported that the agency has made only limited progress in correcting its system deficiencies. 
Agencies can continue to build on the progress that has been made over the last year in improving the performance plans by focusing their efforts on five key areas that offer the greatest opportunities for continuing improvements. These areas—which we identified in assessing last year’s plans—include (1) better articulating a results orientation, (2) coordinating crosscutting programs, (3) showing the performance consequences of budget decisions, (4) clearly showing how strategies will be used to achieve results, and (5) building the capacity within agencies to gather and use performance information. Better articulating a results orientation. The fiscal year 2000 plans provide a general picture of agencies’ intended performance. Each of the plans contains at least some results-oriented goals and related performance measures, and many of the plans contain informative baseline and trend data. Nonetheless, continuing opportunities exist to more consistently articulate a results orientation. Some agencies have used multiyear and intermediate goals to provide clearer pictures of intended performance. Likewise, plans with goals and strategies that address mission-critical management challenges and program risks show that agencies are striving to build the capacity to be high-performing organizations and reduce the risk of waste, fraud, abuse, and mismanagement. Coordinating crosscutting programs. Interagency coordination is important for ensuring that crosscutting program efforts are mutually reinforcing and efficiently implemented. While agencies continue to make progress, the substantive work of coordination would be evident if performance plans more often contained complementary performance goals, mutually reinforcing strategies, and common or complementary performance measures. Also not yet widespread are discussions of how crosscutting program efforts are being coordinated. 
Crosscutting programs, by definition, involve more than one agency, and coordination therefore requires the ability to look across agencies and ensure that the appropriate coordination is taking place. Given OMB’s position in the executive branch, its leadership is particularly important in addressing this issue. Showing the performance consequences of budget decisions. Some agencies have begun to develop useful linkages between their performance plans and budget requests. However, persistent challenges in performance measurement and deficiencies in cost accounting systems continue to hamper such efforts. The progress that has been made, the challenges that persist, and Congress’ interest in having credible, results-oriented information for making resource allocation decisions underscore the importance of continued improvement in showing the performance consequences of budgetary decisions. In a previous report, we recommended that the Director of OMB assess the approaches agencies are using to link performance goals to the program activities of their budget requests. We further recommended that OMB work with agencies and Congress to develop a constructive and practical agenda to further clarify the relationship between budgetary resources and results. Clearly showing how strategies will be used to achieve results. While agencies’ fiscal year 2000 plans contain valuable and informative discussions of how strategies and programs relate to goals, additional progress is needed in explaining how strategies and programs will be used to achieve results. Specifying clearly in performance plans how strategies are to be used to achieve results is important to managers and other decisionmakers in order to determine the right mix of strategies, that is, one which maximizes performance while limiting costs. We also found that most fiscal year 2000 performance plans do not sufficiently address how the agency will use its human capital to achieve results. 
This lack of attention to human capital issues suggests that much more effort is needed to integrate program performance planning and human capital planning. More generally, linking the use of capital assets and management systems to results still is not consistently being done. Building the capacity within agencies to gather and use performance information. In order to successfully measure progress toward intended results, agencies need to build the capacity to gather and use performance information. However, most of the agencies’ fiscal year 2000 performance plans provide limited confidence in the credibility of the information that is to be used to assess agencies’ progress toward achieving results. Many plans lack specific detail on the actual procedures the agencies will use to verify and validate performance information, and there are few discussions of known data limitations, such as unavailable or low-quality data, and strategies to address these limitations. We recommend that the Director of OMB ensure that executive agencies make continued progress in improving the usefulness of performance planning for congressional and executive branch decisionmaking. As discussed above, in our assessment of the fiscal year 1999 performance plans, we suggested five key improvement opportunities that provide an ongoing agenda for improving the usefulness of agencies’ performance plans. In assessing the fiscal year 2000 plans, we identified important opportunities for continuing improvements in agencies’ plans in each of those five areas: Better articulating a results orientation, with particular attention to ensuring that performance plans show how mission-critical management challenges and program risks will be addressed. 
Coordinating crosscutting programs, with particular attention to demonstrating that crosscutting programs are taking advantage of opportunities to employ complementary performance goals, mutually reinforcing strategies, and common or complementary performance measures, as appropriate. 
Showing the performance consequences of budget and other resource decisions. 
Clearly showing how strategies will be used to achieve results, with particular attention to integrating human capital and program performance planning. 
Building the capacity within agencies to gather and use performance information, with particular attention to ensuring that agencies provide confidence that performance information will be credible. 
Continued improvements in agencies’ plans should help Congress in building on its recent and ongoing use of performance plans to help inform its own decisionmaking. In that regard, we have long advocated that congressional committees of jurisdiction hold augmented oversight hearings on each of the major agencies at least once each Congress and preferably on an annual basis. Information on missions, goals, strategies, resources, and results could provide a consistent starting point for each of these hearings. Such hearings also will further underscore for agencies the importance that Congress places on creating high-performing executive organizations. Performance planning under the Results Act should allow for more informed discussions about issues such as: 
Whether the agency is pursuing the right goals and making progress toward achieving them. 
Whether the federal government is effectively coordinating its responses to pressing national needs. 
Whether the federal government is achieving an expected level of performance for the budgetary and other resource commitments that have been provided. 
The degree to which the agency has the best mix of programs, initiatives, and other strategies to achieve results. 
- The progress the agency is making in addressing mission-critical management challenges and program risks.

- The efforts underway to ensure that the agency’s human capital strategies are linked to strategic and programmatic planning and accountability mechanisms.

- The status of the agency’s efforts to use information technology to achieve results.

On July 1, 1999, we provided a draft of this letter to the Director of OMB for comment. We did not ask the Director to comment on the agency appendixes because those appendixes were drawn from our individual reviews of the fiscal year 2000 performance plans, on which the agencies were asked to comment. As indicated in each of the appendixes, the complete text of our observations and agencies’ comments on those observations are included on the Internet. On July 12, 1999, a responsible OMB senior staff member stated that the agency did not have any comments on this report. As agreed, unless you announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. At that time, we will send copies of this report to Senator Joseph I. Lieberman, Representative Richard A. Gephardt, and Representative Henry A. Waxman in their respective capacities as the Ranking Minority Member of the Senate Committee on Governmental Affairs, Minority Leader of the House of Representatives, and Ranking Minority Member of the House Committee on Government Reform. We are also sending copies to the Honorable Jacob J. Lew, Director of OMB, and will make copies available to others on request. The major contributors to this report are acknowledged in appendix XXVI. If you have any questions about this report or would like to discuss it further, please contact J. Christopher Mihm at (202) 512-8676.
To summarize our observations on agencies’ fiscal year 2000 performance plans and to identify the degree of improvement over the fiscal year 1999 plans, we analyzed the information contained in our observations of the 24 individual CFO Act agencies’ performance plans. Consistent with our approach last year in reviewing the fiscal year 1999 annual plans, our reviews of each of the agencies’ performance plans and our summary analysis of the 24 plans were based on criteria from our evaluator’s guide and our congressional guide, which in turn are based on the Results Act; OMB Circular No. A-11, Part 2; and other related guidance. In the guides, we collapsed the Results Act’s requirements for annual performance plans into three core questions that focus on performance goals and measures, strategies and resources, and verification and validation. The criteria from the guides were supplemented by practices and examples included in our report Agency Performance Plans: Examples of Practices That Can Improve Usefulness to Decisionmakers (GAO/GGD/AIMD-99-69, Feb. 26, 1999), which builds on the improvement opportunities identified in our fiscal year 1999 performance plans’ summary report. From that work, we derived practices to identify each plan’s strengths and weaknesses and determined the extent to which the plan includes three key elements of informative performance plans: (1) clear picture of intended performance, (2) specific discussion of strategies and resources, and (3) confidence that performance information will be credible. For each of these three key elements, we classified the plan into one of four summary characterizations based on the degree to which the individual plan contains the associated practices. To assess the first key element, clarity of the picture of intended performance across the agency, we based our judgments on the degree to which an agency’s performance plan contains the following practices:

1. Sets of performance goals and measures that address program results and the important dimensions of program performance and balance competing program priorities. If appropriate, the plan contains intermediate goals and measures, such as outputs or intermediate outcomes, that are linked to end outcomes and show progress or contribution to intended program results. If appropriate, the plan contains projected target levels of performance for current and multiyear goals to convey what a program is expected to achieve for that year and in the long term.

2. Baseline and trend data for past performance to show how a program’s anticipated performance level compares with improvements or declines in past performance.

3. Performance goals or strategies to resolve mission-critical management problems.

4. Identification of crosscutting programs (i.e., those programs that contribute to the same or similar results), complementary performance goals and common or complementary performance measures to show how differing program strategies are mutually reinforcing, and planned coordination strategies.

To address the first element concerning the degree to which a plan provides a clear picture of intended performance across the agency, we characterized each plan in one of four ways: (1) provides a clear picture of intended performance across the agency, (2) provides a general picture, (3) provides a limited picture, or (4) provides an unclear picture. To assess the second key element, specificity of the discussion of strategies and resources the agency will use to achieve performance goals, we based our judgments on the degree to which an agency’s performance plan contains the following practices:

5. Budgetary resources related to the achievement of performance goals.

6. Strategies and programs linked to specific performance goals and descriptions of how the strategies and programs will contribute to the achievement of those goals.
Specifically, does the plan do the following:

- Identify planned changes to program approaches in order to accomplish results-oriented goals. For example, the plan may include a description of performance partnerships with state, local, and third-party providers that focus accountability while providing the flexibility needed to achieve results.

- Explain, through a brief description or reference to a separate document, how proposed capital assets and mission-critical management systems (e.g., information technology, financial management, budget, procurement, grants management, and other systems) will support the achievement of program results.

7. A brief description or reference to a separate document concerning how the agency plans to build, maintain, and marshal the human capital needed to achieve results.

8. Strategies to leverage or mitigate the effects of external factors on the accomplishment of performance goals.

To address the second element concerning the extent to which a plan includes specific discussions of strategies and resources, we characterized each plan in one of four ways: (1) contains specific discussion of strategies and resources, (2) general discussion, (3) limited discussion, or (4) no discussion. To assess the final key element, level of confidence that the agency’s performance information will be credible, we based our judgments on the degree to which an agency’s performance plan contains the following practices:

9. Describes efforts to verify and validate performance data.

10. Describes data limitations, including actions to compensate for unavailable or low-quality data, and the implications of data limitations for assessing performance.

To address the third element concerning the extent to which a plan provides confidence that performance information will be credible, we characterized each plan in one of four ways as providing: (1) full confidence, (2) general confidence, (3) limited confidence, or (4) no confidence.
To determine the degree of improvement in the individual plans, we also examined the extent to which an agency’s fiscal year 2000 performance plan addressed the weaknesses that we identified in reviewing its fiscal year 1999 plan. Based on our analysis, we determined the level of improvement in agencies’ plans by using one of four characterizations: (1) much improvement; (2) moderate improvement; (3) little, if any, improvement; or (4) no improvement. As needed, we also reviewed parts of selected agencies’ fiscal year 2000 annual performance plans to supplement our analysis of our individual agency reviews and to elaborate further on particular issues. To further help us identify opportunities for agencies to improve future performance plans, we also drew on other related work. We reviewed agency performance plans from February through June 1999 and did our work according to generally accepted government auditing standards. On July 1, 1999, we requested comments from the Director of OMB on a draft of this report. On July 12, 1999, a responsible OMB senior staff member stated that the agency did not have any comments on this report. On April 13, 1999, we briefed congressional staff on our analysis of the Department of Agriculture’s (USDA) performance plan for fiscal year 2000. The following are our overall observations on the plan. The complete text (GAO/RCED-99-187) of our observations and USDA’s comments on those observations are available at http://www.gao.gov/cgi-bin/getrpt?rced-99-187 only on the Internet. We provided a draft of this summary to the U.S. Department of Agriculture on April 14, 1999, for its review and comment. We met with USDA’s Chief Financial Officer; the Director, Planning and Accountability Division; and other USDA officials from the Office of the Chief Financial Officer and the Office of Budget and Program Analysis to obtain their oral comments. 
The officials generally concurred with our observations, describing them as “fair and balanced.” They provided clarifying comments and technical corrections, which we have incorporated as appropriate. See http://www.gao.gov/cgi-bin/getrpt?rced-99-187 for additional information on USDA’s comments (in GAO/RCED-99-187) on our observations. On April 9, 1999, we briefed congressional staff on our analysis of the Department of Commerce’s performance plan for fiscal year 2000. The following are our overall observations on the plan. The complete text (GAO/GGD-99-117R) of our observations and Commerce’s comments on those observations are available at http://www.gao.gov/corresp/gg99117r.pdf only on the Internet. On June 4, 1999, we received Commerce’s written comments from the Acting Chief Financial Officer and Assistant Secretary for Administration on a draft of this analysis of Commerce’s fiscal year 2000 annual performance plan. She agreed that Commerce needs to strengthen its efforts to verify and validate performance data. She said that Commerce believes that the verification and validation of performance data is a critical issue and that it devoted considerable effort over the past year to defining its methodology and expects to focus in the coming year on ensuring that its performance measurements are reliable and useful. However, she said that there are two major areas in which Commerce disagrees with the draft. These areas are our (1) characterization that Commerce has made only “moderate” improvement relative to its fiscal year 1999 plan and (2) observation that the plan does not provide a complete picture of intended performance for the 2000 Decennial Census. See http://www.gao.gov/corresp/gg99117r.pdf for additional information on Commerce’s comments (in GAO/GGD-99-117R) on our observations. On April 16, 1999, we briefed congressional staff on our analysis of the Department of Defense’s (DOD) performance plan for fiscal year 2000.
The following are our overall observations on the plan. The complete text (GAO/NSIAD-99-178R) of our observations and DOD’s comments on those observations is available at http://www.gao.gov/corresp/ns99178r.pdf only on the Internet. In oral comments on a draft of our observations, DOD did not agree with our overall assessment of the performance plan and asked that we include its views on two issues. First, DOD officials stated that the principal output and outcome of DOD’s annual budget is a specified military force ready to go to war and that the fiscal year 2000 performance plan defines performance goals relevant to that objective. The performance goals establish a measurable path to achievement of the corporate goals articulated in the Department’s strategic plan. Second, officials stated that our characterization of the plan as being of limited use to decisionmakers does not fully reflect their views. They noted that this year’s plan contains more information and is more useful to internal departmental decisionmakers than last year’s plan. However, they recognized that the plan could be made more useful to external decisionmakers by including additional information, such as more outcome-oriented measures for business operations such as logistics, which accounts for over half of the Department’s budget. We agree that ready forces are a key output of DOD’s efforts. However, we continue to believe that better results information will require a qualitative assessment of the conduct of military missions, as well as an assessment of investments in technology to improve weapons capabilities. See http://www.gao.gov/corresp/ns99178r.pdf for additional information on DOD’s comments (in GAO/NSIAD-99-178R) on our observations. On April 9, 1999, we briefed congressional staff on our analysis of the Department of Education’s performance plan for fiscal year 2000. The following are our overall observations on the plan.
The complete text (GAO/HEHS-99-136R) of our observations and the Department of Education’s comments on those observations are available at http://www.gao.gov/corresp/he99136r.pdf only on the Internet. On May 4, 1999, we obtained oral comments from Department of Education officials, including the Director of the Planning and Evaluation Service and staff from its Office of Legislation and Congressional Affairs, on a draft of our summary of Education’s fiscal year 2000 annual performance plan. These officials generally agreed with our assessment, saying it provided an accurate and constructive opinion of their fiscal year 2000 performance plan. They also acknowledged that additional work is needed in certain areas of the plan and said they plan to continue working with OMB and others to further improve the plan. See http://www.gao.gov/corresp/he99136r.pdf for additional information on Education’s comments (in GAO/HEHS-99-136R) on our observations. On April 8, 1999, we briefed congressional staff on our analysis of the Department of Energy’s (DOE) performance plan for fiscal year 2000. The following are our overall observations on the plan. The complete text (GAO/RCED-99-218R) of our observations and DOE’s comments on those observations are available at http://www.gao.gov/corresp/rc99218r.pdf only on the Internet. On April 13, 1999, we obtained oral comments from the Director, Strategic Planning, Budget & Program Evaluation, and members of his office on a draft of our analysis of DOE’s fiscal year 2000 annual performance plan. These officials generally agreed with our observations but pointed out several areas they felt needed correction and clarification. DOE believes its use of goals for three fiscal years in the annual plan provides a context for evaluating the reasonableness of the goals. However, DOE also believes the goals of the strategic plan need to be quantifiable to provide a clearer context.
We revised the language in the report to show that the goals of the strategic plan need to be quantifiable. Additionally, DOE believes that we improperly cited a weakness in its estimating of environmental liabilities as a performance verification and validation issue because it is not a performance issue. We agree and deleted this information from the report. Finally, since DOE intends to complete all of its “Year 2000” activities by September 30, 1999, it did not include goals for this effort in its annual plan. We believe that DOE should include Year 2000 goals in the annual plan because (1) its tight schedule leaves little time to address unanticipated concerns and (2) several agencies will be developing and testing some Year 2000 strategies through the end of 1999. See http://www.gao.gov/corresp/rc99218r.pdf for additional information on DOE’s comments (in GAO/RCED-99-218R) on our observations. On April 13, 1999, we briefed congressional staff on our analysis of the Department of Health and Human Services’ (HHS) performance plan for fiscal year 2000. The following are our overall observations on the plan. The complete text (GAO/HEHS-99-149R) of our observations and HHS’ comments on those observations are available at http://www.gao.gov/corresp/he99149r.pdf only on the Internet. On April 27, 1999, the HHS Assistant Secretary for Management and Budget provided written comments on our draft observations on the HHS plan. The Department generally did not agree with our assessment; however, it stated that it will continue to work with the Office of Management and Budget and HHS’ performance partners to ensure that future plans continue to provide data that support budget and program decisions.
HHS disagreed with our observations in five specific areas: (1) agency performance goals are not consistently measurable; (2) the plan does not adequately address key management challenges; (3) HHS will not have credible data; (4) HHS does not adequately discuss the strategies and resources the agency will use to achieve its performance goals; and (5) HHS does not provide sufficient information about strategies to mitigate external factors and to marshal the human capital needed to achieve results. We made technical corrections where appropriate, but continue to believe that our assessment was accurate. For example, we noted that some significant programs do not have performance goals. See http://www.gao.gov/corresp/he99149r.pdf for additional information on HHS’ comments (in GAO/HEHS-99-149R) on our observations. On April 22, 1999, we briefed congressional staff on our analysis of the Department of Housing and Urban Development’s (HUD) performance plan for fiscal year 2000. The following are our overall observations on the plan. The complete text (GAO/RCED-99-208R) of our observations and HUD’s comments on those observations are available at http://www.gao.gov/corresp/rc99208r.pdf only on the Internet. We provided HUD with a draft of this report for review and comment. On May 11, 1999, Deputy Secretary Saul N. Ramirez responded with written comments. In these comments, HUD generally agreed with our report, stated that it captured the annual performance plan’s major improvements, and stated that the Department is committed to taking specific actions to improve in the areas we identified. However, HUD raised specific concerns about our observations on the credibility of its performance measurement data and its interagency coordination strategies. We did not revise our observations as a result of the comments; however, we modified the report, where appropriate, to clarify our observations on how the plan could be improved. 
See http://www.gao.gov/corresp/rc99208r.pdf for additional information on HUD’s comments (in GAO/RCED-99-208R) on our observations. On May 17, 1999, we briefed congressional staff on our analysis of the Department of the Interior’s performance plan for fiscal year 2000. The following are our overall observations on the plan. The complete text (GAO/RCED-99-207R) of our observations and Interior’s comments on those observations are available at http://www.gao.gov/corresp/rc99207r.pdf only on the Internet. On April 7, 1999, we met with Interior officials, including the Deputy Assistant Secretary of Budget and Finance, the Director of the Office of Planning and Performance Management, and the Director of the Office of Financial Management, to obtain agency comments. We were subsequently provided written comments on April 9, 1999. Interior officials believe that the Department’s fiscal year 2000 performance plan meets the requirements of the Results Act and the guidelines provided by the Office of Management and Budget in Circular A-11. However, the Department acknowledges that improvements can be made to its plan. Interior also noted that the development of its performance plan is an iterative process and that progress will continue as the agency gains additional knowledge and experience with performance-based, results-oriented management. The Department did not agree with our observation that it had not made significant progress in the area of validating and verifying performance information. While Interior officials believe that some improvements can be made, they said that the fiscal year 2000 plan includes validation processes for each measure and that we did not give the Department enough credit for the progress it made in describing validation and verification measures in its plan. We agreed that the Department improved its discussion of validation and verification measures over its fiscal year 1999 plan and that additional improvements can be made.
See http://www.gao.gov/corresp/rc99207r.pdf for additional information on Interior’s comments (in GAO/RCED-99-207R) on our observations. On March 30, 1999, we briefed congressional staff on our analysis of the Department of Justice’s performance plan for fiscal year 2000. The following are our overall observations on the plan. The complete text (GAO/GGD-99-111R) of our observations and Justice’s comments on those observations is available at http://www.gao.gov/corresp/gg99111r.pdf only on the Internet. Justice officials generally agreed with the draft of our analysis. See http://www.gao.gov/corresp/gg99111r.pdf for additional information on Justice’s comments (in GAO/GGD-99-111R) on our observations. On April 9, 1999, we briefed congressional staff on our analysis of the Department of Labor’s performance plan for fiscal year 2000. The following are our overall observations on the plan. The complete text (GAO/HEHS-99-152R) of our observations and Labor’s comments on those observations are available at http://www.gao.gov/corresp/he99152r.pdf only on the Internet. On April 21, 1999, we obtained written comments from the Department of Labor’s Assistant Secretary for Administration and Management on a draft of our analysis of the Department of Labor’s fiscal year 2000 annual performance plan. Labor generally concurred with our observations of the plan’s strengths and weaknesses and acknowledged the needed plan improvements in the areas of improved data quality, better descriptions of collaboration efforts, and clearer linkages between strategies and goals. Labor also stated that it will use our analysis of its fiscal year 2000 plan as a basis for improvements to the next version of its performance plan. See http://www.gao.gov/corresp/he99152r.pdf for additional information on Labor’s comments (in GAO/HEHS-99-152R) on our observations. On April 8, 1999, we briefed congressional staff on our analysis of the Department of State’s performance plan for fiscal year 2000.
The following are our overall observations on the plan. The complete text (GAO/NSIAD-99-183R) of our observations and the Department of State’s comments on those observations are available at http://www.gao.gov/corresp/ns99183r.pdf only on the Internet. On April 13, 1999, we obtained comments from officials of State’s Office of Management Policy and Planning and the Bureau of Finance and Management Policy on a draft of our analysis of the agency’s fiscal year 2000 annual performance plan. These officials generally agreed with our analysis. However, they questioned the need for identifying the roles, responsibilities, and complementary performance goals and measures of other agencies with crosscutting programs. They believe that adding more detailed references to other agencies goes beyond what time and resources will allow. They also requested a more explicit discussion of the requirement that the plan show how State’s personnel, capital assets, and mission-critical management systems contribute to achieving performance goals. We have included additional guidance on this issue in our analysis. See http://www.gao.gov/corresp/ns99183r.pdf for additional information on State’s comments (in GAO/NSIAD-99-183R) on our observations. On April 7, 1999, we briefed congressional staff on our analysis of the Department of Transportation’s (DOT) performance plan for fiscal year 2000. The following are our overall observations on the plan. The complete text (GAO/RCED-99-153) of our observations and DOT’s comments on those observations are available at http://www.gao.gov/cgi-bin/getrpt?rced-99-153 only on the Internet. We provided copies of a draft of these observations to DOT for review and comment. The Department stated that it appreciated GAO’s favorable review of its fiscal year 2000 performance plan and indicated that it had put much work into making improvements over the fiscal year 1999 plan by addressing our comments on that plan.
DOT made several suggestions to clarify the discussion of its financial accounting system, which we incorporated. The Department acknowledged that work remains to be done to improve its financial accounting system and stated that it has established plans to do this. DOT also acknowledged the more general need for good data systems to implement the Results Act and indicated that it is working to enhance those systems. See http://www.gao.gov/cgi-bin/getrpt?rced-99-153 for additional information on DOT’s comments (in GAO/RCED-99-153) on our observations. On April 16, 1999, we briefed congressional staff on our analysis of the Department of the Treasury’s performance plan for fiscal year 2000. The following are our overall observations on the plan. The complete text (GAO/GGD-99-114R) of our observations and Treasury’s comments on those observations are available at http://www.gao.gov/corresp/gg99114r.pdf only on the Internet. On June 14, 1999, we met with the Director of Treasury’s Office of Strategic Planning and Evaluation and members of his staff to obtain oral comments on a draft of this report. The officials generally agreed with our analysis and provided some technical comments, which we incorporated as appropriate. They also said that Treasury is continually trying to improve its strategic and performance plans. Among other things, Treasury plans to ensure that updates to its bureaus’ and offices’ strategic plans include goals for high-risk programs and major management challenges. In addition, Treasury’s Office of Inspector General plans to work with the bureaus and offices to help improve their capacity to provide confidence that the performance data used to measure progress are verified and validated. See http://www.gao.gov/corresp/gg99114r.pdf for additional information on Treasury’s comments (in GAO/GGD-99-114R) on our observations.
On April 6, 1999, we briefed congressional staff on our analysis of the Department of Veterans Affairs’ (VA) performance plan for fiscal year 2000. The following are our overall observations on the plan. The complete text (GAO/HEHS-99-138R) of our observations and VA’s comments on those observations are available at http://www.gao.gov/corresp/he99138r.pdf only on the Internet. Although VA’s plan includes goals and measures for its major programs, OMB Circular A-11 recommends that plans include performance goals for management problems, particularly those that are mission critical, that could potentially impede achievement of program goals. See http://www.gao.gov/corresp/he99138r.pdf for additional information on VA’s comments (in GAO/HEHS-99-138R) on our observations. On April 16, 1999, we briefed congressional staff on our analysis of the Environmental Protection Agency’s (EPA) performance plan for fiscal year 2000. The following are our overall observations on the plan. The complete text (GAO/RCED-99-237R) of our observations and EPA’s comments on those observations are available at http://www.gao.gov/corresp/rc99237r.pdf only on the Internet. On April 13, 1999, we obtained comments from EPA on a draft of our analysis of the agency’s fiscal year 2000 annual performance plan. EPA generally agreed with our analysis and appreciated our constructive review, saying that it would continue to strive for improvements in its plan. The agency also commented on several of our observations and discussed its actions to improve the quality of its databases and information systems. See http://www.gao.gov/corresp/rc99237r.pdf for additional information on EPA’s comments (in GAO/RCED-99-237R) on our observations. On April 7, 1999, we briefed congressional staff on our analysis of the Federal Emergency Management Agency’s (FEMA) performance plan for fiscal year 2000. The following are our overall observations on the plan.
The complete text (GAO/RCED-99-226R) of our observations and FEMA’s comments on those observations are available at http://www.gao.gov/corresp/rc99226r.pdf only on the Internet. FEMA questioned several of our observations, including noting that it included information on external factors that could affect its ability to achieve its performance goals in both its September 30, 1997, strategic plan and within certain performance goals in the performance plan. We believe FEMA should include additional references to how specific external factors could have an impact on individual performance goals and the actions FEMA can take to mitigate these factors. In addition, FEMA’s Director clarified and updated certain information, which we incorporated in our observations where appropriate. See http://www.gao.gov/corresp/rc99226r.pdf for additional information on FEMA’s comments (in GAO/RCED-99-226R) on our observations. On April 7, 1999, we briefed congressional staff on our analysis of the General Services Administration’s (GSA) performance plan for fiscal year 2000. The following are our overall observations on the plan. The complete text (GAO/GGD-99-113R) of our observations and GSA’s comments on those observations are available at http://www.gao.gov/corresp/gg99113r.pdf only on the Internet. On March 30, 1999, GSA’s Chief Financial Officer, Director of Budget, and Managing Director for Planning provided oral agency comments on a draft of our analysis of GSA’s fiscal year 2000 performance plan. They generally agreed with our analysis and said it would help them correct the weaknesses we identified as they develop next year’s plan. See http://www.gao.gov/corresp/gg99113r.pdf for additional information on GSA’s comments (in GAO/GGD-99-113R) on our observations. On April 22, 1999, we briefed congressional staff on our analysis of the National Aeronautics and Space Administration’s (NASA) performance plan for fiscal year 2000.
The following are our overall observations on the plan. The complete text (GAO/NSIAD-99-186R) of our observations and NASA’s comments on those observations are available at http://www.gao.gov/corresp/ns99186r.pdf only on the Internet. On April 27, 1999, we obtained written comments from NASA’s Associate Administrator on a draft of our analysis of NASA’s fiscal year 2000 annual performance plan. In those comments, the Associate Deputy Administrator stated that the agency generally believes that our report is balanced and will endeavor to use our observations in improving the management of the agency. NASA raised concerns about three issues that we identified in our analysis. One involved the plan’s inclusion of major management challenges identified by NASA’s Office of Inspector General (OIG). NASA stated that the report containing the OIG’s management challenges was issued subsequent to the agency’s formulation, selection, and submittal of its performance targets to the Office of Management and Budget. NASA contends that the OIG’s management challenges were identified too late to enable inclusion in the performance plan. See http://www.gao.gov/corresp/ns99186r.pdf for additional information on NASA’s comments (in GAO/NSIAD-99-186R) on our observations. On April 7, 1999, we briefed congressional staff on our analysis of the National Science Foundation’s (NSF) performance plan for fiscal year 2000. The following are our overall observations on the plan. The complete text (GAO/RCED-99-206R) of our observations and NSF’s comments on those observations are available at http://www.gao.gov/corresp/rc99206r.pdf only on the Internet. On April 21, 1999, we obtained comments from NSF officials, including the Deputy Director, on a draft of our analysis of the agency’s fiscal year 2000 annual performance plan. These officials generally agreed with the observations made in the draft.
They provided clarification on several points about the linkages between performance and resources and about issues concerning measurement and data verification and validation. We incorporated this information in the report as appropriate. NSF officials pointed out that the Foundation is one of the few agencies using a qualitative method to assess performance in research and education through the alternative format. To test this approach, NSF is using its Committees of Visitors process to assess performance for NSF's first performance report for March 2000. This point was incorporated in the body of the report. See http://www.gao.gov/corresp/rc99206r.pdf for additional information on NSF's comments (in GAO/RCED-99-206R) on our observations. On April 19, 1999, we briefed congressional staff on our analysis of the Nuclear Regulatory Commission's (NRC) performance plan for fiscal year 2000. The following are our overall observations on the plan. The complete text (GAO/RCED-99-213R) of our observations and NRC's comments on those observations are available at http://www.gao.gov/corresp/rc99213r.pdf only on the Internet. On April 12, 1999, we obtained oral comments from NRC staff, including the Deputy Chief Financial Officer, Office of the Chief Financial Officer, on a draft of our analysis of the fiscal year 2000 annual performance plan. With the exception of our observations on NRC's performance information, the agency generally agreed with our observations. In addition, NRC staff said that the agency is committed to moving to an outcome-oriented, performance-based organization and recognizes that a multiyear effort will be required to do so. They also said that it would be very difficult to show the impact that the agency's programs have on nuclear industry performance or the safe operation of plants. See http://www.gao.gov/corresp/rc99213r.pdf for additional information on NRC's comments (in GAO/RCED-99-213R) on our observations.
On April 6, 1999, we briefed congressional staff on our analysis of the Office of Personnel Management's (OPM) performance plan for fiscal year 2000. The following are our overall observations on the plan. The complete text (GAO/GGD-99-125) of our observations and OPM's comments on those observations are available at http://www.gao.gov/cgi-bin/getrpt?ggd-99-125 only on the Internet. See http://www.gao.gov/cgi-bin/getrpt?ggd-99-125 for additional information on OPM's comments (in GAO/GGD-99-125) on our observations. On April 8, 1999, we briefed congressional staff on our analysis of the Small Business Administration's (SBA) performance plan for fiscal year 2000. The following are our overall observations on the plan. The complete text (GAO/RCED-99-211R) of our observations and SBA's comments on those observations are available at http://www.gao.gov/corresp/rc99211r.pdf only on the Internet. Our analysis recognizes that SBA improved its fiscal year 2000 performance plan and gives SBA credit for such improvements. At the same time, it is our position that SBA's fiscal year 2000 plan has improved little, if any, over the agency's fiscal year 1999 plan because a number of key weaknesses that we observed in the fiscal year 1999 plan remain in the fiscal year 2000 plan. See http://www.gao.gov/corresp/rc99211r.pdf for additional information on SBA's comments (in GAO/RCED-99-211R) on our observations. On April 8, 1999, we briefed congressional staff on our analysis of the Social Security Administration's (SSA) performance plan for fiscal year 2000. The following are our overall observations on the plan. The complete text (GAO/HEHS-99-162R) of our observations and SSA's comments on those observations are available at http://www.gao.gov/corresp/he99162r.pdf only on the Internet. In commenting on a draft of our analysis, SSA described the controls it has in place to protect its performance data from unauthorized access and manipulation. The agency also noted that its systems have undergone tests to ensure that intrusions should not occur. We agree that progress has been made in the area of internal controls.
However, vulnerabilities remain and further actions are needed to ensure the integrity of SSA’s performance data. See http://www.gao.gov/corresp/he99162r.pdf for additional information on SSA’s comments (in GAO/HEHS-99-162R) on our observations. On April 16, 1999, we briefed congressional staff on our analysis of the U.S. Agency for International Development’s (USAID) performance plan for fiscal year 2000. The following are our overall observations on the plan. The complete text (GAO/NSIAD-99-188R) of our observations and USAID’s comments on those observations are available at http://www.gao.gov/corresp/ns99188r.pdf only on the Internet. On May 4, 1999, we obtained comments from USAID officials, including the Deputy Assistant Administrator, Bureau for Policy and Program Coordination, on a draft of our analysis of USAID’s fiscal year 2000 annual performance plan. These officials generally agreed with our analysis. In addition, with respect to our comments regarding the need to link agency goals with individual country goals, they noted that USAID is currently developing methods of improving the linkage among the Annual Performance Plan, the Annual Performance Report, and the country coverage provided in USAID’s Congressional Presentation. They also noted that they are exploring ways to improve the quality of data used to assess performance. See http://www.gao.gov/corresp/ns99188r.pdf for additional information on USAID’s comments (in GAO/NSIAD-99-188R) on our observations. In addition to the individual named above, Dottie Self, Joe Wholey, Lauren Alpert, Jan Bogus, Donna Byers, Laura Castro, Anita Pilch, Susan Ragland, Kim Raheb, Lisa Shames, and Marlene Zacharias made key contributions to this report. The examples used in this report are drawn from the assessments of the individual agency annual performance plans that were done by staff across GAO. 
Thus, in addition to the individuals named above, the staff who worked on the individual agency plan assessments also made important contributions to this report. The individuals are identified in the separate products on agency plans available on the Internet. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

or visit:

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touch-tone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the fiscal year (FY) 2000 performance plans of the 24 agencies covered by the Chief Financial Officers Act, focusing on the: (1) extent to which the agencies' plans include the three key elements of informative performance plans: (a) clear pictures of intended performance; (b) specific discussions of strategies and resources; and (c) confidence that performance information will be credible; and (2) degree of improvement the FY 2000 performance plans represent over the FY 1999 plans. GAO noted that: (1) on the whole, agencies' FY 2000 performance plans show moderate improvements over the FY 1999 plans and contain better information and perspective; (2) however, key weaknesses remain, and important opportunities exist to improve future plans; (3) the plans provide general pictures of intended performance across the agencies suggesting that important opportunities for continued improvements still remain to be addressed; (4) while all the plans include baseline and trend data for at least some of their goals and measures, inconsistent attention is given to resolving mission-critical management challenges and program risks; (5) these management challenges and program risks continue to seriously undermine the federal government's performance and to leave it vulnerable to billions of dollars in waste, fraud, abuse, and mismanagement; (6) agencies could also provide clearer pictures of intended performance by providing greater attention to crosscutting program issues; (7) coordinating crosscutting programs is important because mission fragmentation and program overlap are widespread across the federal government; (8) most agencies' plans show some improvement in their recognition of crosscutting program efforts; (9) however, few plans attempt the more challenging tasks of discussing planned strategies for coordination and establishing complementary performance goals and common or complementary performance measures; (10) 
continued progress on this issue is important because GAO has found that unfocused and uncoordinated crosscutting programs waste scarce funds, confuse and frustrate program customers, and limit overall program effectiveness; (11) crosscutting programs involve more than one agency, and coordination therefore requires the ability to look across agencies and ensure that the appropriate coordination is taking place; (12) agencies' discussions of how resources and strategies will be used to achieve results show mixed progress; (13) some agencies show progress in making useful linkages between their budget requests and performance goals, while other agencies are not showing the necessary progress; (14) the continuing lack of confidence that performance information will be credible is also a source of major concern; (15) many agencies offer only limited indications that performance data will be credible; and (16) the inattention to ensuring that performance data will be sufficiently timely, complete, accurate, useful, and consistent is an important weakness in the performance plans.
The Rail Passenger Service Act of 1970 created Amtrak to provide intercity passenger rail service because existing railroads found such service to be unprofitable. Amtrak operates a 22,000-mile network, primarily over freight railroad tracks, providing service to 46 states and the District of Columbia. Amtrak owns about 650 miles of track, primarily on the Northeast Corridor between Boston, Massachusetts, and Washington, D.C. In fiscal year 2004, Amtrak served about 25 million passengers, or about 68,640 passengers per day. According to Amtrak, about two-thirds of its ridership is wholly or partially on the Northeast Corridor. The Northeast Corridor is the busiest passenger rail line in the country, and some 200 million Amtrak and commuter rail travelers use the Corridor, or some part of it, each year. On some portions of the Northeast Corridor, Amtrak provides high-speed rail service (up to 150 miles per hour). The high-speed Acela program is the centerpiece of Amtrak's intercity passenger rail system, with its financial contributions to the company exceeding those of all other routes combined. Acquisition of the Acela trainsets occurred as part of NHRIP. NHRIP and its predecessor, the Northeast Corridor Improvement Project, date back to the late 1970's and represented a multiyear, multibillion-dollar collection of capital improvements to the Northeast Corridor that included electrifying the line between New Haven, Connecticut, and Boston, Massachusetts; improving tracks, signals, and other infrastructure; and acquiring high-speed trains. These efforts were designed to achieve a 3-hour trip time between New York City and Boston. As of March 2003, Amtrak, commuter railroads, and others had spent about $3.2 billion on the project.
In 1996, Amtrak executed contracts with train manufacturers Bombardier and Alstom to build 20 high-speed trainsets and 15 electric high-horsepower locomotives; construct three maintenance facilities; and provide maintenance services for the Acela trainsets. The trainsets, locomotives, and facilities contracts totaled $730 million. Bombardier and Alstom, referred to as the Consortium, created the Northeast Corridor Management Service Corporation (NecMSC) to manage the facilities and maintain the trainsets, including supervising Amtrak maintenance employees. Amtrak pays NecMSC a per-mile rate—that is, a fixed rate for each mile the Acela trains travel—on a monthly basis to provide management and maintenance services at three maintenance facilities. Amtrak's Acela program has experienced a number of significant events since its inception, including the execution of the original contracts in 1996, the delivery of the first trainset in October 2000, and the filing of lawsuits by Bombardier and Amtrak in November 2001 and November 2002, respectively (see fig. 1). The trainsets were also withdrawn from service for several weeks in August 2002. In March 2004, Amtrak and Bombardier signed an agreement to settle the lawsuits, which calls for Amtrak to conditionally assume trainset maintenance in October 2006, assuming conditions of the settlement have been met. The last warranties for the trainsets expire in 2021. Significant issues and controversy have impacted the Acela program since its inception. What started out as a relatively simple procurement of train equipment evolved into a complex high-speed rail program, according to an Amtrak official. The Acela trainset is a complex piece of equipment with state-of-the-art electronics and was considered new technology for the United States. As such, it required additional time to develop and test, and the probability of expected and unexpected problems was high.
Among the issues that the Acela program has encountered since its creation are the following: Potential difficulties due to new technology. Instead of purchasing "off-the-shelf" technology—that is, train equipment that was already designed, engineered, and in use—Amtrak decided to acquire "new" technology. An FRA official told us some components on the Acela trainset (such as power components and the tilt mechanism) were similar to those used on train equipment in other parts of the world, but much of the technology on Acela trainsets was new. In addition, many of the components, whether new or existing technology, had never been used together. Further, this official said that because the components in the Acela trainsets had never before been designed as one unit, Acela was not an off-the-shelf technology train. Although Acela trainsets were essentially new technology and could be expected to require additional time to develop and test, Amtrak developed an ambitious schedule that called for shipment of the first trainset 32 months—just over 2½ years—after the notice to proceed was issued. According to an Amtrak official, the calendar and electrification delivery date drove the planning for the trainsets. Amtrak worked backwards from these due dates to try to fit project work into the timeline. Impacts from new safety standards to accommodate high-speed rail. During the 1996 to 2000 time frame, the same time period when the Acela trainsets were being acquired and manufactured, FRA, in consultation with Amtrak, was developing safety regulations related to high-speed rail operations. These included new rules covering track safety (to accommodate speeds of up to 200 miles per hour), passenger car safety, and train control. According to FRA officials, Amtrak was intimately involved in developing these standards to accomplish its vision of high-speed rail operations on the Northeast Corridor.
FRA officials also noted that passenger car safety regulations did not exist prior to the mid-1990's. Developed for safety purposes, these standards had a significant impact on the Acela trainsets. For example, the passenger car safety regulations required a crash energy management system in passenger cars that was designed to increase the strength of both car ends and side posts. FRA also prohibited the operation of high-speed trains (up to 150 miles per hour) in a push-pull manner. FRA officials acknowledged that the crash energy system increased the weight of the Acela trainsets but said such a system resulted in safer trains. Amtrak told us that prohibiting push-pull operation caused it to obtain 20 additional power cars for Acela at a cost of about $100 million. Manufacturing and production delays. The Acela program experienced a significant share of manufacturing and production delays. Under FRA's 1994 master plan for NHRIP, developed in response to the Amtrak Authorization and Development Act, delivery of enough high-speed trains to initiate limited 3-hour service between Boston and New York City was expected by 1999. However, due to design and manufacturing delays, the first Acela trainsets were delivered about a year late, and revenue service using the trainsets did not begin until December 2000. Manufacturing and production delays began early in the procurement process. For example, our review of Consortium progress reports indicated that as early as October 1996, only months after the original contract was signed, change orders and design changes (mainly related to car interiors) were being made that were causing delays in production. In addition, train weight was increasing, a condition that continued to plague the trainsets throughout production. Amtrak attempted to require the Consortium to prepare recovery plans to keep the program on schedule, but we found little evidence of such plans in documents we reviewed.
Regardless, these plans did not prevent the trainsets from being delivered about a year late. Abbreviated testing prior to placement in revenue service. Amtrak's Acela trainsets also appeared to have had abbreviated testing prior to being deployed into revenue service. A fuller testing of the trainsets may have better identified the range of potential problems and defects that could be experienced prior to placing the trainsets in service. The maximum testing any one Acela trainset received was about 35,000 miles: 20,000 miles at the Transportation Test Center (Center) in Pueblo, Colorado, and 15,000 miles on the Northeast Corridor between 1999 and 2000. However, an FRA official believed testing of the trainsets was rushed and that additional testing at the Center should have been conducted. This official cited testing of Amtrak's AEM-7 electric locomotive as an example of the testing that is normally done on new equipment. This locomotive, which entered service in the early 1980's, was tested for 165,000 miles at the Center prior to placement in service. An FRA official also acknowledged that there were no minimum federal testing requirements for new high-speed trainsets, like Acela, only that such equipment comply with existing safety regulations. However, this official believed Amtrak was under both financial and time pressures to place the trainsets in service, in part because of delays in trainset production. Since placement into revenue service in 2000, the Acela has experienced a number of unexpected problems. One occurrence was in August 2002 when Amtrak was forced to withdraw the trains from service to address unexpected equipment problems (yaw damper brackets). The trainsets were not returned to complete service until October 2002. According to Amtrak, this withdrawal cost the corporation a net $17 million in lost revenue.
In April 2005, Amtrak once again experienced unexpected problems with the trainsets due to equipment problems (cracks in brake assemblies). Again, the trainsets have been withdrawn from service and Amtrak has stated that it may be months before the trains are returned to service. Although Amtrak is placing substitute equipment into service, it can be expected that there will be revenue loss as well as damage to Amtrak’s image. As the procurement proceeded, tensions grew between Amtrak and the Consortium. Concerns about the quality of the Consortium’s work and Amtrak’s withholding of payments for the Acela trainsets resulted in the parties suing each other, each seeking $200 million in damages. In November 2001, Bombardier filed a suit alleging that Amtrak improperly withheld payments, failed to provide accurate information on infrastructure conditions, and changed design specifications during contract performance. In November 2002, Amtrak filed a suit alleging that the Consortium failed to meet trainset performance requirements. In addition, Amtrak alleged that the engineering was deficient, workmanship was poor, program management and quality control were inadequate, and the Consortium did not meet contract delivery schedules. Amtrak and the Consortium reached a negotiated settlement in March 2004, ending their legal dispute surrounding the Acela trainsets. As part of the settlement, Amtrak agreed to release a portion of the previously withheld funds to the Consortium and conditionally assume facility management and trainset maintenance responsibilities as soon as October 1, 2006, rather than in 2013, as originally planned. In general, under the settlement, the Consortium must complete modifications to the trainsets and locomotives; achieve established performance requirements for reliability, speed, and comfort; provide training to Amtrak staff; and provide and extend warranties (see fig. 2). 
The Consortium is also responsible for the transfer of technical information, rights to third-party contracts, parts information, permits, and licenses to Amtrak. In addition, the settlement requires that the Consortium provide technical services and information technology updates even after the transition date. Amtrak is required to create a transition plan, hire staff to manage the facilities and maintain the trainsets, and determine a parts procurement plan for the trainsets. Independent of the Acela brake problem being discussed today, Amtrak faces other risks and challenges to sustain the trainset and keep it operating efficiently. Achieving a successful transition is critical to the financial well-being of Amtrak given that the Acela program is such a significant source of its revenue. A successful transition of maintenance and management responsibilities for the Acela trainsets depends on whether Amtrak and the Consortium can address the numerous challenges. Key challenges include: Achieving trainset modifications and performance requirements. The Consortium must complete an extensive list of modifications to the trainsets, some of which are complex, before Amtrak will assume maintenance responsibilities. Although the Consortium has closed three-fourths of the items, it is behind schedule on completing the work on some remaining items. Amtrak has identified certain modifications that potentially may not be completed by October 1, 2006, and has concerns that other modifications may affect service reliability. The Consortium is also responsible for ensuring that the trainsets continue to meet reliability, speed, and comfort performance requirements. The trainsets have not yet met the minimum reliability performance requirement of traveling an average of 17,500 miles between service failures.
According to Amtrak, the period of time when the trainsets are out of service to resolve the brake problems will not likely be included in the measurement of this standard. Obtaining technical expertise for maintenance and completing training. Amtrak must secure a workforce with the technical expertise needed to maintain the trainsets. To achieve this, Amtrak is developing a new High Speed Rail Division to assume management and maintenance responsibilities, and it plans to hire at least 50 percent of NecMSC's current staff to benefit from their knowledge and expertise. The Consortium and Amtrak must also develop and implement training programs needed to maintain the complex trainsets after the transition. The trainsets are technically complex and require considerable expertise to identify and make needed repairs and to troubleshoot difficult maintenance problems. According to Amtrak officials, ensuring that technicians are properly trained is one of the most critical points of the transition. As a result of the current brake problem, Amtrak is reevaluating its training materials. Based on the latest progress report (March 2005), troubleshooting training is slightly behind schedule, and Amtrak officials told us that management training has been temporarily delayed due to the brake problem. Under the transition plan, training is scheduled to be completed by October 1, 2005. Sufficiently funding maintenance and integrating responsibilities. Once the transition occurs, Amtrak will be responsible for maintenance costs to ensure continued trainset performance, including procuring parts and performing overhaul maintenance. Amtrak has experienced problems in the past with delays in completing the maintenance necessary to provide its conventional service, and if these problems continue, they could affect trainset performance and availability for revenue service.
At the time of our review, Amtrak had not determined the level of funding necessary to provide regular maintenance and overhauls to the trainsets. Amtrak officials stated that despite the uncertainty of maintenance costs once the transition occurs, they estimate that the costs of managing the maintenance in-house will be no greater than the costs of paying NecMSC to perform the work. We believe the uncertain amount of future maintenance costs and possible lack of adequate funds may have a greater impact than anticipated. Amtrak must also successfully integrate the new maintenance responsibilities into its current organization. Development of a new division requires strategic planning, communication, and performance management. This may prove difficult for Amtrak, as our past and ongoing work has shown Amtrak's shortcomings in managing large-scale projects. Preparing a comprehensive implementation plan. Creating a comprehensive implementation plan that provides a blueprint of important steps, milestones, contingency plans if milestones are not met, measures for achieving results, and funding strategies will be important for a successful transition. Amtrak has created a critical path schedule for monitoring the status and completion of open items related to the settlement and holds regular meetings, both internally and with the Consortium, to discuss progress and issues that arise. Although Amtrak has taken actions to address the key challenges related to the settlement, these actions did not represent a comprehensive implementation plan, and we recommended in our December 2004 report that Amtrak develop such a plan that encompasses all aspects of the transition in order to ensure a successful transition. We also said that such a plan should include contingency plans if milestones are not met. In light of recent events, we believe a comprehensive plan that identifies contingency actions could provide the steps necessary to help prevent postponement of the transition.
Amtrak officials do not believe, however, that the current brake problems will impact the October 2006 transition date. Although the settlement agreement ensures that Amtrak will be protected by the extended trainset warranties and Amtrak has several methods of financial recourse if the Consortium does not honor warranties, loss of revenue resulting from removal of trainsets from revenue service is not directly recoverable. For example, the settlement agreement included the extension of "bumper to bumper" warranties on all trainsets for the next 5 months, until October 1, 2005. In addition, modifications to the trainsets that are currently under way or planned will be under warranty for 2 years after they are completed to Amtrak's satisfaction. Amtrak also has several methods of financial recourse if the Consortium does not honor warranties, including letters of credit that Amtrak may draw down. However, the full extent of the legal liability associated with the April 2005 brake problem has yet to be addressed by the parties. Amtrak officials told us that their first priority is getting the trainsets back in service. Amtrak is considering a number of possible actions regarding the brake problem, including assessing liquidated damages. As we reported in February 2004, Amtrak did not effectively manage the entire NHRIP project, of which Acela was a part. Among the problems we found were that (1) Amtrak's management of this project was not comprehensive but was focused on the short term; (2) project management focused on separate components of the project, such as electrification and acquisition of the high-speed trains, and not the project as a whole; and (3) project management did not sufficiently address major infrastructure improvements needed to attain project trip-time goals.
We also found that Amtrak lacked a comprehensive financial plan for the project and that Amtrak did not fully integrate stakeholder interests (commuter rail authorities and state governments), even though work that involved stakeholders was critical to achieving project goals. The overall result of this poor management was that many critical elements of the project were not completed, project costs and schedules increased considerably, and the project goal (3-hour trip time from Boston to New York City) was not attained. While there have been many benefits from the NHRIP, including faster trip times between Boston and New York City, Amtrak's management of this project clearly demonstrates that Amtrak had difficulty keeping such a large-scale project focused, on time, and on budget. We also have ongoing work for this committee on Amtrak's management and performance issues that we plan to report on later this year. Mr. Chairman, that concludes my statement. I would be happy to answer any questions you or the Members of the Subcommittee might have. For further information, please contact JayEtta Z. Hecker at [email protected] or at (202) 512-2834. Individuals making key contributions to this statement include Kara Finnegan Irving, Bert Japikse, Richard Jorgenson, and Randall Williamson. Intercity Passenger Rail: Issues Associated with the Recent Settlement between Amtrak and the Consortium of Bombardier and Alstom, GAO-05-152 (Washington, D.C.: Dec. 1, 2004). Intercity Passenger Rail: Amtrak's Management of Northeast Corridor Improvements Demonstrates Need for Applying Best Practices, GAO-04-94 (Washington, D.C.: Feb. 27, 2004). Intercity Passenger Rail: Amtrak Needs to Improve Its Decisionmaking Process for Its Route and Service Proposals, GAO-02-398 (Washington, D.C.: Apr. 12, 2002). Intercity Passenger Rail: Potential Financial Issues in the Event That Amtrak Undergoes Liquidation, GAO-02-871 (Washington, D.C.: Sept. 20, 2002).
Financial Management: Amtrak’s Route Profitability Schedules Need Improvement, GAO-02-912R (Washington, D.C.: July 15, 2002). Intercity Passenger Rail: Congress Faces Critical Decisions in Developing a National Policy, GAO-02-522T (Washington, D.C.: Apr. 11, 2002). Intercity Passenger Rail: The Congress Faces Critical Decisions About the Role of and Funding for Intercity Passenger Rail Systems, GAO-01-820T (Washington, D.C.: July 25, 2001). Intercity Passenger Rail: Amtrak Will Continue to Have Difficulty Controlling Its Costs and Meeting Capital Needs, GAO/RCED-00-138 (Washington, D.C.: May 31, 2000). Intercity Passenger Rail: Issues Associated With a Possible Amtrak Liquidation, GAO/RCED-98-60 (Washington, D.C.: Mar. 2, 1998). This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In 1996, the National Railroad Passenger Corporation (Amtrak) executed contracts to build high-speed trainsets (a combination of locomotives and passenger cars) as part of the Northeast High Speed Rail Improvement Project. Since that time, Amtrak has experienced multiple challenges related to this program, including recently removing all trains from service due to brake problems. Amtrak has struggled since its inception to earn sufficient revenues and depends heavily on federal subsidies to remain solvent. The April 2005 action to remove the Acela trainsets--Amtrak's biggest revenue source--from service has only exacerbated problems by putting increased pressure on Amtrak's ridership and revenue levels. This testimony is based on GAO's past work on Amtrak and focuses on (1) background on problems related to the development of the Acela program, (2) a summary of issues related to lawsuits between Amtrak and the train manufacturers and the related settlement, (3) key challenges associated with the settlement, and (4) initial observations on possible challenges in Amtrak's management of large-scale projects. Significant issues and controversy have affected the Acela program since its inception. According to Amtrak, what started out as a simple procurement of train equipment evolved into a complex high-speed rail program. Acela has encountered numerous difficulties due to such things as new technology and production delays. Even after Acela service began, unexpected problems were encountered, which required Amtrak to remove the trainsets from service, resulting in lost revenue. Concerns about the quality of the work of the Consortium of train manufacturers (Bombardier and Alstom) and Amtrak's withholding of payments for the Acela trainsets resulted in the parties suing each other, each seeking $200 million in damages. Amtrak and the Consortium reached a negotiated settlement in March 2004. 
Although the settlement agreement protects Amtrak through certain warranties, loss of revenue resulting from removal of trains from service is not directly recoverable. Under the settlement, Amtrak is conditionally scheduled to assume maintenance functions from the Consortium in October 2006. Aside from the current problems, Amtrak faces other risks and challenges related to the recent settlement, including obtaining technical expertise and providing sufficient funding for maintenance. Achieving a successful transition is critical to Amtrak given the importance of the Acela program. The recent brake problems may affect the transition through such things as delayed management training. As GAO reported in February 2004, Amtrak had difficulties managing the Northeast High Speed Rail Improvement Project: many critical elements of the project were not completed, and the project goal of a 3-hour trip time between Boston and New York City was not attained. GAO has ongoing work addressing Amtrak management and performance issues that GAO plans to report on later this year.
The concept of “universal service” has traditionally meant providing residential telephone subscribers with nationwide access to basic telephone services at reasonable rates. The Telecommunications Act of 1996 broadened the scope of universal service to include, among other things, support for schools and libraries. The act instructed the Federal Communications Commission (FCC) to establish a universal service support mechanism to ensure that eligible schools and libraries have affordable access to and use of certain telecommunications services for educational purposes. In addition, Congress authorized FCC to “establish competitively neutral rules to enhance, to the extent technically feasible and economically reasonable, access to advanced telecommunications and information services for all public and nonprofit elementary and secondary school classrooms . . . and libraries. . . .” Based on this direction, and following the recommendations of a Federal-State Joint Board on Universal Service, FCC established the schools and libraries universal service mechanism that is commonly referred to as the E-rate program. The program is funded through statutorily mandated payments by companies that provide interstate telecommunications services. Many of these companies, in turn, pass their contribution costs on to their subscribers through a line item on subscribers’ phone bills. FCC capped funding for the E-rate program at $2.25 billion per year, although funding requests by schools and libraries can greatly exceed the cap. For example, schools and libraries requested more than $4.2 billion in E-rate funding for the 2004 funding year. In 1998, FCC appointed the Universal Service Administrative Company (USAC) as the program’s permanent administrator, although FCC retains responsibility for overseeing the program’s operations and ensuring compliance with the commission’s rules. 
In response to congressional conference committee direction, FCC has specified that USAC “may not make policy, interpret unclear provisions of the statute or rules, or interpret the intent of Congress.” USAC is responsible for carrying out the program’s day-to-day operations, such as maintaining a Web site that contains program information and application procedures; answering inquiries from schools and libraries; processing and reviewing applications; making funding commitment decisions and issuing funding commitment letters; and collecting, managing, investing, and disbursing E-rate funds. FCC permits—and in fact relies on—USAC to establish administrative procedures that program participants are required to follow as they work through the application and funding process. Under the E-rate program, eligible schools, libraries, and consortia that include eligible schools and libraries may receive discounts for eligible services. Eligible schools and libraries may apply annually to receive E-rate support. The program places schools and libraries into various discount categories, based on indicators of need, so that the school or library pays a percentage of the cost for the service and the E-rate program funds the remainder. E-rate discounts range from 20 percent to 90 percent. USAC reviews all of the applications and related forms and issues funding commitment decision letters. Generally, it is the service provider that seeks reimbursement from USAC for the discounted portion of the service rather than the school or library. FCC established an unusual structure for the E-rate program but has never conducted a comprehensive assessment of which federal requirements, policies, and practices apply to the program, to USAC, or to the Universal Service Fund itself. FCC only recently began to address a few of these issues. 
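The discount arithmetic described above can be sketched in a few lines. The following is an illustrative model only; the function name and dollar amounts are hypothetical and are not drawn from program rules. It shows the split the report describes: the applicant pays a share of the cost of an eligible service, and the E-rate program funds the remainder at the applicant's discount rate.

```python
# Illustrative sketch of the E-rate discount split: the school or library
# pays a share of the cost, and the program funds the remainder.
# The function name and amounts below are hypothetical examples.

def erate_split(total_cost, discount_rate):
    """Return (applicant_share, program_share) for an eligible service."""
    if not 0.20 <= discount_rate <= 0.90:
        # E-rate discounts range from 20 to 90 percent.
        raise ValueError("discount rate outside the 20-90 percent range")
    program_share = total_cost * discount_rate
    applicant_share = total_cost - program_share
    return applicant_share, program_share

# A high-need applicant at the maximum 90 percent discount pays 10 percent
# of a $10,000 service; the program funds the remaining $9,000.
applicant_share, program_share = erate_split(10_000.00, 0.90)
```

Note that the sketch models only the discount arithmetic; because requests can greatly exceed the $2.25 billion annual cap, an application at the computed program share is not guaranteed a funding commitment.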
The Telecommunications Act of 1996 neither specified how FCC was to administer universal service to schools and libraries nor prescribed the structure and legal parameters of the universal service mechanisms to be created. The Telecommunications Act required FCC to consider the recommendations of the Federal-State Joint Board on Universal Service and then to develop specific, predictable, and equitable support mechanisms. Using the broad language of the act, FCC crafted an ambitious program for schools and libraries—roughly analogous to a grant program—and gave the program a $2.25 billion annual funding cap. To carry out the day-to-day activities of the E-rate program, FCC relied on a structure it had used for other universal service programs in the past—a not-for-profit corporation established at FCC’s direction that would operate under FCC oversight. However, the structure of the E-rate program is unusual in several respects compared with other federal programs: FCC appointed USAC as the permanent administrator of the Universal Service Fund, and FCC’s Chairman has final approval over USAC’s Board of Directors. USAC is responsible for administering the program under FCC orders, rules, and directives. However, USAC is not part of FCC or any other government entity; it is not a government corporation established by Congress; and no contract or memorandum of understanding exists between FCC and USAC for the administration of the E-rate program. Thus, USAC operates and disburses funds under less explicit federal ties than many other federal programs. Questions as to whether the monies in the Universal Service Fund should be treated as federal funds have troubled the program from the start. Even though the fund has been listed in the budget of the United States and, since fiscal year 2004, has been subject to an annual apportionment from OMB, the monies are maintained outside of Treasury accounts by USAC and some of the monies have been invested. 
The United States Treasury implements the statutory controls and restrictions involving the proper collection and deposit of appropriated funds, including the financial accounting and reporting of all receipts and disbursements, the security of appropriated funds, and agencies’ responsibilities for those funds. As explained below, appropriated funds are subject, unless specifically exempted by law, to a variety of statutory controls and restrictions. These controls and restrictions, among other things, limit the purposes for which federal funds can be used and provide a scheme of accountability for federal monies. Key requirements are in Title 31 of the United States Code and the appropriate Treasury regulations, which govern fiscal activities relating to the management, collection, and distribution of public money. Since the inception of the E-rate program, FCC has struggled with identifying the nature of the Universal Service Fund and the managerial, fiscal, and accountability requirements that apply to the fund. FCC’s Office of Inspector General first looked at the Universal Service Fund in 1999 as part of its audit of the commission’s fiscal year 1999 financial statement because FCC had determined that the Universal Service Fund was a component of FCC for financial reporting purposes. During that audit, the FCC IG questioned commission staff regarding the nature of the fund and, specifically, whether it was subject to the statutory and regulatory requirements for federal funds. In the next year’s audit, the FCC IG noted that the commission could not ensure that Universal Service Fund activities were in compliance with all laws and regulations because the issue of which laws and regulations were applicable to the fund was still unresolved at the end of the audit. 
FCC officials told us that the commission has substantially resolved the IG’s concerns through recent orders, including FCC’s 2003 order that USAC begin preparing Universal Service Fund financial statements consistent with generally accepted accounting principles for federal agencies (GovGAAP) and keep the fund in accordance with the United States Government Standard General Ledger. While it is true that these steps and other FCC determinations discussed below should provide greater protections for universal service funding, FCC has addressed only a few of the issues that need to be resolved. In fact, staff from the FCC IG’s office told us that they do not believe the commission’s GovGAAP order adequately addressed their concerns because the order did not comprehensively detail which fiscal requirements apply to the Universal Service Fund and which do not. FCC has made some determinations concerning the status of the Universal Service Fund and the fiscal controls that apply. For example, FCC has concluded that the Universal Service Fund is a permanent indefinite appropriation subject to the Antideficiency Act and that its issuance of funding commitment letters constitutes recordable obligations for purposes of the act. We agree with FCC’s determinations on these issues, as explained in detail in appendix I. However, FCC’s conclusions concerning the status of the Universal Service Fund raise further issues relating to the collection, deposit, obligation, and disbursement of those funds—issues that FCC needs to explore and resolve comprehensively rather than in an ad hoc fashion as problems arise. Status of funds as appropriated funds. In assessing the financial statement reporting requirements for FCC components in 2000, FCC concluded that the Universal Service Fund constitutes a permanent indefinite appropriation (i.e., funding appropriated or authorized by law to be collected and available for specified purposes without further congressional action). 
We agree with FCC’s conclusion. Typically, Congress will use language of appropriation, such as that found in annual appropriations acts, to identify a fund or account as an appropriation and to authorize an agency to enter into obligations and make disbursements out of available funds. Congress, however, appropriates funds in a variety of ways other than in regular appropriations acts. Thus, a statute that contains a specific direction to pay and a designation of funds to be used constitutes an appropriation. In these statutes, Congress (1) authorizes the collection of fees and their deposit into a particular fund, and (2) makes the fund available for expenditure for a specified purpose without further action by Congress. This authority to obligate or expend collections without further congressional action constitutes a continuing appropriation or a permanent appropriation of the collections. Because the Universal Service Fund’s current authority stems from a statutorily authorized collection of fees from telecommunications carriers and the expenditure of those fees for a specified purpose (that is, the various types of universal service), it meets both elements of the definition of a permanent appropriation. Decision regarding the Antideficiency Act. As noted above, in October 2003, FCC ordered USAC to prepare financial statements for the Universal Service Fund, as a component of FCC, consistent with GovGAAP, which FCC and USAC had not previously applied to the fund. In February 2004, staff from USAC realized during contractor-provided training on GovGAAP procedures that the commitment letters sent to beneficiaries (notifying them whether or not their funding is approved and in what amount) might be viewed as “obligations” of appropriated funds. 
If so, and if FCC also found the Antideficiency Act—which does not allow an agency or program to make obligations in excess of available budgetary resources—to be applicable to the E-rate program, then USAC would need to dramatically increase the program’s cash-on-hand and lessen the program’s investments to provide budgetary authority sufficient to satisfy the Antideficiency Act. As a result, USAC suspended funding commitments in August 2004 while waiting for a commission decision on how to proceed. At the end of September 2004—facing the end of the fiscal year—FCC decided that commitment letters were obligations, that the Antideficiency Act did apply to the program, and that USAC would need to immediately liquidate some of its investments to come into compliance with the Antideficiency Act. According to USAC officials, the liquidations cost the fund approximately $4.6 million in immediate losses and could potentially result in millions in foregone annual interest income. FCC was slow to recognize and address the issue of the applicability of the Antideficiency Act, resulting in the abrupt decision to suspend funding commitment decision letters and liquidate investments. In response to these events, in December 2004, Congress passed a bill granting the Universal Service Fund a one-year exemption from the Antideficiency Act. Nevertheless, FCC’s conclusion on this issue was correct: Absent a statutory exemption, the Universal Service Fund is subject to the Antideficiency Act, and its funding commitment decision letters constitute obligations for purposes of the act. The Antideficiency Act applies to an “officer or employee of the United States Government . . . mak[ing] or authorizing an expenditure or obligation . . . from an appropriation or fund.” 31 U.S.C. § 1341(a). 
As discussed above, the Universal Service Fund is an “appropriation or fund.” Even though USAC—a private entity whose employees are not federal officers or employees—is the administrator of the program and the entity that obligates and disburses money from the fund, application of the act is not negated. This is because, as recognized by FCC, it, and not USAC, is the entity that is legally responsible for the management and oversight of the E-rate program and because FCC’s employees are federal officers and employees of the United States subject to the Antideficiency Act. Thus, the Universal Service Fund will again be subject to the Antideficiency Act when the one-year statutory exemption expires, unless action is taken to extend or make permanent the exemption. An important issue that arises from the application of the Antideficiency Act to the Universal Service Fund is what actions constitute obligations chargeable against the fund. Under the Antideficiency Act, an agency may not incur an obligation in excess of the amount available to it in an appropriation or fund. Thus, proper recording of obligations with respect to the timing and amount of such obligations permits compliance with the Antideficiency Act by ensuring that agencies have adequate budget authority to cover all of their obligations. Our decisions have defined an “obligation” as a commitment creating a legal liability of the government, including a “legal duty . . . which could mature into a liability by virtue of actions on the part of the other party beyond the control of the United States. . . .” With respect to the Universal Service Fund, the funding commitment decision letter provides the school or library with the authority to obtain services from a provider with the commitment that the school or library will receive a discount and the service provider will be paid for the discounted portion with E-rate funding. 
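The recording rule described here (an obligation is incurred when the funding commitment decision letter is issued, and cumulative obligations may not exceed available budgetary resources) can be modeled as a simple ledger. This is a hedged illustration only; the class, method names, and amounts are hypothetical and do not represent USAC's actual accounting systems.

```python
# Illustrative model of obligation recording under the Antideficiency Act:
# record the obligation when the commitment letter issues, and refuse any
# obligation that would exceed available budgetary resources.
# All names and amounts here are hypothetical.

class ObligationLedger:
    def __init__(self, budgetary_resources):
        self.budgetary_resources = budgetary_resources
        self.obligated = 0.0

    def issue_commitment_letter(self, amount):
        """Record an obligation at issuance of a funding commitment letter."""
        if self.obligated + amount > self.budgetary_resources:
            # The Antideficiency Act forbids obligating in excess of
            # available resources, so the obligation must not be incurred.
            raise RuntimeError("obligation would exceed available resources")
        self.obligated += amount

    def unobligated_balance(self):
        return self.budgetary_resources - self.obligated

# With the E-rate program's $2.25 billion annual cap as the resource level,
# a $1.5 billion round of commitment letters leaves $750 million unobligated;
# a further $1 billion commitment would be refused rather than recorded.
ledger = ObligationLedger(2_250_000_000.0)
ledger.issue_commitment_letter(1_500_000_000.0)
```

The sketch also shows why the September 2004 events unfolded as they did: once commitment letters are treated as obligations, the fund must hold budgetary resources sufficient to cover every outstanding letter, not merely the amounts already disbursed.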
Although the school or library could decide not to seek the services or the discount, so long as the funding commitment decision letter remains valid and outstanding, USAC and FCC no longer control the Universal Service Fund’s liability; it is dependent on the actions taken by the school or library. Consequently, we agree with FCC that a recordable obligation is incurred at the time of issuance of the funding commitment decision letter indicating approval of the applicant’s discount. Additional issues that remain to be resolved by FCC include whether other actions taken in the Universal Service Fund program constitute obligations and the timing and amounts of obligations that must be recorded. For example, this includes the projections and data submissions by USAC to FCC and by participants in the High Cost and Low Income support mechanisms to USAC. FCC has indicated that it is considering this issue and consulting with the Office of Management and Budget. FCC should also identify any other actions that may constitute recordable obligations and ensure that those are properly recorded. While we agree with FCC’s determinations that the Universal Service Fund is a permanent appropriation subject to the Antideficiency Act and that its funding commitment decision letters constitute recordable obligations of the Universal Service Fund (see app. I), there are several significant fiscal law issues that remain unresolved. We believe that where FCC has determined that fiscal controls and policies do not apply, the commission should reconsider these determinations in light of the status of universal service monies as federal funds. For example, in view of its determination that the fund constitutes an appropriation, FCC needs to reconsider the applicability of the Miscellaneous Receipts Statute, 31 U.S.C. § 3302, which requires that money received for the use of the United States be deposited in the Treasury unless otherwise authorized by law. 
FCC also needs to assess the applicability of other fiscal control and accountability statutes (e.g., the Single Audit Act and the Cash Management Improvement Act). Another major issue that remains to be resolved involves the extent to which FCC has delegated some functions for the E-rate program to USAC. For example, are the disbursement policies and practices for the E-rate program consistent with statutory and regulatory requirements for the disbursement of public funds? Are some of the functions carried out by USAC, even though they have been characterized as administrative or ministerial, arguably inherently governmental activities that must be performed by government personnel? Resolving these issues in a comprehensive fashion, rather than continuing to rely on reactive, case-by-case determinations, is key to ensuring that FCC establishes the proper foundation of government accountability standards and safeguards for the E-rate program and the Universal Service Fund. FCC’s orders have referred generally to “federal financial and reporting statutes” (emphasis added) and to “relevant portions of the Federal Financial Management Improvement Act of 1996,” but did not specify which specific statutes or relevant portions applied or further analyze their applicability. FCC officials also told us that it was uncertain whether procurement requirements such as the Federal Acquisition Regulation (FAR) applied to arrangements between FCC and USAC, but they recommended that those requirements be followed as a matter of policy. The National Academy of Public Administration (NAPA) is studying the Universal Service Fund, and its examination of the collection, investment, and monitoring of program funds offers models for improving the operation of the fund. We believe that NAPA’s study will go a long way toward addressing the concerns outlined in our report, and we look forward to seeing the results of NAPA’s efforts. 
Given this important ongoing study and the unresolved issues mentioned previously, Congress may wish to consider deferring a decision on permanently exempting the Universal Service Fund from the Antideficiency Act at this time and instead consider either granting the fund a two- or three-year exemption from the Antideficiency Act or crafting a limited exemption that would provide management flexibility. For example, Congress could specify that FCC could use certain receivables or assets as budgetary resources. These more limited solutions would allow time for the National Academy of Public Administration to complete its study of the Universal Service Fund program and report its findings to FCC. Congress and FCC could then comprehensively assess, based on decisions concerning the structure of the program, which federal requirements, policies, and practices should apply to the fund and to any entities administering the program. It could then be determined whether a permanent and complete exemption from the Antideficiency Act is warranted. Although $13 billion in E-rate funding has been committed to beneficiaries during the past 7 years, FCC did not develop useful performance goals and measures to assess the specific impact of these funds on schools’ and libraries’ Internet access and to improve the management of the program, despite a recommendation by us in 1998 to do so. At the time of our current review, FCC staff was considering, but had not yet finalized, new E-rate goals and measures in response to OMB’s concerns about this deficiency in a 2003 OMB assessment of the program. One of the management tasks facing FCC is to establish strategic goals for the E-rate program, as well as annual goals linked to them. 
The Telecommunications Act of 1996 did not include specific goals for supporting schools and libraries, but instead used general language directing FCC to establish competitively neutral rules for enhancing access to advanced telecommunications and information services for all public and nonprofit private elementary and secondary school classrooms and libraries. As the agency accountable for the E-rate program, FCC is responsible under the Government Performance and Results Act of 1993 (Results Act) for establishing the program’s long-term strategic goals and annual goals, measuring its own performance in meeting these goals, and reporting publicly on how well it is doing. For fiscal years 2000 through 2002, FCC’s goals focused on achieving certain percentage levels of Internet connectivity during a given fiscal year for schools, public school instructional classrooms, and libraries. However, the data that FCC used to report on its progress was limited to public schools (thereby excluding two other major groups of beneficiaries—private schools and libraries) and did not isolate the impact of E-rate funding from other sources of funding, such as state and local government. This is a significant measurement problem because, over the years, the demand for internal connections funding by applicants has exceeded the E-rate funds available for this purpose by billions of dollars. Unsuccessful applicants had to rely on other sources of support to meet their internal connection needs. Even with these E-rate funding limitations, there has been significant growth in Internet access for public schools since the program issued its first funding commitments in late 1998. At the time, according to data from the Department of Education’s National Center for Educational Statistics (NCES), 89 percent of all public schools and 51 percent of public school instructional classrooms already had Internet access. 
By 2002, 99 percent of public schools and 92 percent of public school instructional classrooms had Internet access. Yet although billions of dollars in E-rate funds have been committed since 1998, adequate program data was not developed to answer a fundamental performance question: How much of the increase since 1998 in public schools’ Internet access has been a result of the E-rate program, as opposed to other sources of federal, state, local, and private funding? Performance goals and measures are used not only to assess a program’s impact but also to develop strategies for resolving mission-critical management problems. However, management-oriented goals have not been a feature of FCC’s performance plans, despite long-standing concerns about the program’s effectiveness in key areas. For example, two such goals—related to assessing how well the program’s competitive bidding process was working and increasing program participation by low- income and rural school districts and rural libraries—were planned but not carried forward. FCC did not include any E-rate goals for fiscal years 2003 and 2004 in its recent annual performance reports. The failure to measure effectively the program’s impact on public and private schools and libraries over the past 7 years undercuts one of the fundamental purposes of the Results Act: to have federal agencies adopt a fact-based, businesslike framework for program management and accountability. The problem is not just a lack of data for accurately characterizing program results in terms of increasing Internet access. Other basic questions about the E-rate program also become more difficult to address, such as the program’s efficiency and cost-effectiveness in supporting the telecommunications needs of schools and libraries. 
For example, a review of the program by OMB in 2003 concluded that there was no way to tell whether the program has resulted in the cost-effective deployment and use of advanced telecommunications services for schools and libraries. OMB also noted that there was little oversight to ensure that the program beneficiaries were using the funding appropriately and effectively. In response to these concerns, FCC staff have been working on developing new performance goals and measures for the E-rate program and plan to finalize them and seek OMB approval in fiscal year 2005. FCC testified before Congress in June 2004 that it relies on three chief components in overseeing the E-rate program: rulemaking proceedings, beneficiary audits, and fact-specific adjudicatory decisions (i.e., appeals decisions). We found weaknesses with FCC’s implementation of each of these mechanisms, limiting the effectiveness of FCC’s oversight of the program and the enforcement of program procedures to guard against waste, fraud, and abuse of E-rate funding. As part of its oversight of the E-rate program, FCC is responsible for establishing new rules and policies for the program or making changes to existing rules, as well as providing the detailed guidance that USAC requires to effectively administer the program. FCC carries out this responsibility through its rulemaking process. FCC’s E-rate rulemakings, however, have often been broadly worded and lacking specificity. Thus, USAC has needed to craft the more detailed administrative procedures necessary to implement the rules. However, in crafting administrative procedures, USAC is strictly prohibited under FCC rules from making policy, interpreting unclear provisions of the statute or rules, or interpreting the intent of Congress. We were told by FCC and USAC officials that USAC does not put procedures in place without some level of FCC approval. 
We were also told that this approval is sometimes informal, such as e-mail exchanges or telephone conversations between FCC and USAC staff. This approval can come in more formal ways as well, such as when the commission expressly endorses USAC operating procedures in commission orders or codifies USAC procedures into FCC’s rules. However, two problems have arisen with USAC administrative procedures. First, although USAC is prohibited under FCC rules from making policy, some USAC procedures deal with more than just ministerial details and arguably rise to the level of policy decisions. For example, in June 2004, USAC was able to identify at least a dozen administrative procedures that, if violated by the applicant, would lead to complete or partial denial of the funding request even though there was no precisely corresponding FCC rule. The critical nature of USAC’s administrative procedures is further illustrated by FCC’s repeated codification of them throughout the history of the program. FCC’s codification of USAC procedures—after those procedures have been put in place and applied to program participants—raises concerns about whether these procedures are more than ministerial and are, in fact, policy changes that should be coming from FCC in the first place. Moreover, in its August 2004 order (in a section dealing with the resolution of audit findings), the commission directs USAC to annually “identify any USAC administrative procedures that should be codified in our rules to facilitate program oversight.” This process raises the question of which entity is really establishing the rules of the E-rate program and raises concerns about the depth of involvement by FCC staff with the management of the program. Second, even though USAC procedures are issued with some degree of FCC approval, enforcement problems could arise when audits uncover violations of USAC procedures by beneficiaries or service providers. 
The FCC IG has expressed concern over situations where USAC administrative procedures have not been formally codified because commission staff have stated that, in such situations, there is generally no legal basis to recover funds from applicants that failed to comply with the USAC procedures. In its August 2004 order, the commission attempted to clarify the rules of the program with relation to recovery of funds. However, even under the August 2004 order, the commission did not clearly address the treatment of beneficiaries who violate a USAC administrative procedure that has not been codified. FCC’s use of beneficiary audits as an oversight mechanism has also had weaknesses, although FCC and USAC are now working to address some of these weaknesses. Since 2000, there have been 122 beneficiary audits conducted by outside firms, 57 by USAC staff, and 14 by the FCC IG (2 of which were performed under agreement with the Inspector General of the Department of the Interior). Beneficiary audits are the most robust mechanism available to the commission in the oversight of the E-rate program, yet FCC generally has been slow to respond to audit findings and has not made full use of the audit findings as a means to understand and resolve problems within the program. First, audit findings can indicate that a beneficiary or service provider has violated existing E-rate program rules. In these cases, USAC or FCC can seek recovery of E-rate funds, if justified. In the FCC IG’s May 2004 Semiannual Report, however, the IG observes that audit findings are not being addressed in a timely manner and that, as a result, timely action is not being taken to recover inappropriately disbursed funds. 
The IG notes that in some cases the delay is caused by USAC and, in other cases, the delay occurs because USAC is not receiving timely guidance from the commission (USAC must seek guidance from the commission when an audit finding is not a clear violation of an FCC rule or when policy questions are raised). Regardless, the recovery of inappropriately disbursed funds is important to the integrity of the program and needs to occur in a timely fashion. Second, under GAO’s Standards for Internal Control in the Federal Government, agencies are responsible for promptly reviewing and evaluating findings from audits, including taking action to correct a deficiency or taking advantage of the opportunity for improvement. Thus, if an audit shows a problem but no actual rule violation, FCC should be examining why the problem arose and determining if a rule change is needed to address the problem (or perhaps simply addressing the problem through a clarification to applicant instructions or forms). FCC has been slow, however, to use audit findings to make programmatic changes. For example, several important audit findings from the 1998 program year were only recently resolved by an FCC rulemaking in August 2004. In its August 2004 order, the commission concluded that a standardized, uniform process for resolving audit findings was necessary, and directed USAC to submit to FCC a proposal for resolving audit findings. FCC also instructed USAC to specify deadlines in its proposal “to ensure audit findings are resolved in a timely manner.” USAC submitted its Proposed Audit Resolution Plan to FCC on October 28, 2004. The plan memorializes much of the current audit process and provides deadlines for the various stages of the audit process. FCC released the proposed audit plan for public comment in December 2004. 
In addition to the Proposed Audit Resolution Plan, the commission instructed USAC to submit a report to FCC on a semiannual basis summarizing the status of all outstanding audit findings. The commission also stated that it expects USAC to identify for commission consideration on at least an annual basis all audit findings raising management concerns that are not addressed by existing FCC rules. Lastly, the commission took the unusual step of providing a limited delegation to the Wireline Competition Bureau (the bureau within FCC with the greatest share of the responsibility for managing the E-rate program) to address audit findings and to act on requests for waiver of rules warranting recovery of funds. These actions could help ensure, on a prospective basis, that audit findings are more thoroughly and quickly addressed. However, much still depends on timely action being taken by FCC, particularly if audit findings suggest the need for a rulemaking. In addition to problems with responding to audit findings, the audits conducted to date have been of limited use because neither FCC nor USAC has conducted an audit effort using a statistical approach that would allow them to project the audit results to all E-rate beneficiaries. Thus, at present, no one involved with the E-rate program has a basis for making a definitive statement about the amount of waste, fraud, and abuse in the program. Of the various groups of beneficiary audits conducted to date, all were of insufficient size and design to analyze the amount of fraud or waste in the program or the number of times that any particular problem might be occurring programwide. At the time we concluded our review, FCC and USAC were in the process of soliciting and reviewing responses to a Request for Proposal for audit services to conduct additional beneficiary audits. 
Under FCC’s rules, program participants can seek review of USAC’s decisions, although FCC’s appeals process for the E-rate program has been slow in some cases. Because appeals decisions are used as precedent, this slowness adds uncertainty to the program and affects beneficiaries. FCC rules state that FCC is to decide appeals within 90 days, although FCC can extend this period. At the time of our review there was a substantial appeals backlog at FCC (i.e., appeals pending for longer than 90 days). Out of 1,865 appeals to FCC from 1998 through the end of 2004, approximately 527 appeals remain undecided, of which 458 (25 percent of all appeals filed) are backlogged. We were told by FCC officials that some of the backlog is due to staffing issues. FCC officials said they do not have enough staff to handle appeals in a timely manner. FCC officials also noted that there has been frequent staff turnover within the E-rate program, which adds some delay to appeals decisions because new staff necessarily take time to learn about the program and the issues. Additionally, we were told that another factor contributing to the backlog is that the appeals have become more complicated as the program has matured. Lastly, some appeals may be tied up if the issue is currently in the rulemaking process. The appeals backlog is of particular concern given that the E-rate program is a technology program. An applicant who appeals a funding denial and works through the process to achieve a reversal and funding two years later might have ultimately won funding for outdated technology. FCC officials told us that they are working to resolve all backlogged E-rate appeals by the end of calendar year 2005. We conducted our work from December 2003 through December 2004 in accordance with generally accepted government auditing standards. 
We interviewed officials from FCC’s Wireline Competition Bureau, Enforcement Bureau, Office of General Counsel, Office of Managing Director, Office of Strategic Planning and Policy Analysis, and Office of Inspector General. We also interviewed officials from USAC. In addition, we interviewed officials from OMB and the Department of Education regarding performance goals and measures. OMB had conducted its own assessment of the E-rate program in 2003, which we also discussed with OMB officials. We reviewed and analyzed FCC, USAC, and OMB documents related to the management and oversight of the E-rate program. The information we gathered was sufficiently reliable for the purposes of our review. See our full report for a more detailed explanation of our scope and methodology. This concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Committee may have. For further information about this testimony, please contact me at (202) 512-2834. Edda Emmanuelli-Perez, John Finedore, Faye Morrison, and Mindi Weisenbloom also made key contributions to this statement. There have been questions from the start of the E-rate program regarding the nature of the Universal Service Fund (USF) and the applicability of managerial, fiscal, and financial accountability requirements to USF. FCC has never clearly determined the nature of USF, and the Office of Management and Budget (OMB), the Congressional Budget Office (CBO), and GAO have at various times noted that USF has not been recognized or treated as federal funds for several purposes. However, FCC has never confronted or assessed these issues in a comprehensive fashion and has only recently begun to address a few of these issues. In particular, FCC has recently concluded that as a permanent indefinite appropriation, USF is subject to the Antideficiency Act and its funding commitment decision letters constitute obligations for purposes of the Antideficiency Act. 
As explained below, we agree with FCC’s determination. However, FCC’s conclusions concerning the status of USF raise further issues related to the collection, deposit, obligation, and disbursement of those funds—issues that FCC needs to explore and resolve. Universal service has been a basic goal of telecommunications regulation since the 1950s, when FCC focused on increasing the availability of reasonably priced, basic telephone service. See Texas Office of Public Utility Counsel v. FCC, 183 F.3d 393, 405-406 (5th Cir. 1999), cert. denied sub nom. Celpage Inc. v. FCC, 530 U.S. 1210 (2000). FCC has not relied solely on market forces, but has used a combination of explicit and implicit subsidies to achieve this goal. Id. Prior to 1983, FCC used the regulation of AT&T’s internal rate structure to garner funds to support universal service. With the breakup of AT&T in 1983, FCC established a Universal Service Fund administered by the National Exchange Carrier Association (NECA). NECA is an association of incumbent local telephone companies, also established at the direction of the FCC. Among other things, NECA was to administer universal service through interstate access tariffs and the revenue distribution process for the nation’s local telephone companies. At that time, NECA, a nongovernmental entity, privately maintained the Universal Service Fund outside the U.S. Treasury. Section 254 of the Telecommunications Act of 1996 codified the concept of universal service and expanded it to include support for acquisition by schools and libraries of telecommunications and Internet services. Pub. L. No. 104-104, § 254, 110 Stat. 56 (1996) (classified at 47 U.S.C. § 254). The act defines universal service, generally, as a level of telecommunications services that FCC establishes periodically after taking into account various considerations, including the extent to which telecommunications services are essential to education, public health, and public safety. 47 U.S.C. 
§ 254(c)(1). The act also requires that “every telecommunications carrier that provides interstate telecommunications services shall contribute . . . to the specific, predictable, and sufficient mechanisms” established by FCC “to preserve and advance universal service.” Id., § 254(d). The act did not specify how FCC was to administer the E-rate program, but required FCC, acting on the recommendations of the Federal-State Joint Board, to define universal service and develop specific, predictable, and equitable support mechanisms. FCC designated the Universal Service Administrative Company (USAC), a nonprofit corporation that is a wholly owned subsidiary of NECA, as the administrator of the universal service mechanisms. USAC administers the program pursuant to FCC orders, rules, and directives. As part of its duties, USAC collects the carriers’ universal service contributions, which constitute the Universal Service Fund, and deposits them to a private bank account under USAC’s control and in USAC’s name. FCC has directed the use of USF to, among other things, subsidize advanced telecommunications services for schools and libraries in a program commonly referred to as the E-rate program. Pursuant to the E-rate program, eligible schools and libraries can apply annually to receive support and can spend the funding on specific eligible services and equipment, including telephone services, Internet access services, and the installation of internal wiring and other related items. Generally, FCC orders, rules, and directives, as well as procedures developed by USAC, establish the program’s criteria. USAC carries out the program’s day-to-day operations, such as answering inquiries from schools and libraries; processing and reviewing applications; making funding commitment decisions and issuing funding commitment decision letters; and collecting, managing, investing, and disbursing E-rate funds. Eligible schools and libraries may apply annually to receive E-rate support. 
The program places schools and libraries into various discount categories, based on indicators of need. As a result of the application of the discount rate to the cost of the service, the school or library pays a percentage of the cost for the service and the E-rate program covers the remainder. E-rate discounts range from 20 percent to 90 percent. Once the school or library has complied with the program’s requirements and entered into agreements with vendors for eligible services, the school or library must file a form with USAC noting the types and costs of the services being contracted for, the vendors providing the services, and the amount of discount being requested. USAC reviews the forms and issues funding commitment decision letters. The funding commitment decision letters notify the applicants of the decisions regarding their E-rate discounts. These funding commitment decision letters also notify the applicants that USAC will send the information on the approved E-rate discounts to the providers so that “preparations can be made to begin implementing . . . E-rate discount(s) upon the filing of . . . Form 486.” The applicant files FCC Form 486 to notify USAC that services have started and USAC can pay service provider invoices. Generally, the service provider seeks reimbursement from USAC for the discounted portion of the service, although the school or library also could pay the service provider in full and then seek reimbursement from USAC for the discount portion. What Is the Universal Service Fund? The precise phrasing of the questions regarding the nature of USF has varied over the years, including asking whether USF monies are federal funds, appropriated funds, or public funds and, if so, for what purposes. While the various fiscal statutes may use these different terms to describe the status of funds, we think the fundamental issue is what statutory controls involving the collection, deposit, obligation, and disbursement of funds apply to USF. 
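Returning to the discount mechanics described earlier in this section: the split between the program's share and the applicant's share is simple arithmetic. A minimal sketch (in Python, with a hypothetical dollar figure; the function name is ours, not the program's) might look like this:

```python
def erate_split(service_cost, discount_rate):
    """Split the cost of an eligible service between the E-rate program
    and the applicant school or library.

    discount_rate is the applicant's approved discount; E-rate discounts
    range from 20 to 90 percent, based on indicators of need.
    """
    if not 0.20 <= discount_rate <= 0.90:
        raise ValueError("discount must be between 20 and 90 percent")
    erate_share = service_cost * discount_rate     # reimbursed via USAC
    applicant_share = service_cost - erate_share   # paid by the applicant
    return erate_share, applicant_share

# Hypothetical example: $10,000 of eligible services at an 80 percent discount.
print(erate_split(10_000, 0.80))  # (8000.0, 2000.0)
```

Either payment path described in the text ends at the same split: the provider bills USAC for the discounted portion, or the applicant pays in full and seeks reimbursement of that same portion.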
As explained below, funds that are appropriated funds are subject, unless specifically exempted by law, to a variety of statutory provisions providing a scheme of funds controls. See B-257525, Nov. 30, 1994; 63 Comp. Gen. 31 (1983); 35 Comp. Gen. 436 (1956); B-204078.2, May 6, 1988. On the other hand, funds that are not appropriated funds are not subject to such controls unless the law specifically applies such controls. Thus, we believe the initial question is whether USF funds are appropriated funds. FCC has concluded that USF constitutes a permanent indefinite appropriation. We agree with FCC’s conclusion. Typical language of appropriation identifies a fund or account as an appropriation and authorizes an agency to enter into obligations and make disbursements out of available funds. For example, Congress utilizes such language in the annual appropriations acts. See 1 U.S.C. § 105 (requiring regular annual appropriations acts to bear the title “An Act making appropriations. . .”). Congress, however, appropriates funds in a variety of ways other than in regular annual appropriation acts. Indeed, our decisions and those of the courts so recognize. Thus, a statute that contains a specific direction to pay, and a designation of funds to be used, constitutes an appropriation. 63 Comp. Gen. 331 (1984); 13 Comp. Gen. 77 (1933). In these statutes, Congress (1) authorizes the collection of fees and their deposit into a particular fund, and (2) makes the fund available for expenditure for a specified purpose without further action by Congress. This authority to obligate or expend collections without further congressional action constitutes a continuing appropriation or a permanent appropriation of the collections. E.g., United Biscuit Co. v. Wirtz, 359 F.2d 206, 212 (D.C. Cir. 1965), cert. denied, 384 U.S. 971 (1966); 69 Comp. Gen. 260, 262 (1990); 73 Comp. Gen. 321 (1994). 
Our decisions are replete with examples of permanent appropriations, such as revolving funds and various special deposit funds, including mobile home inspection fees collected by the Secretary of Housing and Urban Development, licensing revenues received by the Commission on the Bicentennial, tolls and other receipts deposited in the Panama Canal Revolving Fund, user fees collected by the Saint Lawrence Seaway Development Corporation, user fees collected from tobacco producers to provide tobacco inspection, certification and other services, and user fees collected from firms using the Department of Agriculture’s meat grading services. It is not essential for Congress to expressly designate a fund as an appropriation or to use literal language of “appropriation,” so long as Congress authorizes the expenditure of fees or receipts collected and deposited to a specific account or fund. In cases where Congress does not intend these types of collections or funds to be considered “appropriated funds,” it explicitly states that in law. See e.g., 12 U.S.C. § 244 (the Federal Reserve Board levies assessments on its member banks to pay for its expenses and “funds derived from such assessments shall not be construed to be government funds or appropriated moneys”); 12 U.S.C. § 1422b(c) (the Office of Federal Housing Enterprise Oversight levies assessments upon the Federal Home Loan Banks and from other sources to pay its expenses, but such funds “shall not be construed to be government funds or appropriated monies, or subject to apportionment for the purposes of chapter 15 of title 31, or any other authority”). Like the above examples, USF’s current authority stems from a statutorily authorized collection of fees from telecommunication carriers, and expenditures for a specified purpose—that is, the various types of universal service. Thus, USF meets both elements of the definition of a permanent appropriation. 
We recognize that prior to the passage of the Telecommunications Act of 1996, there existed an administratively sanctioned universal service fund. With the Telecommunications Act of 1996, Congress specifically expanded the contribution base of the fund, statutorily mandated contributions into the fund, and designated the purposes for which the monies could be expended. These congressional actions established USF in a manner that meets the elements for a permanent appropriation, and Congress did not specify that USF should be considered anything other than an appropriation. Does the Antideficiency Act Apply to USF? Appropriated funds are subject to a variety of statutory controls and restrictions. These controls and restrictions, among other things, limit the purposes for which they may be used and provide a scheme of funds control. See e.g., 63 Comp. Gen. 110 (1983); B-257525, Nov. 30, 1994; B-228777, Aug. 26, 1988; B-223857, Feb. 27, 1987; 35 Comp. Gen. 436 (1956). A key component of this scheme of funds control is the Antideficiency Act. B-223857, Feb. 27, 1987. The Antideficiency Act has been termed “the cornerstone of congressional efforts to bind the executive branch of government to the limits on expenditure of appropriated funds.” Primarily, the purpose of the Antideficiency Act is to prevent the obligation and expenditure of funds in excess of the amounts available in an appropriation or in advance of the appropriation of funds. 31 U.S.C. § 1341(a)(1). FCC has determined that the Antideficiency Act applies to USF, and as explained below, we agree with FCC’s conclusion. The Antideficiency Act applies to an “officer or employee of the United States Government . . . mak[ing] or authoriz[ing] an expenditure or obligation . . . from an appropriation or fund.” 31 U.S.C. § 1341(a). 
As established above, USF is an “appropriation or fund.” The fact that USAC, a private entity whose employees are not federal officers or employees, is the administrator of the E-rate program and obligates and disburses funds from USF is not dispositive of the application of the Antideficiency Act. This is because, as the FCC recognizes, it, not USAC, is the entity that is legally responsible for the management and oversight of the E-rate program and FCC’s employees are federal officers and employees of the United States subject to the Antideficiency Act. Where entities operate with funds that are regarded as appropriated funds, such as some government corporations, they, too, are subject to the Antideficiency Act. See e.g., B-223857, Feb. 27, 1987 (funds available to Commodity Credit Corporation pursuant to borrowing authority are subject to Antideficiency Act); B-135075-O.M., Feb. 14, 1975 (Inter-American Foundation). The Antideficiency Act applies to permanent appropriations such as revolving funds and special funds. 72 Comp. Gen. 59 (1992) (Corps of Engineers Civil Works Revolving Fund subject to Antideficiency Act); B-120480, Sep. 6, 1967, B-247348, June 22, 1992, and B-260606, July 25, 1997 (GPO revolving funds subject to Antideficiency Act); 71 Comp. Gen. 224 (1992) (special fund that receives fees, reimbursements, and advances for services available to finance its operations is subject to Antideficiency Act). Where Congress intends for appropriated funds to be exempt from the application of statutory controls on the use of appropriations, including the Antideficiency Act, it does so expressly. See e.g., B-193573, Jan. 8, 1979; B-193573, Dec. 19, 1979; B-217578, Oct. 16, 1986 (Saint Lawrence Seaway Development Corporation has express statutory authority to determine the character and necessity of its obligations and is therefore exempt from many of the restrictions on the use of appropriated funds that would otherwise apply); B-197742, Aug. 
1, 1986 (Price-Anderson Act expressly exempts the Nuclear Regulatory Commission from Antideficiency Act prohibition against obligations or expenditures in advance or in excess of appropriations). There is no such exemption for FCC or USF from the prohibitions of the Antideficiency Act. Thus, USF is subject to the Antideficiency Act. Do the Funding Commitment Decision Letters Issued to Schools and Libraries Constitute Obligations? An important issue that arises from the application of the Antideficiency Act to USF is what actions constitute obligations chargeable against the fund. Understanding the concept of an obligation and properly recording obligations are important because an obligation serves as the basis for the scheme of funds control that Congress envisioned when it enacted fiscal laws such as the Antideficiency Act. B-300480, Apr. 9, 2003. For USF’s schools and libraries program, one of the main questions is whether the funding commitment decision letters issued to schools and libraries are properly regarded as obligations. FCC has determined that funding commitment decision letters constitute obligations. And again, as explained below, we agree with FCC’s determination. Under the Antideficiency Act, an agency may not incur an obligation in excess of the amount available to it in an appropriation or fund. 31 U.S.C. § 1341(a). Thus, proper recording of obligations with respect to the timing and amount of such obligations permits compliance with the Antideficiency Act by ensuring that agencies have adequate budget authority to cover all of their obligations. B-300480, Apr. 9, 2003. We have defined an “obligation” as a “definite commitment that creates a legal liability of the government for the payment of goods and services ordered or received.” Id. 
A legal liability is generally any duty, obligation or responsibility established by a statute, regulation, or court decision, or where the agency has agreed to assume responsibility in an interagency agreement, settlement agreement or similar legally binding document. Id. citing Black’s Law Dictionary 925 (7th ed. 1999). The definition of “obligation” also extends to “[a] legal duty on the part of the United States which constitutes a legal liability or which could mature into a legal liability by virtue of actions on the part of the other party beyond the control of the United States. . . .” Id. citing 42 Comp. Gen. 733 (1963); see also McDonnell Douglas Corp. v. United States, 37 Fed. Cl. 295, 301 (1997). The funding commitment decision letters provided to applicant schools and libraries notify them of the decisions regarding their E-rate discounts. In other words, the letters notify them whether their funding is approved and in what amounts. The funding commitment decision letters also notify schools and libraries that the information on the approved E-rate discounts is sent to the providers so that “preparations can be made to begin implementing . . . E-rate discount(s) upon the filing of . . . Form 486.” The applicant files FCC Form 486 to notify USAC that services have started and USAC can pay service provider invoices. At the time a school or library receives a funding commitment decision letter, the FCC has taken an action that accepts a “legal duty . . . which could mature into a legal liability by virtue of actions on the part of the grantee beyond the control of the United States.” Id. citing 42 Comp. Gen. 733, 734 (1963). In this instance, the funding commitment decision letter provides the school or library with the authority to obtain services from a provider with the commitment that it will receive a discount and the provider will be reimbursed for the discount provided. 
While the school or library could decide not to seek the services or the discount, so long as the funding commitment decision letter remains valid and outstanding, USAC and FCC no longer control USF’s liability; it is dependent on the actions taken by the other party—that is, the school or library. In our view, a recordable USF obligation is incurred at the time of issuance of the funding commitment decision letter indicating approval of the applicant’s discount. Thus, these obligations should be recorded in the amounts approved by the funding commitment decision letters. If at a later date, a particular applicant uses an amount less than the maximum or rejects funding, then the obligation amount can be adjusted or deobligated, respectively. Additional issues that remain to be resolved by FCC include whether other actions taken in the universal service program constitute obligations and the timing of and amounts of obligations that must be recorded. For example, this includes the projections and data submissions by USAC to FCC and by participants in the High Cost and Low Income Support Mechanisms to USAC. FCC has indicated that it is considering this issue and consulting with the Office of Management and Budget. FCC should also identify any other actions that may constitute recordable obligations and ensure those are properly recorded.
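The obligation treatment described above reduces to simple funds-control bookkeeping: each funding commitment decision letter (FCDL) is recorded as an obligation in its full approved amount when issued, the total of recorded obligations may never exceed the budget authority available in the fund, and amounts are adjusted downward or deobligated once actual use is known. The sketch below (Python; the class name, identifiers, and dollar amounts are illustrative, not drawn from the report) shows that bookkeeping:

```python
class UsfLedger:
    """Minimal sketch of obligation-based funds control for USF."""

    def __init__(self, budget_authority):
        self.budget_authority = budget_authority
        self.obligations = {}  # FCDL id -> recorded obligation amount

    def total_obligated(self):
        return sum(self.obligations.values())

    def issue_fcdl(self, fcdl_id, approved_amount):
        # Antideficiency-style check: never obligate beyond available authority.
        if self.total_obligated() + approved_amount > self.budget_authority:
            raise RuntimeError("obligation would exceed available budget authority")
        # Record the obligation in the full approved amount at issuance.
        self.obligations[fcdl_id] = approved_amount

    def adjust(self, fcdl_id, actual_amount):
        # Downward adjustment (or deobligation to zero) once actual use is known.
        self.obligations[fcdl_id] = actual_amount


ledger = UsfLedger(budget_authority=1_000_000)   # hypothetical figures
ledger.issue_fcdl("FCDL-001", 600_000)           # recorded at the approved amount
ledger.adjust("FCDL-001", 450_000)               # applicant used less than the maximum
ledger.issue_fcdl("FCDL-002", 500_000)           # fits within the freed-up authority
```

The point of the sketch is the ordering: without the adjustment step, the second letter could not have issued, which is why recording at issuance, rather than at disbursement, is what makes the Antideficiency check meaningful.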
Since 1998, the Federal Communications Commission's (FCC) E-rate program has committed more than $13 billion to help schools and libraries acquire Internet and telecommunications services. As steward of the program, FCC must ensure that participants use E-rate funds appropriately and that there is managerial and financial accountability surrounding the funds. This testimony is based on GAO's February 2005 report GAO-05-151, which reviewed (1) the effect of the current structure of the E-rate program on FCC's management of the program, including the applicability of the Antideficiency Act, (2) FCC's development and use of E-rate performance goals and measures, and (3) the effectiveness of FCC's program oversight mechanisms. FCC established E-rate as a multibillion-dollar program operating under an organizational structure unusual to the federal government, but never conducted a comprehensive assessment to determine which federal requirements, policies, and practices apply to the program, to the Universal Service Administrative Company, and to the Universal Service Fund itself. FCC has addressed these issues on a case-by-case basis, but this has put FCC and the E-rate program in the position of reacting to problems as they occur rather than setting up an organization and internal controls designed to ensure compliance with applicable laws. With regard to the Antideficiency Act, we agree with FCC's conclusions that the Universal Service Fund is a permanent indefinite appropriation, is subject to that act, and that the issuance of E-rate funding commitment letters constitutes obligations for purposes of the act. We believe that Congress should consider either granting the Universal Service Fund a two- or three-year exemption from the Antideficiency Act or crafting a limited exemption that would provide management flexibility. For example, Congress could specify that FCC could use certain receivables or assets as budgetary resources. 
These more limited solutions would allow time for the National Academy of Public Administration to complete its study of the Universal Service Fund program and report its findings to FCC. Congress and FCC could then comprehensively assess, based on decisions concerning the structure of the program, which federal requirements, policies, and practices should apply to the fund and to any entities administering the program. It could then be determined whether a permanent and complete exemption from the Antideficiency Act is warranted. FCC has not developed useful performance goals and measures for assessing and managing the E-rate program. The goals established for fiscal years 2000 through 2002 focused on the percentage of public schools connected to the Internet, but the data used to measure performance did not isolate the impact of E-rate funding from other sources of funding, such as state and local government. In its 2003 assessment of the program, OMB concluded that there was no way to tell whether the program has resulted in the cost-effective deployment and use of advanced telecommunications services. In response, FCC is working with OMB on developing new E-rate measures. According to FCC officials, oversight of the program is primarily handled through agency rulemaking procedures, beneficiary audits, and appeals decisions. FCC's rulemakings, however, have often lacked specificity, which has affected the recovery of funds for program violations. FCC has also been slow to respond to beneficiary audit findings and make full use of them to strengthen the program. In addition, the small number of these audits completed to date does not provide a basis for accurately assessing the level of fraud, waste, and abuse occurring in the program. According to FCC officials, there is also a substantial backlog of E-rate appeals.
AFWCF relies on sales revenue rather than regular appropriations to finance its continuing operations. AFWCF is intended to (1) generate sufficient resources to cover the full costs of its operations and (2) operate on a break-even basis over time—that is, neither make a gain nor incur a loss. Customers use appropriated funds, primarily operations and maintenance appropriations, to finance orders placed with AFWCF. AFWCF includes a maintenance division that provides the Air Force with the in-house industrial capability to repair and overhaul a wide range of weapon systems and military equipment. The Air Force maintains three air logistics centers (ALC), which are designed to retain, at a minimum, a ready, controlled source of technical competence and resources to meet military requirements. Table 1 describes the locations and principal work for each ALC. Carryover is the reported dollar value of work that has been ordered and funded (obligated) by customers but not completed by working capital fund activities at the end of the fiscal year. Carryover consists of both the unfinished portion of work started but not completed and work that has not yet begun. Some carryover is necessary at the end of the fiscal year if working capital funds are to operate efficiently and effectively. For example, if customers do not receive new appropriations at the beginning of the fiscal year, carryover is necessary to ensure that working capital fund activities have enough work to ensure a smooth transition between fiscal years. Too little carryover could result in some personnel not having work to perform at the beginning of the fiscal year. On the other hand, too much carryover could result in an activity group receiving funds from customers in one fiscal year but not performing the work until well into the next fiscal year. 
By minimizing the amount of carryover, DOD can use its resources in the most effective manner and minimize the backlog of work and “banking” of related funding for work and programs to be performed in subsequent years. DOD’s carryover policy is included in DOD Financial Management Regulation 7000.14-R, volume 2B, chapter 9. Under the policy, the allowable amount of carryover is based on the amount of new orders received in a given year and the outlay rate of the customers’ appropriations financing the work. For example, the Air Force depots received about $1.4 billion in new orders funded with the operation and maintenance, Air Force appropriation—one of many appropriations funding orders received in fiscal year 2010. The DOD outlay rate for this appropriation was 66 percent. Therefore, the amount of funds the AFWCF was allowed to carry over into fiscal year 2011 was $476 million ($1.4 billion multiplied by 34 percent, which represents 1 minus the outlay rate for the underlying appropriation). The DOD carryover policy provides that the work on these fiscal year 2010 orders is expected to be completed by the end of fiscal year 2011, and therefore, carryover is only allowed for the first year. According to the DOD regulation, this carryover metric allows for an analytically based approach that holds working capital fund activities to the same standard as general fund execution and allows for meaningful budget execution analysis. In accordance with the DOD Financial Management Regulation, (1) nonfederal orders, (2) non-DOD orders, (3) foreign military sales, (4) work related to base realignment and closure, and (5) work-in-progress are excluded from the carryover calculation. The reported actual carryover (net of exclusions) is then compared to the amount of allowable carryover using the above-described outlay-rate method to determine whether the actual amount is over or under the allowable carryover amount. 
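The outlay-rate method described above reduces to a one-line calculation. The sketch below, in Python, reworks the report's fiscal year 2010 example; the function and variable names are our own.

```python
def allowable_carryover(new_orders, outlay_rate):
    """DOD outlay-rate method: the allowable carryover on a year's new
    orders is the share not expected to be outlaid in the first year,
    i.e., new orders multiplied by (1 minus the outlay rate)."""
    return new_orders * (1 - outlay_rate)

# Report example: $1.4 billion in fiscal year 2010 orders funded with
# the operation and maintenance, Air Force appropriation, which had a
# 66 percent first-year outlay rate.
allowed = allowable_carryover(1_400_000_000, 0.66)
print(f"allowable carryover: ${allowed / 1e6:,.0f} million")
# allowable carryover: $476 million
```

Actual carryover, net of the exclusions listed above, is then compared against this allowance to determine whether carryover is over or under the allowable amount.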
The Air Force made changes to the calculation of carryover that reduced both the carryover and the allowable amount of carryover. These changes included the (1) removal of contract depot maintenance from AFWCF and (2) consolidation of the AFWCF depot maintenance activity group with the material support division of the supply management activity group, which eliminated internal transactions between supply and maintenance. As stated previously, the AFWCF depot maintenance activity group supports combat readiness by providing depot repair services necessary to keep Air Force units operating worldwide. The activity group performed the work either in-house at its three ALCs or through contracts with private industry (referred to as contract depot maintenance). Under the contract depot maintenance process, the activity group accepted customer orders that obligated the customers’ funds. The customers used the activity group as their purchasing agent when they needed a contractor to perform depot-level maintenance work. The activity group awarded the contracts and managed the work performed by the contractors. Beginning in fiscal year 2003, the Air Force began transitioning contract depot maintenance out of AFWCF. According to the fiscal year 2010 AFWCF budget, the removal of contract depot maintenance from AFWCF brings the user and provider of contract depot maintenance services closer together and removes the working capital fund from its current role as the “middleman.” The action allows depot managers to dedicate time and effort to in-house production. AFWCF stopped accepting new contract depot maintenance orders at the end of fiscal year 2008 and, at the time of our review, expected to (1) complete work on fiscal year 2008 and prior years’ contract depot maintenance orders and (2) close out all related accounting records by the end of fiscal year 2011. 
As a result of the change, AFWCF no longer included contract depot maintenance orders in its calculation of the allowable carryover amounts starting in fiscal year 2009. In fiscal year 2009, AFWCF consolidated the depot maintenance activity group with the material support division of the supply management activity group to form a new activity group—Consolidated Sustainment Activity Group—to manage depot-level repairable and consumable spares unique to the Air Force as well as maintenance services. According to the fiscal year 2010 AFWCF budget and Air Force officials, this consolidation eliminated the recording of internal transactions such as orders, revenue, and carryover amounts between depot maintenance and supply within the AFWCF. The elimination of the recording of orders reduced the amount of carryover as well as the allowable amount of carryover since the orders were not included in the dollar amount of work performed. The fiscal year 2011 AFWCF budget indicates that the internal AFWCF transactions were eliminated beginning in fiscal year 2009. As a result, starting in fiscal year 2009, the only transactions affecting AFWCF carryover are orders received from customers that are not part of the AFWCF, called external customers, to perform depot maintenance work. In its budget information, the Air Force consistently underestimated the amount of carryover that would exceed the allowable amount from fiscal year 2006 through fiscal year 2010. In 3 of the 5 years, the actual amount of carryover exceeded the budgeted amount by over $250 million. In fiscal year 2010, Air Force headquarters and Air Force Materiel Command (AFMC) began implementing actions to improve the accuracy of budgeting for AFWCF carryover such as incorporating overseas contingency operations (OCO) funded orders in the fiscal year 2012 AFWCF budget. 
These actions have the potential to improve the accuracy of budgeting for AFWCF carryover, but their success can be determined only when budgeted carryover information is compared to actual results. From fiscal year 2006 through fiscal year 2010, the AFWCF budget consistently underestimated the amount of carryover that would exceed the allowable amount. Reliable budget information on carryover is critical since decision makers use this information when reviewing the AFWCF budgets. For example, as shown in table 2 below, the fiscal year 2010 AFWCF budget showed that the carryover would exceed the allowable amount by $85 million for fiscal year 2010. Congressional defense committees, relying on this information, reduced the AFWCF fiscal year 2010 customers’ budgets by $85 million. Table 2 shows the amount of budgeted and actual AFWCF carryover that was over or under the allowable amount and the actual amount exceeding the budgeted amount for fiscal years 2006 through 2010. We analyzed the carryover information for 3 years—fiscal years 2006, 2007, and 2009—to determine contributing factors for the differences between budgeted and actual amounts because in these years the budgeted amounts underestimated the actual amounts by the largest amounts. According to Air Force headquarters officials, several factors influenced the differences between budgeted and actual amounts including (1) changes in the outlay rates used to compute the allowable amount of carryover, (2) changes in customer orders, (3) issues affecting production of work performed on external orders such as personnel and parts shortages, and (4) removal of contract depot maintenance from AFWCF. Specific examples of these factors are discussed below. Since the actual outlay rates were higher than the outlay rates used for budgeting for certain appropriations funding orders received by AFWCF, the actual allowable carryover amount was less than the budgeted amount. 
For example, our analysis of Air Force data determined that the outlay rate used to compute the allowable amount of carryover from customers that were internal to AFWCF changed from 61 percent to 75 percent for fiscal year 2006 between budget and execution. Because the rate increased by 14 percentage points, the allowable amount of carryover was less than the planned amount for fiscal year 2006. The budgets underestimated the amount of new orders that would be received from customers external to AFWCF for fiscal years 2006, 2007, and 2009. For example, the actual new orders exceeded budgeted new orders by $242 million in fiscal year 2009 due to the Air Force not including OCO-funded orders in the AFWCF budget. This contributed to carryover being higher than planned in fiscal year 2009. For fiscal year 2009, the AFWCF encountered several problems that affected production (work performed) and contributed to carryover being higher than planned. Specifically, the Air Force forecasted a declining workload for the ALCs in fiscal year 2009. As a result, the Air Force directed AFMC to reduce its workforce at the ALCs. However, workload increased instead of decreased in fiscal year 2009. Furthermore, work in several areas, such as engines, was delayed because the depots could not obtain the spare parts when needed to perform the work. As a result, the ALCs generated less revenue than customer orders received, thus increasing the carryover amount in fiscal year 2009. These issues are discussed in more detail later in the report. The Air Force’s action to remove contract depot maintenance from the AFWCF was delayed by one year after the Air Force developed the fiscal year 2007 AFWCF budget. Because the contract work was not removed in fiscal year 2007 as budgeted, the budgeted fiscal year 2007 carryover information presented in the fiscal year 2007 budget was understated compared to the actual amounts as reported in the fiscal year 2009 budget. 
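The first factor, a shift in outlay rates between budget and execution, can be illustrated with the fiscal year 2006 rates cited above; the $1 billion order amount here is purely hypothetical.

```python
def allowable_carryover(new_orders, outlay_rate):
    # Allowable carryover = new orders x (1 - outlay rate)
    return new_orders * (1 - outlay_rate)

orders = 1_000_000_000  # hypothetical $1 billion in new orders

budgeted_allowance = allowable_carryover(orders, 0.61)  # rate used in the budget
actual_allowance = allowable_carryover(orders, 0.75)    # rate at execution

# The 14-percentage-point rate increase shrinks the allowance from
# $390 million to $250 million, so the same actual carryover is far
# more likely to exceed the allowable amount.
print(f"allowance fell by ${(budgeted_allowance - actual_allowance) / 1e6:.0f} million")
```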
In fiscal year 2010, Air Force headquarters and AFMC began implementing actions to improve the accuracy of budgeting for AFWCF carryover. These actions are, in part, in response to a fiscal year 2009 carryover balance that exceeded its plan by about $573 million and an $85 million reduction in the AFWCF fiscal year 2010 budget by the congressional defense committees due to projected excess carryover. First, the Air Force began including OCO-funded orders in the fiscal year 2012 AFWCF budget. Second, in the summer of 2010, the Air Force requested and received from OUSD (Comptroller) an exemption that allowed AFWCF to use an alternative outlay rate for software maintenance workloads when calculating the allowable amount of carryover (discussed in the next section of the report). The Air Force requested the alternative outlay rate for software workload because the work is fully funded upfront but requires years to complete and, in many cases, requires the procurement of hardware from vendors. The Air Force stated that the alternative outlay rate is expected to reduce future variances between budgeted and actual allowable carryover. Third, AFMC is taking several steps aimed at improving workload and budget forecasts. Specifically, in December 2010, the Air Force developed a process which improves the coordination among organizations (systems program office, maintenance wings, and supply personnel) that affect the performance of depot maintenance work. As workload requirements change, this initiative includes an approval process to adjust future budgets and workload estimates. The Air Force anticipates that these changes will improve on-time aircraft and missile performance and reduce variances between budgeted and actual carryover. The success of these actions can be determined only when future AFWCF budgets are analyzed and compared to actual results. 
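The alternative outlay rate for software workloads works through the same allowance formula: because multiyear software efforts outlay more slowly, a lower rate yields a larger allowance. The rates and order amount below are hypothetical, chosen only to show the direction of the effect.

```python
def allowable_carryover(new_orders, outlay_rate):
    # Allowable carryover = new orders x (1 - outlay rate)
    return new_orders * (1 - outlay_rate)

software_orders = 200_000_000  # hypothetical software upgrade, fully funded upfront

standard_allowance = allowable_carryover(software_orders, 0.66)     # standard rate (hypothetical)
alternative_allowance = allowable_carryover(software_orders, 0.30)  # slower, multiyear rate (hypothetical)

# The slower alternative rate roughly doubles the allowance
# ($68 million versus $140 million here), reducing the variance between
# budgeted and actual allowable carryover for work that takes years to complete.
```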
Our analysis of AFWCF reports showed that in each year from fiscal year 2006 through fiscal year 2010 actual carryover exceeded the allowable carryover amounts. During the 5-year period, the amount of carryover that exceeded allowable amounts ranged from $4 million to $568 million. The Air Force began increasing the allowable amount of carryover (1) for orders funding software work in fiscal year 2010 and (2) for orders funded with multiyear appropriations in fiscal years 2009 and 2010. Concerning the software work, the Air Force requested the exemption because large software upgrades require full funding upfront and years to complete. In many instances, software development is predicated on procuring hardware that can take many months to obtain. The Air Force requested the exemption in writing and received written approval from OUSD (Comptroller) to increase the allowable amount of carryover for software work. Concerning the use of orders funded with multiyear appropriations, the Air Force based this decision on a revision to the carryover-allowance methodology in the DOD Financial Management Regulation. However, the section in this regulation cited by the Air Force pertains only to Army ordnance working capital fund activities that perform a manufacturing function. Furthermore, the Air Force neither requested in writing nor received written approval from OUSD (Comptroller) for an exemption increasing the allowable amount of carryover for orders funded with multiyear appropriations. Therefore, the Air Force decision to increase the allowable amount of carryover for orders funded with multiyear appropriations was not in accordance with the DOD Financial Management Regulation. Our analysis of the budgets and supporting data showed that AFWCF carryover exceeded its allowable carryover each year for a 5-year period from fiscal year 2006 through fiscal year 2010. 
The amount of carryover exceeding the allowable amount ranged from $4 million in fiscal year 2006 to $568 million in fiscal year 2009. Table 3 shows AFWCF actual carryover, allowable carryover, and the amount over allowable carryover for fiscal years 2006 through 2010. Since the actual carryover exceeded the allowable by $568 million at the end of fiscal year 2009, Air Force headquarters and AFMC held weekly meetings, beginning in January 2010, to discuss the reduction of carryover. Topics discussed at these meetings included (1) identifying work that was driving the carryover, (2) hiring additional personnel to perform work that would reduce carryover, (3) identifying problems with the performance of work due to the shortage of parts, and (4) reviewing workloads that had unusual problems. Also, the carryover information was provided bimonthly to the Under Secretary of the Air Force since the carryover data had budget and operational implications. After the carryover exceeded the allowable amount by $568 million at the end of fiscal year 2009, AFMC and the ALCs took a more proactive approach in the budgeting and management of carryover. The ALCs are (1) reviewing and validating the amount of carryover on existing customer orders and (2) reviewing customer orders prior to acceptance to ensure that all project orders contain a specific description of the work and deliverables, and period of performance. Based on their reviews of prior years’ customer orders, the ALCs either deobligated or completed work on $72 million of orders between May and September 2010, which reduced carryover by that amount. For fiscal years 2009 and 2010, the Air Force took additional exemptions, not taken in previous years, that increased the allowable carryover amounts by $115 million and $229 million, respectively. 
These exemptions were for (1) orders involving the development of software for weapon systems and test equipment ($104 million in fiscal year 2010) and (2) prior-year orders financed with multiyear funds, such as procurement and research, development, test, and evaluation appropriations ($115 million in fiscal year 2009 and $125 million in fiscal year 2010). Concerning the orders for software, the Air Force requested the exemption, in writing, from OUSD (Comptroller) on June 23, 2010, and OUSD (Comptroller) approved the exemption, in writing, on July 12, 2010. The Air Force requested that it use alternative outlay rates for calculating the allowable carryover for software projects based on attributes of the work and historical information. The Air Force requested the exemption because large software upgrades to (1) weapon systems or (2) equipment to test weapon systems or parts, such as avionic parts, require full funding upfront but take years to complete. In many instances, software development is predicated on procuring hardware that can take many months to obtain. Furthermore, software work requires time to identify, code, test, flight test, and document the work performed. This work could take 4 to 5 years to complete. Since the software work is predicated on the Air Force obtaining equipment from vendors, we believe the Air Force’s use of alternative outlay rates based on historical information for software projects is reasonable. Concerning the prior-year orders financed with multiyear funds, Air Force headquarters officials informed us that they consulted with OUSD (Comptroller) officials to discuss a revision to the carryover-allowance methodology in the DOD Financial Management Regulation. 
Based on verbal discussions with OUSD (Comptroller) officials, Air Force officials concluded that the DOD Financial Management Regulation authorized the use of second-year outlay rates for orders funded with multiyear appropriations, such as procurement and research, development, test, and evaluation appropriations. The Air Force first applied this exemption in its fiscal year 2011 budget, which increased the calculation of allowable carryover for fiscal year 2009 and therefore decreased the amount of actual carryover that was over the allowable amount for fiscal year 2009. The Air Force applied this exemption again in its fiscal year 2012 AFWCF budget, which increased the allowable amount of carryover by $125 million, $74 million, and $90 million for fiscal years 2010, 2011, and 2012, respectively. We requested the Air Force’s written request for this exemption and the written approval from OUSD (Comptroller). The Air Force and OUSD (Comptroller) could not provide us with any documentation. The DOD Financial Management Regulation requires the Air Force to request approval for the exemption in writing from the Director for Revolving Funds, OUSD (Comptroller). Furthermore, the Air Force’s exemption for multiyear appropriations was based on a provision added to the DOD Financial Management Regulation on Army ordnance activities that perform a manufacturing function. This provision in the regulation does not pertain to the Air Force. Therefore, the Air Force decision to increase the allowable amount of carryover for orders funded with multiyear appropriations was not in accordance with the DOD Financial Management Regulation. Carryover related to external depot maintenance work increased from $1 billion at the end of fiscal year 2006 to $1.9 billion at the end of fiscal year 2010. Our analysis of ALC depot maintenance reports and discussions with Air Force officials identified four primary reasons for this increase. 
First, the Air Force underestimated its forecasted workload requirements, that is, the number of hours of depot maintenance work to be performed in repairing assets such as aircraft. Second, because the Air Force believed its depot maintenance workload would decrease, the Air Force directed AFMC to reduce its workforce in November 2007. While the ALCs reduced their workforce by about 2,000 civilian personnel, the actual workload and related funding increased instead of decreased—thus resulting in personnel shortages. Third, during the 5-year period, the Air Force budget underestimated the amount of funds on new orders that would be received from customers, and the work performed by the ALCs did not keep pace with the increase in funds received on new orders from year to year. Fourth, the ALCs could not obtain parts when needed to perform repair work, which contributed to the growth of carryover. The Air Force has taken or is taking actions to address these problems, such as hiring personnel to perform depot maintenance work and including OCO-funded orders in the fiscal year 2012 AFWCF budget. From fiscal years 2006 through 2010, the ALCs’ in-house carryover increased from $1 billion to $1.9 billion on orders received from customers external to AFWCF. The carryover increased because the dollar amount of new orders exceeded the dollar amount of work performed (revenue) for every year from fiscal year 2006 through fiscal year 2010. As a result, carryover increased from 4 months of work at September 30, 2006, to 6.9 months of work at September 30, 2010. The carryover reached a high point of 7.1 months of work for fiscal year 2009. Figure 1 shows the ALCs’ new orders, revenue, and carryover for fiscal years 2006 through 2010 on orders received from customers external to AFWCF. In order for the ALCs to operate efficiently and effectively and to accomplish depot maintenance work within planned time frames that minimize carryover, the Air Force needs to plan for several key elements. 
First, the Air Force needs to accurately forecast workload requirements on the number of hours of depot maintenance work to be performed repairing assets such as aircraft, engines, and missiles. Second, the ALCs need appropriate levels of facilities and support equipment available to support the forecasted workload. Third, the assets, such as aircraft, needing repair must be available at the ALC as planned to ensure that work can begin on the assets as scheduled. Fourth, the ALCs need to have the right number of personnel with the right skill mix to perform the work. Fifth, the DOD supply system must maintain the right mix and sufficient quantities of spare parts to satisfy the projected workload. Finally, Air Force depot maintenance customers need to properly fund the work, as budgeted, to be performed. For the process to work correctly and seamlessly, these elements must occur and be properly synchronized. If carryover becomes too high or too low, this is an indication that one or more of the six elements may not be working properly. Of the six elements above, we determined that the ALCs encountered problems that contributed to carryover for four of these elements: (1) forecasting workload requirements on the number of hours of depot maintenance work to be performed, (2) determining the right number of civilian personnel to perform the depot maintenance work, (3) budgeting for new orders, and (4) obtaining parts to perform the work. Specific examples of problems experienced by ALCs contributing to carryover are provided in appendix II. Accurately forecasting workload requirements is essential for ensuring that needed facilities and support equipment, personnel, and spare parts are available to support the planned workload to keep the ALCs operating efficiently. 
However, the Air Force underestimated its forecasted workload requirements on the number of hours of depot maintenance work to be performed from fiscal year 2007 through fiscal year 2010, especially in fiscal year 2009. According to the Air Force’s Workload Review Guidance and AFMC officials, AFMC and the ALCs evaluate their future planned workload and develop workload forecasts by converting anticipated customer funding into the number of hours required to perform the work. The Air Force develops two forecasts for a specific fiscal year: one 18 months before the fiscal year begins and another 6 months before. Figure 2 shows the Air Force’s 18- and 6-month forecast and actual depot maintenance workload requirements for fiscal years 2007 through 2010. As shown in figure 2, the Air Force anticipated in its 18-month forecast that its workload requirements would steadily decrease from 22.2 million hours in fiscal year 2007 to 20.2 million hours in fiscal year 2010—a reduction of 2 million hours. The reduction in workload requirements was included in a November 2007 Air Force memorandum. In that memorandum, the Air Force provided two reasons for the anticipated workload decrease: (1) the ALCs were more efficient due to the implementation of transformation efforts focused on improving operational performance and reducing weapon system sustainment costs that began in fiscal year 2003, and (2) approved retirements of aircraft, such as the KC-135 platforms, would reduce depot maintenance workload at the ALCs. In addition, Air Force headquarters officials informed us that at the beginning of fiscal year 2008, the Air Force had not seen an increase in depot maintenance work as a result of OCO. As a result, the Air Force directed AFMC to reduce its total workforce to support the forecasted workload. (Workforce reductions are discussed in the next section.) 
The Air Force anticipated a decrease in workload requirements in its 18-month forecast, but the actual workload requirements increased by 2.1 million hours from fiscal year 2007 to fiscal year 2010. Over the same 4-year period, carryover increased from $1.1 billion in fiscal year 2007 to $1.9 billion in fiscal year 2010. About 65 percent of the increase occurred in fiscal year 2009 (see fig. 1). Specifically, the fiscal year 2009 actual workload requirements exceeded the 18- and 6-month forecasts by 1.9 million and 1.7 million hours, respectively, due to (1) additional depot maintenance work on aircraft that were not retired as planned and (2) the 2009 actual inductions for aircraft and engines exceeding forecasted inductions. For example, the Air Force forecasted that it would induct 596 aircraft for depot maintenance work at the ALCs in fiscal year 2009, but 691 aircraft were actually inducted—an increase of 95 aircraft or 16 percent. Significant variance from forecasted workload on the number of hours of work to be performed, and the effect it has on other decisions such as determining personnel levels, has a direct effect on carryover balances. When forecasts are significantly different from results, carryover can increase significantly, as was the case in fiscal year 2009. Having the right number of personnel with the right skill mix to perform depot maintenance work is essential for the ALCs to operate in an efficient and effective manner. However, the ALCs reduced their workforce in fiscal year 2008 and the first 4 months of fiscal year 2009, which caused personnel shortages and contributed to growth in carryover amounts for fiscal years 2008, 2009, and 2010. Personnel are a critical component in the ALCs’ ability to repair and maintain an aging Air Force fleet of fighters, bombers, and cargo aircraft. 
In a November 2007 Air Force memorandum, the Air Force stated that “while overall workload is decreasing, we are seeing manpower growth instead.” As a result, the Air Force directed AFMC to reduce its total workforce to support the forecasted workload. The following figure provides AFMC monthly civilian workforce totals for fiscal years 2006 through 2010. In the 14 months immediately following the issuance of the November 2007 memorandum, AFMC reduced its workforce by about 2,000 civilian personnel—primarily through attrition and buy-out incentives. According to AFMC and ALC officials, these personnel reductions significantly reduced the operational capabilities at the ALCs and, coupled with the increase in orders, led directly to increased carryover amounts from fiscal years 2008 through 2010. In the first half of fiscal year 2009, the Air Force determined that the workforce reductions were not warranted because the dollar amount of external new orders (workload) received by the ALCs increased instead of decreasing. For example, the ALCs received $3.4 billion of external new orders in fiscal year 2009—about a $285 million increase over fiscal year 2008 orders. In order to meet higher workload demands and limit the growth in fiscal years 2009 and 2010 carryover amounts, the ALCs began hiring personnel in fiscal year 2009. Most of the hiring occurred at the Oklahoma City and Warner Robins ALCs. For example, the Oklahoma City ALC increased civilian personnel from 7,073 to 8,848 in a 20-month period beginning in February 2009—a 25 percent increase. While increasing the workforce has helped the ALCs to reduce the growth in carryover, ALC officials informed us that the new personnel lacked the experience of the personnel who left in fiscal year 2008 and the first half of fiscal year 2009. As a result, the new personnel were not always as efficient and required experienced workers to train them, reducing the productivity of the existing workforce. 
Further, the ALCs required time to ramp up hiring and train new personnel to be certified to repair weapon systems. AFMC and ALC officials stated that the ALCs should reach their projected personnel levels in fiscal year 2011. Accurate budgets on the amount of external new orders to be received are essential for the ALCs to plan their work, such as determining the right number of personnel needed. However, from fiscal year 2006 through fiscal year 2010, the Air Force consistently underestimated its new orders when developing its AFWCF budgets for work performed by the ALCs on orders received from customers that were external to AFWCF. Further, for fiscal years 2009 and 2010, actual new orders exceeded budgeted orders by $242 million and $597 million, respectively—the largest differences in the 5-year period. Table 4 shows the dollar amount of actual and budgeted new orders for fiscal years 2006 through 2010. When developing its budgets for new orders for fiscal year 2006 through fiscal year 2010, Air Force officials informed us they did not include orders for work financed with OCO funds. However, for fiscal years 2006 through 2010, the ALCs received $1.7 billion in work financed with OCO funds; the majority of these orders ($1 billion) was received in fiscal years 2009 and 2010. Air Force officials told us that they did not include OCO orders in the budget for two reasons. First, customers’ OCO budgets were finalized and submitted later in the calendar year than the base budget; thus, the amount of OCO orders was not fully determined when the AFWCF budget was completed and submitted. Second, for fiscal years 2006 through 2008, actual orders varied from budgeted orders by about $218 million or less, and Air Force officials said that there was enough flexibility with the AFWCF to perform the additional amount of work, such as having employees work overtime. 
While the difference between actual and budgeted orders ranged from $74 million to $218 million from fiscal years 2006 through 2008, the difference grew in fiscal years 2009 and 2010 primarily due to an increase in OCO-funded orders. To correct this problem, the Air Force began including OCO-funded orders in the fiscal year 2012 AFWCF budget. Without the DOD supply system maintaining the right mix and sufficient quantities of spare parts, the ALCs cannot complete funded workload in a timely and efficient manner. However, our analysis of Air Force data and interviews with ALC officials found that parts shortages at the ALCs have contributed to the growth of carryover. Air Force operations have grown significantly in support of OCO. These higher operational levels have resulted in increased wear on the Air Force’s aging fleet of aircraft, such as the KC-135 and C-130, and engines, such as the F110-100 and F108-100, resulting in a greater demand for spare parts to repair them. When shortages of parts occur, (1) ALC work may be delayed until the parts are available in the supply system or are manufactured by the ALCs, potentially increasing the carryover amounts at year end, or (2) costs may increase from the time-consuming efforts taken to obtain (cannibalize) parts from other aircraft or engines to continue the repair process. Our analysis of Air Force data showed that the average monthly backorders for spare parts at the ALCs have grown significantly in recent years. From fiscal years 2008 to 2010, average backorders at the ALCs grew by 44 percent. The Defense Logistics Agency and the Air Force’s Global Logistics Support Center were the ALCs’ primary supply sources for acquiring spare parts. Table 5 provides the ALCs’ average monthly backorders. 
According to ALC officials, backorders for spare parts grew because the supply system did not maintain the right mix or sufficient quantities of spare parts on hand to meet the higher-than-projected workload requirements experienced in fiscal years 2009 and 2010. For example, Oklahoma City Center officials informed us that the F108-100 engine program experienced a 60 percent increase in overhaul workload for these engines from fiscal years 2008 to 2009, creating shortages of parts such as engine mounts and compressor discharge nozzle cases. In addition, over the 3-year period the average age of backorders for spare parts grew in all age categories. Spare parts on backorder can delay work and potentially increase the carryover amounts. Table 6 provides the average monthly backorders for the three ALCs by age category. To perform the required repair work and minimize the impact of parts shortages, the ALCs have used other methods to obtain needed parts, such as obtaining parts from other aircraft (known as cannibalization), fabricating parts, or using their local procurement authority. While these alternative methods allowed work to continue, obtaining the needed parts this way was inefficient. For example, if an aircraft mechanic does not receive spare parts from the supply system, the mechanic may cannibalize parts from other aircraft. According to reports, the three ALCs cannibalized 5,189, 5,447, and 5,667 items in fiscal years 2008, 2009, and 2010, respectively. According to officials, the ALCs can cannibalize parts in the short term to resolve spare part shortages; however, in the long term, the ALCs need the supply system to provide the needed parts to continue operations. We reported in March 2007 that the basic challenge of inventory management is having the proper amount of items on hand when required. 
If inventory levels are too low, DOD and its components may experience supply shortages and be unable to satisfy customer demands. If inventory levels are too high, money is invested in items that may never be used. Because of ineffective and inefficient inventory management practices and procedures, since 1990 we have identified DOD supply chain management as a high-risk area. DOD has acknowledged the longstanding problems concerning its inventory management and has actions under way to address them. With the objective of reducing the acquisition and storage of secondary item inventory that is excess to requirements, section 328 of the National Defense Authorization Act for Fiscal Year 2010 required the Secretary of Defense to submit to congressional defense committees a comprehensive plan for improving the inventory management systems of the military departments and the Defense Logistics Agency. On November 8, 2010, DOD submitted its Comprehensive Inventory Management Improvement Plan to the congressional defense committees. Section 328 also requires GAO to submit to the congressional defense committees an assessment of the extent to which the plan meets the specified requirements no later than 60 days after the plan’s submission. We assessed the plan and found that it addressed each of the eight required elements in section 328. Section 328 also requires GAO to submit another report to the congressional defense committees not later than 18 months after DOD’s plan is submitted. The second report is to document our assessment of the extent to which the plan has been effectively implemented by each military department and by the Defense Logistics Agency. Since DOD recently issued its plan in November 2010 to improve the management of inventory and we will be assessing the implementation of the plan, we are not making any recommendations in this report on the parts shortages. 
However, until DOD resolves its inventory problems, the ALCs will likely continue to be affected by parts shortages or other supply chain management problems that affect their efficiency as well as the dollar amount of carryover. Reliable carryover information is essential for Congress and DOD to perform their oversight responsibilities, including reviewing and making well-informed decisions on DOD’s budget. However, the Air Force underestimated the work to be performed and the related resources needed, thereby impairing its ability to complete the work in an efficient and effective manner and causing carryover to exceed the allowable amounts in the AFWCF annual budgets. Budget estimates could be improved by implementing effective controls to properly consider and address the major factors that caused variations between budgeted and actual carryover amounts. Also, correctly interpreting and applying the criteria in the DOD Financial Management Regulation for determining the allowable carryover amounts would increase the reliability of such estimates. While the carryover metric is a management tool for controlling the amount of work that can carry over from one fiscal year to the next, it can also be used to identify problems in other areas, such as (1) developing workload requirements on the number of hours of depot maintenance work to be performed, (2) establishing personnel levels to perform the depot maintenance work, (3) developing budgets on the amount of new orders for depot maintenance work, and (4) obtaining spare parts to perform depot maintenance work. For example, in fiscal year 2009, AFWCF carryover exceeded the allowable amount by over half a billion dollars. This was largely because the ALCs reduced their personnel by about 2,000 shortly after the Air Force issued a memorandum in November 2007 directing them to do so in anticipation of workload reductions that did not materialize. 
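As context for the allowable-carryover criteria mentioned above: under the DOD Financial Management Regulation, the allowable amount is derived from new orders received and the outlay rates of the customers' financing appropriations. The sketch below is a much-simplified illustration of that structure; the appropriation names, dollar amounts, and outlay rates are hypothetical, and the regulation's actual method contains additional detail.

```python
# Simplified outlay-rate-based allowable-carryover sketch: for each
# customer appropriation financing new orders, the share NOT expected
# to outlay (be worked off) in the first year may carry over.
# Appropriation names, dollar amounts (in millions), and outlay rates
# below are hypothetical; the actual DOD FMR calculation is more detailed.
def allowable_carryover(orders_by_appropriation):
    """orders_by_appropriation maps name -> (new_orders, first_year_outlay_rate)."""
    return sum(amount * (1.0 - rate)
               for amount, rate in orders_by_appropriation.values())

orders = {
    "O&M (1-year)":         (1200.0, 0.60),  # hypothetical outlay rate
    "Procurement (3-year)":  (400.0, 0.35),
}
print(f"Allowable carryover: ${allowable_carryover(orders):,.0f}M")
```

Comparing actual year-end carryover against a figure computed this way is how the report's "exceeded the allowable amount" determinations are framed.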
The Air Force has initiated actions to improve the budgeting and management of carryover. These actions have the potential to improve the accuracy of budgeting for AFWCF carryover. However, the Air Force needs to routinely compare budgeted carryover information with actual results, determine the reasons for the differences, and consider this information in formulating future budgets. We are making five recommendations to the Secretary of Defense to improve the budgeting and management of carryover.

We recommend that the Secretary of Defense direct the Under Secretary of Defense (Comptroller) to take the following action:

Clarify the existing guidance in the DOD Financial Management Regulation that allows Army ordnance activities to use multiyear appropriations in the calculation of allowable carryover to ensure that other working capital fund activities do not use this provision as a basis for their calculation of allowable carryover.

We recommend that the Secretary of Defense direct the Secretary of the Air Force to take the following actions:

Take actions to ensure that requests for exemption from the carryover policy are made in writing and approved by the Director for Revolving Funds as required by the DOD Financial Management Regulation.

Require Air Force headquarters and Air Force Materiel Command to routinely compare budgeted carryover that is over or under the allowable amount to the actual amount to identify the differences and reasons for the differences, and consider these trends in developing future budget estimates on carryover.

Require Air Force headquarters and Air Force Materiel Command to routinely compare budgeted orders to actual orders to identify the differences and reasons for the differences, and consider them in developing future years’ budget estimates on new orders to be received from customers. 
Require Air Force headquarters and Air Force Materiel Command to routinely compare the forecasted workload requirements on the number of hours of depot maintenance work to be performed to the actual number and consider these trends in developing future years’ depot maintenance workload requirements.

DOD provided written comments on a draft of this report. In its comments, DOD concurred with the five recommendations and cited actions planned or under way to address them. For example, DOD stated that the DOD Financial Management Regulation will be updated to clarify that the intent of existing guidance is to permit Army ordnance activities to use multiyear appropriations in the calculation of allowable carryover, and that other working capital fund activities cannot use this provision without prior approval in writing from the OUSD (Comptroller) Director for Revolving Funds. DOD also stated that before DOD direction could be given, Air Force headquarters had already notified AFMC that written approval from the OUSD (Comptroller) Director for Revolving Funds is required for exemptions to the allowable carryover calculation. Further, DOD stated that Air Force headquarters has tasked AFMC to submit its analyses comparing budgeted and actual information on carryover, orders, and workload requirements on the number of hours of depot maintenance work to be performed to improve the budgeting and management of carryover in future years. DOD also stated that it is the Air Force’s intent to include the requirement to perform these analyses in its annual working capital fund budget guidance. We are sending copies of this report to the appropriate congressional committees. We are also sending copies to the Secretary of Defense; the Under Secretary of Defense (Comptroller); and the Secretary of the Air Force. The report also is available at no charge on the GAO Web site at http://www.gao.gov. 
Should you or your staff have any questions concerning this report, please contact Asif A. Khan at (202) 512-9095 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. To determine the extent to which (1) budget information on Air Force depot maintenance carryover for fiscal years 2006 through 2010 approximated actual results and, if not, what actions the Air Force is taking to improve budgeting for carryover, and (2) Air Force depot maintenance actual carryover exceeded the allowable amount of carryover from fiscal years 2006 through 2010 and whether any adjustments were made to the allowable amount, we obtained and analyzed Air Force depot maintenance reports that contained information on budgeted and actual carryover and the allowable amount of carryover for fiscal years 2006 through 2010. We met with responsible officials from Air Force headquarters, Air Force Materiel Command (AFMC), and the Air Logistics Centers (ALC) to determine the reasons for significant variances between budgeted and actual carryover or between actual carryover and the allowable amount. We also met with these officials to discuss the actions the Air Force was taking to improve budgeting and management of carryover. Further, we identified and analyzed any adjustments made by the Air Force that increased the allowable carryover amounts for fiscal years 2009 and 2010. We discussed the adjustments with officials from the Office of the Under Secretary of Defense (Comptroller) and Air Force headquarters to obtain their explanations for making the adjustments and reviewed requirements contained in the DOD Financial Management Regulation for making adjustments to the carryover policy. 
To determine the extent to which there was growth in carryover at the Air Force depot maintenance in-house activities on orders received from customers external to the Air Force Working Capital Fund (AFWCF) and the reasons for the growth, we met with responsible officials from the three ALCs, AFMC, and Air Force headquarters. Based on those discussions, we obtained information on the factors that affected carryover. First, we analyzed planned versus actual workload requirement information to determine whether the Air Force developed reliable forecasted workload requirements. When differences occurred between planned and actual requirements, we met with Air Force headquarters officials to determine the reasons for the differences. Second, we analyzed reports that provided information on personnel levels at the ALCs to determine whether they had reduced their workforce. We met with officials at the three ALCs, AFMC, and Air Force headquarters to discuss the reduction of personnel at the ALCs as well as the subsequent hiring and training of personnel. Third, we analyzed budgeted and actual new orders from fiscal years 2006 through 2010 to determine whether the Air Force underestimated the ALCs’ budgeted orders. When differences occurred between budgeted and actual new orders, we met with Air Force headquarters officials to determine the reasons for these differences. Fourth, we analyzed information on the ALCs’ ability to obtain spare parts to perform work to determine whether parts shortages contributed to carryover. We met with AFMC and ALC officials to discuss parts shortages and what actions the ALCs could take to alleviate the shortages. Fifth, we identified all high-dollar carryover orders received by the ALCs in fiscal years 2009 and 2010 to determine the reasons for the carryover. 
We focused on these orders because (1) carryover exceeded the allowable amount by over half a billion dollars in fiscal year 2009 and (2) fiscal year 2010 orders were the most recent orders at the time of our audit. Financial information in this report was obtained from official Air Force budget documents and accounting reports. To assess the reliability of the data, we (1) reviewed and analyzed the factors used in calculating carryover for the completeness of the elements included in the calculation, (2) interviewed Air Force officials knowledgeable about the carryover data, (3) reviewed GAO reports on depot maintenance activities, and (4) reviewed orders customers submitted to the depots to determine whether they were adequately supported by documentation. In reviewing these orders, we obtained the status of the carryover at the end of the fiscal year. On the basis of the procedures performed, we concluded that these data were sufficiently reliable for the purposes of this report. We performed our work at the headquarters of the Office of the Under Secretary of Defense (Comptroller) and the Office of the Secretary of the Air Force, Washington, D.C.; Air Force Materiel Command, Wright-Patterson Air Force Base, Ohio; the depot maintenance wing at the Oklahoma City Air Logistics Center, Tinker Air Force Base, Oklahoma; the depot maintenance wing at the Ogden Air Logistics Center, Hill Air Force Base, Utah; and the depot maintenance wing at the Warner Robins Air Logistics Center, Robins Air Force Base, Georgia. We conducted this performance audit from July 2010 through July 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix contains specific examples showing the problems experienced by the Air Logistics Centers (ALC) in performing depot maintenance work that contributed to work carrying over from one fiscal year to the next. These problems include (1) a lack of personnel, (2) difficulties encountered in obtaining parts from the Department of Defense supply system, and (3) changing or increasing workload requirements. Most of the examples discussed below involve two or three of the problems cited above. The Oklahoma City ALC repairs Air Force F110-100 engines used on the F-16 Fighting Falcon aircraft. In fiscal year 2008, the Air Force began experiencing delays in the engine program due to personnel and parts shortages that resulted in higher carryover in fiscal years 2009 and 2010. These personnel and parts shortages caused the average number of days necessary to complete an engine from the date the engine was inducted (flow days) to increase from 135 days at the end of fiscal year 2008 to 371 days at the end of fiscal year 2010—a 175 percent increase. The personnel and parts shortages are discussed below. ALC officials told us that personnel shortages occurred because 7 of their 14 experienced mechanics were transferred to another engine repair line beginning in fiscal year 2008 even though orders for repairing the engine did not decline. The ALC transferred the mechanics because (1) the serviceable engines in the Air Force’s worldwide inventory exceeded its wartime requirements and (2) there was an urgent need for the mechanics on another engine repair line. ALC officials also told us that work on the engines was delayed because parts were not always available in the supply system. At the end of fiscal years 2009 and 2010, program office officials estimated that there were about 129 and 137 backorders for parts, respectively. 
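The 175 percent flow-day increase cited above is a straightforward percent-change computation, which can be checked directly:

```python
# Percent-change check for the flow-day figures cited in the report:
# 135 flow days at the end of FY2008 grew to 371 at the end of FY2010.
def percent_increase(old, new):
    """Percent change from old to new, rounded to the nearest whole percent."""
    return round((new - old) / old * 100)

print(percent_increase(135, 371))  # about 175 percent
```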
For example, the production unit could not obtain enough service life extension packages to overhaul the engines. According to officials and documentation, another delay occurred when some of the engines’ front stator assemblies were identified as having excessive wear—a new failure mode. The ALC could not repair the assemblies because it did not have a certified process for repairing the parts. Thus, the ALC negotiated and awarded a contract to a vendor. The process of competitively awarding the contract and having the parts repaired by the vendor created delays in the program during fiscal years 2009 and 2010. ALC personnel now have a certified process for repairing the parts. Due to the personnel and parts shortages, ALC officials stated that they did not complete work on their fiscal year 2009 orders and had not started work on their fiscal year 2010 orders as of November 2010. As a result, carryover was higher than planned at the end of fiscal years 2009 and 2010. Specifically, the ALC planned to carry over $56 million on 19 engines into fiscal year 2010. Instead, it carried over $120 million on 43 engines into fiscal year 2010—about $64 million and 24 engines more than planned. The personnel and parts shortages continued on these engines into fiscal year 2010. The ALC planned to carry over $21 million on 7 engines into fiscal year 2011. Instead, it carried over $81 million on 29 engines into fiscal year 2011—$60 million and 22 engines more than planned. Beginning in fiscal year 2009, the requirement for repairing F108-100 engines used on the KC-135 refueling aircraft grew significantly because the Air Force did not have enough serviceable engines to satisfy its wartime requirement. Thus, the Oklahoma City ALC began expanding its production capacity to produce upwards of 120 engines annually—more than double the 53 engines produced in fiscal year 2008. 
To perform the additional workload, the ALC transferred 7 mechanics from another engine repair line and hired an additional 30 mechanics in fiscal years 2009 and 2010. According to our analysis of data and discussions with F108-100 program and production officials, the increased requirement created significant parts shortages in fiscal year 2010 because the demand for parts needed to repair these engines in some cases exceeded the available inventory. For example, due to a lack of high pressure compressors and fan booster assemblies, work on several engines stopped periodically until replacement parts were obtained. Data showed that the engine program had 160 backorders at the end of fiscal year 2010—almost double the 81 backorders at the end of the previous year. Although the ALC planned to produce 90 engines in fiscal year 2010, production dropped from 85 engines in fiscal year 2009 to 67 in fiscal year 2010, primarily due to the parts shortages, according to Oklahoma City ALC officials. The ALC planned to carry over $11 million on 5 engines into fiscal year 2011. Instead, it carried over $78 million on 37 engines into fiscal year 2011—$67 million more than planned. According to Oklahoma City ALC officials, prior to fiscal year 2006 the Air Force maintained a B-52 fleet of 93 aircraft. To maintain the B-52s, the ALC inducted and performed depot maintenance work on about 21 or 22 aircraft annually and retained a workforce of almost 480 personnel to perform the work. According to a fiscal year 2006 budget document, the Air Force planned to reduce its fleet size to 56 aircraft in order to transform its “total force” into a smaller, more lethal and agile force by eliminating the most expensive, least effective systems. By January 2007, the Air Force had reduced its planned funding for depot-level repairs and maintenance of B-52 aircraft to 13 aircraft annually. Moreover, the B-52 workforce was reduced to just over 300 personnel—a reduction of about 180. 
The National Defense Authorization Act for Fiscal Year 2008 required the Air Force to retain a larger fleet of B-52 aircraft than previously planned. According to production officials, when the Air Force increased its targeted fleet size from 56 to 76 aircraft to comply with congressional direction, the ALC had to increase its workforce to satisfy a higher production requirement of 17 aircraft annually. The workforce shortage, according to these officials, created a backlog of work in the B-52 program that contributed to (1) the average number of days to complete an aircraft increasing by 76 days between fiscal years 2008 and 2010, from 227 to 303, and (2) $73.3 million of the $75.6 million in orders received in the last 4 months of fiscal year 2009 carrying over from fiscal year 2009 into fiscal year 2010. The ALC increased its workforce to 492 personnel in fiscal year 2010 to handle the additional workload. Because of increasing requirements in the C-5 aircraft program, the Warner Robins ALC encountered problems with a lack of parts and personnel to perform the work. The C-5 program’s fiscal year 2010 requirements increased from 677,103 hours to 1,046,434 hours, or an increase of 369,331 hours from the initial budget. However, because of a previous hiring freeze, the C-5 program was understaffed by 145 employees, or about 27 percent of its planned direct labor workforce, at the beginning of fiscal year 2010. Officials informed us that it takes about 2 years before a new hire becomes highly productive. As a result of the lack of parts and personnel associated with the increased requirements, the average flow days increased from 286 days in fiscal year 2009 to 340 days in fiscal year 2010. The following example illustrates how the work was affected by a lack of parts and personnel due to increasing requirements. 
In April 2009, the ALC accepted a $20.4 million order financed with fiscal year 2009 Air Force Reserve operation and maintenance appropriated funds to perform depot maintenance on one C-5 aircraft. Because the aircraft was not inducted until September 30, 2009, the entire $20.4 million carried over into fiscal year 2010. According to ALC officials, aging of the aircraft increased labor and parts requirements, which affected the ALC’s ability to perform the depot maintenance work. This further contributed to the carryover problem and resulted in $2.9 million being carried over into fiscal year 2011. For this C-5 aircraft, labor requirements increased from 47,965 hours to 58,274 hours, or an increase of 10,309 hours, to perform the depot maintenance work. Our review of documentation found five backorders of parts and material associated with the depot maintenance work on this C-5 aircraft. In addition, in order to perform the required depot maintenance work and help minimize the impact of parts shortages on the C-5 program, the ALC either obtained parts from other aircraft (cannibalized) to use on this aircraft or removed parts from this aircraft to use on other aircraft. Documentation showed that a total of 94 parts were either obtained from other aircraft or removed from this aircraft to alleviate parts shortages. Because of increasing requirements in the C-130 program, the Warner Robins ALC encountered problems with a lack of parts and personnel to perform the work. The C-130 program’s fiscal year 2010 requirements increased from 1,277,855 hours to 1,324,476 hours, or an increase of 46,621 hours from the initial budget. However, because of a previous hiring freeze, the C-130 program was understaffed by 186 employees, or about 22 percent of its planned direct labor workforce, at the beginning of fiscal year 2010. Officials informed us that it takes about 2 years before a new hire becomes highly productive. 
The following example illustrates how the work was affected by a lack of parts and personnel due to increasing requirements. In June 2009, the ALC accepted a $4.8 million order financed with fiscal year 2009 Air Force operation and maintenance appropriated funds to perform depot maintenance work on one C-130 aircraft. According to officials, increased requirements in the C-130 program required the ALC to use more labor and parts than planned to perform the depot maintenance work. As a result, the ALC carried over $3.9 million into fiscal year 2010. For this C-130 aircraft, labor requirements increased from 27,959 hours to 30,405 hours, or an increase of 2,446 hours, to perform the depot maintenance work. In addition, in order to perform the required depot maintenance work and help minimize the impact of parts shortages on the C-130 program, the ALC either obtained parts from other aircraft to use on this aircraft or removed parts from this aircraft to use on other aircraft. Documentation showed that a total of 31 parts were either obtained from other aircraft or removed from this aircraft to alleviate parts shortages. In fiscal year 2010, the Ogden ALC performed depot maintenance work on Air Force A-10 aircraft to extend the aircraft’s service life. According to Ogden ALC officials and documentation on the A-10 service life extension program, the A-10 aircraft was originally designed to fly approximately 8,000 hours and be replaced by a newer, more modern aircraft. The aircraft was originally expected to fly through fiscal year 2005; however, the Air Force decided to extend the aircraft’s service life to fiscal year 2028 due to its unique mission capabilities. This decision required the aircraft to undergo a major overhaul, including its wings, fuselage, and fuel cells. 
According to A-10 officials, the lack of a sufficient number of serviceable aircraft wings in Air Force supply created significant program delays in fiscal year 2010 that increased the ALC’s carryover above the planned amount. The officials informed us that they planned to complete work on A-10 aircraft, on average, in about 180 days in fiscal year 2010; however, maintenance on the wings alone took, on average, about 220 days. The ALC planned to carry over $53 million into fiscal year 2011. Instead, it carried over $64 million—$11 million more than planned. In addition to the contact named above, Greg Pugnetti, Assistant Director; Steve Donahue; Keith McDaniel; and Hal Santarelli made key contributions to this report.
Three Air Force depots support combat readiness by providing repair services to keep Air Force units operating worldwide. To the extent that the depots do not complete work at year end, the work and related funding will be carried into the next fiscal year. Carryover is the reported dollar value of work that has been ordered and funded by customers but not completed at the end of the fiscal year. GAO was asked to determine the extent to which (1) budget information on depot maintenance carryover approximated actual results from fiscal years 2006 through 2010 and, if not, what actions are needed to improve budgeting for carryover; (2) depot maintenance carryover exceeded the allowable amount and whether any adjustments were made to the allowable amount; and (3) there was growth in carryover at the depots and the reasons for the growth. To address these objectives, GAO (1) reviewed relevant carryover guidance, (2) obtained and analyzed reported carryover and related data at the Air Logistics Centers (ALC), and (3) interviewed DOD and Air Force officials. The Air Force consistently underestimated the dollar amount of carryover that would exceed the allowable amount in the Air Force Working Capital Fund (AFWCF) budgets from fiscal years 2006 through 2010. In 3 of the 5 years, the budgeted carryover amount underestimated the actual amount by over $250 million. The budget information on carryover is critical since decision makers use this information when reviewing the AFWCF budgets. The Air Force began implementing actions to improve budgeting for the AFWCF, such as including overseas contingency operations-funded orders in the AFWCF fiscal year 2012 budget. These actions have the potential to improve the accuracy of budgeting for the AFWCF, but their success will only be known when budgeted carryover information is compared to actual results. GAO analysis of AFWCF reports showed that actual carryover exceeded the allowable amount in each year from fiscal years 2006 through 2010. 
The allowable amount of carryover is based on the amount of new orders received and the outlay rate of customers' appropriations financing the work. The amount of carryover that exceeded the allowable amount ranged from $4 million to $568 million. Further, the Air Force increased the allowable amount for orders funded with multiyear appropriations by $115 million and $125 million in fiscal years 2009 and 2010, respectively. Without this adjustment, the AFWCF would have exceeded the allowable carryover by corresponding amounts. The DOD regulation on orders funded with multiyear appropriations pertains only to Army ordnance activities that perform a manufacturing function. Therefore, the provision on increasing the allowable amount of carryover for orders funded with multiyear appropriations does not apply to the Air Force. GAO analysis of ALC reports and discussions with Air Force officials identified four reasons for the increase in depot maintenance carryover from $1 billion at the end of fiscal year 2006 to $1.9 billion--nearly 7 months of work--at the end of fiscal year 2010. First, the Air Force underestimated its forecasted workload requirements on the number of hours of work to be performed. Second, because the Air Force believed its depot maintenance workload would decrease, it reduced its workforce in November 2007. While the ALCs reduced their workforce by about 2,000 civilian personnel, the actual workload increased instead of decreasing--thus resulting in personnel shortages. Third, the Air Force budget underestimated the amount of funds on new orders received from customers, and the work performed by the ALCs did not keep pace with the year-to-year increase in funding on new orders. Fourth, the ALCs could not obtain parts when needed to perform repair work, which contributed to the growth of carryover. 
Air Force data showed that the average monthly outstanding backorders for spare parts at the ALCs grew by about 44 percent from fiscal year 2008 to fiscal year 2010. The Air Force is taking action to address these problems but still needs to compare budgeted to actual information, such as the number of hours of work to be performed, and identify the reasons for the differences. GAO makes five recommendations to DOD to improve the budgeting and management of carryover, such as comparing budgeted to actual information on carryover and clarifying DOD guidance on allowable carryover funded with multiyear appropriations. DOD concurred with GAO's recommendations and has actions planned or under way to implement them.
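The carryover definition used in this report (work ordered and funded by customers but not completed at fiscal year end) can be restated as a simple year-end balance. The sketch below assumes carryover equals beginning carryover plus new orders accepted minus revenue earned during the year, a simplification of the reported calculation, and uses hypothetical dollar figures.

```python
# Year-end carryover as a simple balance, per the report's definition:
# work ordered and funded by customers but not completed at fiscal
# year end. Assumes carryover_end = carryover_begin + new_orders -
# revenue_earned; all figures (in millions) are hypothetical.
def year_end_carryover(carryover_begin, new_orders, revenue_earned):
    return carryover_begin + new_orders - revenue_earned

# Example: a depot starts the year with $1,000M of unfinished work,
# accepts $5,500M in new orders, and completes $4,600M of work.
print(year_end_carryover(1000, 5500, 4600))  # 1900
```

The balance makes the growth mechanism visible: when new orders outpace the work performed, as the report found for the ALCs, the year-end figure grows even if the depot completes more work than the year before.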
The Air Force’s F-22A Raptor is the only operational fifth-generation tactical aircraft, incorporating a low observable (stealth) and highly maneuverable airframe, advanced integrated avionics, and a supercruise engine capable of sustained supersonic flight. The F-22A acquisition program began in 1991 with an intended development period of 12 years and a planned quantity of 648 aircraft. The system development and demonstration period eventually spanned 14 years, during which time threats, missions, and some requirements changed. In particular, the F-22A was originally designed to fly primarily air-to-air missions; however, since that time the Air Force has decided to add air-to-ground capabilities to the F-22A. Development costs substantially increased and total quantities were eventually decreased to 188 aircraft. When the final aircraft is delivered in May 2012, the F-22A acquisition program will be complete at a cost of $67.3 billion. In 2003, the Air Force established a modernization program to develop and insert new and enhanced capabilities considered necessary to meet the threat. According to Air Force officials, modernization is defined as a process of upgrading and modifying aircraft with a focus on adding new capabilities. The modernization is now proceeding in four related increments, each with multiple projects. Increment 2, the initial phase of modernization, addressed some requirements deferred from the acquisition program and added some new ground attack capability; it has been fielded. Increment 3.1 began fielding in November 2011 and adds enhanced radar and enhanced air-to-ground attack capabilities. Increment 3.2A is a software upgrade to increase the F-22A’s electronic protection, combat identification, and Link-16 communications and data link capabilities. Increment 3.2B will increase the F-22A’s electronic protection, geo-location, and Intra Flight Data Link (IFDL) capabilities, and adds AIM-9X and AIM-120D missiles. 
In addition to these efforts, in 2006, the Air Force began a Reliability and Maintainability Maturation Program (RAMMP). Although the Air Force does not consider this part of the modernization program, it is integral to making the F-22A weapon system more available, reliable, and maintainable. Since the F-22A’s initial fielding in 2006, maintenance issues have prevented it from achieving reliability and availability requirements, and fleet operating and support (O&S) costs are much higher than projected earlier in the program. The total projected cost of the F-22A modernization program has more than doubled since it started. While the program has completed and fielded some of its planned capabilities, the overall schedule to complete integration and testing of planned capabilities and deliver them to the warfighter has slipped by nearly 7 years. The content, scope, and phasing of planned capabilities also shifted over time with changes in requirements, priorities, and annual funding decisions. Visibility and oversight of the program’s cost and schedule are hampered by a management structure that does not directly track and account for the full cost of specific capability increments. The Air Force plans to separately break out and manage the fourth increment as a major defense acquisition program, which should improve management and oversight. The Air Force is now expected to spend around $11.7 billion to modernize and improve the reliability of the F-22A, compared with the $5.4 billion projected soon after the start of development. Officials underestimated the scope of the total program and the time and money that would eventually be needed to develop and field new capabilities. Contributing factors to this cost growth include (1) changed and added requirements; (2) unexpected expenses for building a support infrastructure; and (3) unplanned efforts to improve aircraft reliability and maintainability.
Program officials also said that instability in modernization funding contributed to some of the cost growth by stretching the time required to complete projects. Figure 1 shows increased cost estimates over time for the modernization program and other related costs. Modernization increments include development and procurement costs directly tied to one of the four increments for acquiring upgraded capabilities. Other modernization and support costs include infrastructure costs for lab support, test operations, program management, retrofit efforts to bring all aircraft to a common configuration, and other efforts integral to supporting the modernization increments. Other improvement costs principally include the RAMMP reliability and maintainability projects and structural repairs needed for the aircraft to achieve its required 8,000-hour service life. At this point, an estimated $5.5 billion of the $11.7 billion has been spent. A future investment of around $6.2 billion remains: $1.3 billion for Increment 3.2B, $3.6 billion for other modernization and support activities, and $1.3 billion for completing the RAMMP and structural repairs. When the F-22A modernization development program began, the Air Force expected to have all current planned capabilities integrated and fielding started by 2010. Now, the final increment is not expected to begin fielding until 2017, 7 years later than initially planned. Air Force officials stated that they underestimated the sheer magnitude of the modernization effort, both in the amount of time required to develop and integrate the capabilities and in the costs to complete the modernization. According to program officials, contributing factors to delays include (1) additional requirements, (2) unexpected problems and delays during testing, and (3) research, development, testing, and evaluation funding fluctuations. Figure 2 compares the initial and latest schedules.
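The funding figures above are internally consistent; as a quick arithmetic cross-check, the reported amounts sum as expected. The short Python sketch below is our illustration only (the variable names are not from the report); the dollar figures, in billions, are taken directly from the text.

```python
# Cross-check of the reported F-22A modernization funding figures
# (all values in billions of dollars, as stated in the report).
spent = 5.5                      # estimated amount spent to date
remaining = {
    "Increment 3.2B": 1.3,
    "Other modernization and support": 3.6,
    "RAMMP and structural repairs": 1.3,
}

future = sum(remaining.values())
print(f"Future investment: ${future:.1f} billion")          # $6.2 billion
print(f"Total program:     ${spent + future:.1f} billion")  # $11.7 billion
```

The three remaining line items sum to the $6.2 billion future investment, which together with the $5.5 billion already spent yields the $11.7 billion total.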
According to Air Force officials, the program currently intends to upgrade 143 aircraft with the full complement of modernized capabilities by fiscal year 2020 and retain 36 aircraft with only Increment 2 capabilities to be used in training. Increment 3.1 is being fielded in fiscal years 2011 to 2016 and Increment 3.2A from fiscal years 2014 to 2016. Increment 3.2B, the last currently planned increment, is expected to field from fiscal year 2017 to fiscal year 2020. Future capability enhancements are expected to follow the current modernization program, but have not been defined. The content, scope, and phasing plan changed over time, contributing to cost and schedule problems. Figure 3 illustrates the changing nature of modernization projects, particularly in the later increments. Some capabilities, such as the Multifunction Advanced Data Link (MADL), have been eliminated because of changes in requirements and immature technology. Some, like the AIM-9X missile, have been added to the program to meet emerging threats. Some required capabilities have been reduced, such as the Geolocate project, which will now field a less-capable version than initially planned. Air Force officials stated that potential new capabilities are analyzed and vetted by evaluating technical maturity and applying cost-as-an-independent-variable principles to determine which to include in the F-22A modernization program. As a result of this evaluation process, certain capabilities have been modified, deferred, added, or eliminated. Most changes affect the final two increments. For example, MADL, which was intended to provide communications interoperability with the F-35 Joint Strike Fighter, was removed from Increment 3.2B. MADL and other deferred efforts, such as the full Small Diameter Bomb capability, may eventually be delivered in future increments yet to be defined.
Tracking and accounting for the full and accurate cost of each modernization increment, and of the individual projects within each increment, are limited by the way the modernization program is structured, funded, and executed. As depicted in figure 4, only 26 percent of total projected costs can be traced directly to the four modernization increments. About 57 percent of total costs go to fund activities that support all the modernization efforts and the overall F-22A program but are not charged to specific increments. These activities include test operations, the building and use of government labs, management activities, retrofit efforts to bring the fleet to a common configuration, and other infrastructure accounts. The remaining 17 percent funds the RAMMP program and structural repairs. While Air Force officials do not consider these efforts part of the funded modernization program, we note that they are needed to improve fleet affordability and achieve the desired aircraft life, and are thus integral to justifying future modernization investments. Program accountability and oversight have been hampered by how the modernization program was established, managed, and funded. As we reported in March 2005, the Air Force embarked on the modernization program without a knowledge-based business case to support the multibillion-dollar investment to significantly change the aircraft’s capabilities and missions. We stated that the modernization program should have been established as an entirely separate acquisition program with a new business case because of the magnitude of the proposed changes. A sound business case would have matched requirements with resources—proven technologies, sufficient engineering capabilities, time, and funding—when undertaking new product development. Information about the schedule and funding was not adequately known at the start of modernization.
Rather than making the new business case to justify and manage the modernization program as a separate major defense acquisition, Air Force officials incorporated it within the existing F-22A acquisition program and commingled funds. Their rationale was that breaking modernization efforts out as a separate program would have delayed the capability. As a result, development funding and infrastructure expenses were added to the existing acquisition program’s baseline. The modernization program proceeded without establishing its own set of acquisition milestones and has not been subject to the same level of scrutiny by senior defense leaders or the performance reporting required of major defense acquisition programs as provided for in DOD acquisition policy. At its discretion, DOD chose to execute it within the baseline F-22A program. In November 2004, defense leaders recognized that the size and importance of the modernization program warranted a higher level of scrutiny. The acting Under Secretary of Defense for Acquisition, Technology and Logistics directed the Air Force to hold separate milestone reviews for the future stages of the modernization program to be consistent with DOD acquisition policy. Under this direction, the current modernization projects would not require formal milestones, but Office of the Secretary of Defense (OSD) oversight would be provided by periodic reviews. In 2007, OSD directed the Air Force to update the F-22A Acquisition Program Baseline to reflect the approved Increments 3.1 and 3.2; however, the Air Force believed that an acquisition strategy report rather than a baseline would provide better insight into funding and schedule details for the modernization increments. Not separating the modernization program from the F-22A program baseline was consistent with how the Air Force had handled modernization programs for prior aircraft.
However, had the Air Force initiated the program under the existing guidelines established by DOD Instruction 5000.02 for managing and implementing major acquisition programs, oversight of the program would have benefited. Under these guidelines, for a Milestone B decision that would allow them to proceed into the Engineering and Manufacturing Development phase, programs are required to have an approved minimum set of Key Performance Parameters included in the Capability Development Document, an approved Acquisition Strategy, an Acquisition Program Baseline, an Analysis of Alternatives, and an Independent Cost Estimate. OSD recently reiterated its requirement for the F-22A to be consistent with DOD policy, and in December 2011, OSD directed the Air Force to establish Increment 3.2B as a separate major defense acquisition program. According to the Air Force, this increment is expected to cost around $1.5 billion. Given the significant slips in schedule experienced by Increments 3.1 and 3.2A, the decision to separately oversee Increment 3.2B is a late but positive change. Increment 3.2B will be reported as its own major program with system development starting in fiscal year 2013. This should improve management, cost visibility, and program oversight. Air Force officials told us that they expect to manage and report all future F-22A modernization programs as separate acquisitions, starting with Increment 3.2B. Testing how well new capabilities perform is ongoing; results to date have been satisfactory, but development and operational testing of the largest and most challenging sets of capabilities have not yet begun. Going forward, major challenges will be developing, integrating, and testing new hardware and software to counter emerging future threats. Other risks are associated with the availability of unique test assets, greater reliance on laboratory ground tests, and relocation of a key F-22A lab that is needed to help support testing of software for the new capabilities.
Parallel efforts to improve F-22A reliability and maintainability are critical to ensuring life-cycle sustainment of the fleet is affordable and to justifying future modernization investments. New F-22A capabilities delivered by the modernization program will be demonstrated through follow-on operational testing and evaluation to assess the upgraded F-22A’s effectiveness and suitability. Testing on the first two increments successfully demonstrated new air-to-ground capabilities. Testing of the third and fourth increments has not begun, and several technical risks remain for these new capabilities. Successful mitigation of these risks is critical to keeping the F-22A’s planned upgrades on schedule and within planned costs. Table 1 shows the current status of F-22A modernization operational testing for each increment. Follow-on operational testing and evaluation for F-22A fighters incorporating Increment 2 capabilities, including assessments of expanded air-to-ground capability and improvements in system suitability, was successfully completed in August 2007. F-22As configured with Increment 2 capabilities were found to be operationally effective in suppressing and destroying fixed enemy air defenses, and also demonstrated successful fixes of deficiencies and weapons integration problems that had caused problems in previous testing. Flight testing demonstrated the ability to employ the Joint Direct Attack Munition (JDAM) at supersonic speeds in a high-threat anti-access environment where stealth capabilities are needed. Without this capability, baseline aircraft were only able to launch JDAMs at fixed targets in lower threat environments and at slower speeds while using target coordinates from ground spotters. Increment 3.1 further enhances the F-22A’s air-to-ground capability by allowing the aircraft to find and locate ground targets with on-board systems, rather than relying on external personnel and platforms for targeting.
Increment 3.1 completed follow-on operational testing in November 2011, a significant delay of 4 years from the original plan due to shortcomings identified with the baseline and upgraded aircraft. In 2009 and 2010, the Director of Operational Test and Evaluation (DOT&E) reported significant stealth-related maintenance issues that lowered operational availability and mission capability rates. F-22A program officials identified technical issues in upgrading radar, navigation, and software that needed to be addressed to meet operational testing requirements. The Air Force began Increment 3.1 operational testing in January 2011, but soon encountered flight delays that persisted from March to September 2011. The entire F-22A fleet was ordered to stand down due to potential problems with the aircraft’s oxygen generation system. Unavailability of the test range and technical problems with ground support equipment also contributed to the lengthy flight delay. The Air Force completed flight testing for Increment 3.1 in November 2011 and expects to release the operational test report in late March 2012. In its 2011 annual report, DOT&E did not identify any significant remaining issues since flights had resumed. DOT&E also approved reducing trials from 16 to 8 and decreasing simulator test trials from 96 to 64. According to program officials, hardware and software issues had been identified and fixed as testing progressed, and test pilots provided very positive feedback on Increment 3.1’s enhancements. Increment 3.2A development began in November 2011 after significant delays. This increment involves updating software to enhance electronic protection and combat identification capabilities so that the F-22A can handle new threats expected in the future. Developmental testing for this increment is expected to start in 2012 and be completed in late 2013. Operational testing and evaluation will follow and is planned to conclude in 2014.
Program officials assessed the Increment 3.2A schedule as having moderate risk. Test aircraft have been operating much longer than planned and were to be replaced by new production aircraft; however, this has not happened due to the substantial reduction in the size of the F-22A fleet. Other risks appear lower. For example, some software for electronic protection and combat identification capabilities has already been developed for the F-35 Joint Strike Fighter. Also, while the Link-16 upgrade will involve a significant amount of development work, program officials consider it to be moderate risk. Increment 3.2B is scheduled to begin Engineering and Manufacturing Development in December 2012, and the decision to enter into production is scheduled for January 2016. Key capability upgrades include integrating the AIM-9X and AIM-120D missiles on the F-22A and upgrading geolocation and electronic protection subsystems. Early requirements analysis determined that AIM-9X integration may be more difficult and take longer than expected, and officials have already begun risk reduction efforts. Overall, software integration is considered to have the highest risk for Increment 3.2B projects, while hardware development is rated as a moderate risk. Program officials believe that the full range of capabilities added in the modernization program can be accommodated within the weight and space limitations of the F-22A aircraft, but this will be a critical consideration in any future modernization plans. The Air Force is seeking ways to reduce the costs of Increments 3.2A and 3.2B by streamlining program activities. Officials want to make more use of developmental tests to also satisfy operational test requirements, allowing the program to identify errors for correction earlier and reducing overall costs by eliminating redundant tests. The program also intends to increase its use of F-22A ground laboratories to substitute for more expensive flight tests.
The F-22A lab infrastructure is an extensive, distributed system of dedicated labs that integrate and certify flight software releases to the field and support F-22A modernization, production, and sustainment activities. However, there are technical risks if lab tests do not fully replicate the performance of actual F-22A aircraft in intended environments. Officials are also expecting to save money by relocating the Raptor Avionics Integration Lab—a critical work site that stimulates sensors for targeting—from Marietta, Georgia, to Ogden Air Logistics Center, Utah, by the summer of 2012. Program officials acknowledge there are some risks in this relocation. For example, unique equipment could be damaged during the move, and experienced lab staff could decide to leave the F-22A program rather than relocate. In addition to capability upgrades, the F-22A budget also funds efforts to address reliability and maintainability deficiencies that have increased support costs and have prevented the F-22A from meeting a key performance requirement. RAMMP is to develop and implement enhancements to increase aircraft availability, make maintenance faster and less costly, and reduce total life-cycle operating and support costs and cost per flying hour. While RAMMP is expected to reduce life-cycle costs over the long term, up-front investments to help realize future cost reductions have increased. The program had planned to spend about $258 million between 2005 and 2011, but actual investments through 2011 were about $528 million. The total RAMMP funding requirement through the year 2023 is now estimated at almost $1.3 billion. Air Force officials attributed part of RAMMP’s increased costs to additional projects and increased labor hours to address corrosion. Keeping the F-22A fleet affordable and meeting required performance measures is critical to sustaining fleet operations over the long term and ensuring the fleet is available in sufficient numbers for required missions.
Projected operating and support costs are much higher than earlier estimates. For example, a 2007 independent estimate by the Air Force Cost Analysis Agency projected a $49,549 cost per flying hour in 2015 (by which time the F-22A was expected to reach full maturity), more than double the $23,282 cost per flying hour estimated in 2005. Air Force officials gave various reasons for sustainment cost increases, including (1) unrealized savings from the F-22A’s performance-based logistics contract; (2) fixed costs that had to be spread over a smaller number of aircraft; and (3) higher than expected costs to refurbish or replace broken parts, including diminishing manufacturing sources. However, the one common contributing factor—and the most significant—is the cost and complexity of maintaining stealth characteristics and restoring aircraft to the required stealth level after flight operations and maintenance. Our recent report found that the number of maintenance personnel required to maintain the F-22A’s specialized stealth exterior has increased, posing a continuing support challenge for this aircraft. This has important implications for the affordability and life-cycle cost estimates for the F-35 Joint Strike Fighter. When it started in 2006, a major goal of RAMMP was to improve F-22A reliability to meet its key performance requirement by the time the fleet reached maturity at 100,000 total flight hours. This performance indicator, known as mean time between maintenance (MTBM), required aircraft in the F-22A fleet to fly an average of 3 hours between maintenance events, excluding routine servicing and inspections. This standard was a key performance requirement in the F-22A acquisition contract, but the fleet has never been able to meet it. Currently, the MTBM achieved by the operational test aircraft with improvements is 2.47 hours.
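Two of the figures above lend themselves to a quick arithmetic check: the growth in projected cost per flying hour, and the MTBM shortfall against the 3-hour requirement. The sketch below is our illustration only; the function name and the sample maintenance-event counts are hypothetical, while the dollar and hour figures come from the text.

```python
# Cost-per-flying-hour growth: 2005 estimate vs. 2007 independent estimate
# (dollars per flying hour, as reported).
cpfh_2005 = 23_282
cpfh_2007 = 49_549
print(f"Growth factor: {cpfh_2007 / cpfh_2005:.2f}x")   # 2.13x, i.e. more than double

# Mean time between maintenance (MTBM): average flight hours between
# maintenance events, excluding routine servicing and inspections.
def mtbm(flight_hours: float, maintenance_events: int) -> float:
    return flight_hours / maintenance_events

# Hypothetical sample: 2,470 hours over 1,000 events reproduces the
# 2.47-hour figure reported for the operational test aircraft
# (the contractual requirement was 3.0 hours).
print(mtbm(2470, 1000))   # 2.47
```

The 2007 estimate is roughly 2.13 times the 2005 figure, consistent with the "more than double" characterization above.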
In April 2011, the Joint Requirements Oversight Council approved changing the main reliability metric from MTBM to another performance indicator, known as materiel availability. Officials believed the MTBM indicator was hard to define and measure, was unrealistic, and did not accurately reflect the fleet’s readiness to perform missions. Materiel availability is defined as the percentage of the fleet available to perform assigned missions at any given time. This standard calls for the F-22A fleet to achieve increasing levels of availability between 2011 and 2015 toward the final goal of 70.6 percent. Last year, the F-22A fleet achieved a 55.5 percent materiel availability rate. Stealth-related maintenance, system component reliability problems, and a lack of spare engines were factors contributing to the fleet not achieving the goal. However, program officials expect the F-22A fleet to achieve the final availability goal by 2015 after the full fielding of reliability improvements. The Air Force reported that operational test aircraft integrated with the current reliability improvements have achieved 78 percent availability; officials anticipate significant gains by the overall fleet once reliability improvements are installed on all F-22A aircraft. Keeping the F-22A as the world’s most advanced stealth fighter requires the Air Force to counter changing threats, as well as ensure the F-22A fleet is affordable, reliable, and sustainable. In response to changing threats, officials began a modernization program to add new missions and capabilities while fixing problems and deficiencies that were carried over from the original development program. However, the F-22A modernization program has not had the management rigor or oversight on par with the $11.7 billion investment it entails.
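The materiel availability metric described above is a straightforward fleet-level ratio. The sketch below is our illustration only; the function name is ours, and the sample fleet counts are hypothetical, chosen to land near the reported rate, while the goal and achieved percentages come from the text.

```python
# Materiel availability: percentage of the F-22A fleet able to perform
# assigned missions at a given time (final goal: 70.6 percent).
def materiel_availability(available_aircraft: int, total_fleet: int) -> float:
    return 100.0 * available_aircraft / total_fleet

# Hypothetical counts: 100 of 180 aircraft available gives 55.6 percent,
# close to the 55.5 percent rate the fleet achieved last year.
rate = materiel_availability(100, 180)
print(f"Achieved:  {rate:.1f} percent")   # 55.6 percent
goal = 70.6
print(f"Shortfall: {goal - rate:.1f} points versus the final goal")
```

Under these assumed counts, the fleet sits roughly 15 percentage points below the 70.6 percent goal, which the program expects to close by 2015 through the reliability improvements described above.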
The program was not well defined when it began in 2003, has had fluid scope and cost, and has been challenging from an oversight perspective because it was blended into the baseline F-22A program rather than being managed separately. As early as 2004, OSD began discussing the need to manage future modernization increments as separate acquisition programs. While modernization has been underway, the Air Force has found it necessary to invest in improved reliability and availability of the F-22A through the RAMMP program. The original reliability requirement was not met and has since been changed to another indicator. Meanwhile, O&S costs have been significantly higher than planned, with maintenance of the aircraft’s stealth levels being particularly demanding. The lessons learned on the maintenance of the stealthy F-22A may have implications for the F-35 Joint Strike Fighter. Splitting out Increment 3.2B as a separate major defense acquisition program indicates that OSD is reasserting its role in the F-22A program. This is beneficial for oversight in light of the significant decisions and investments yet to come for the program. Increment 3.2B requires around $1.3 billion, while completing the RAMMP program, ongoing modernization projects, and other improvements will require an estimated $4.9 billion—a total future investment of around $6.2 billion. The program is highly dependent on a single contractor, whose responsibilities encompass managing the development and production of the F-22A; development, production, and retrofit of modernization; execution of the RAMMP program; and life-cycle support of the F-22A fleet, including supply and maintenance. Finally, the Air Force informed us that it expects to manage future modernization increments as separate acquisitions. However, given the approach the Air Force has taken to date on this and other modernization programs, there is little assurance that this will occur without specific OSD direction.
As new and enhanced capabilities are proposed and vetted beyond Increment 3.2B in the F-22A modernization program, we recommend that the Under Secretary of Defense for Acquisition, Technology and Logistics evaluate those capabilities in accordance with DOD policy and statutory criteria to determine if they should be established as separate major defense acquisition programs, each with its own milestones, business case, and cost baseline that includes all applicable direct and indirect support costs required to complete the program. DOD provided us written comments on a draft of this report. The comments appear in appendix II. DOD also provided technical comments that were incorporated as appropriate. During the agency comment period, DOD requested clarification regarding our recommendation. As a result, we revised the recommendation to more clearly state that the Under Secretary of Defense for Acquisition, Technology and Logistics will evaluate future planned F-22A modernization capabilities to determine if those meeting DOD policy and statutory criteria should be established as separate major acquisition programs. DOD concurred with the revised recommendation. We are sending copies of this report to interested congressional committees, the Secretary of Defense, the Secretary of the Air Force, and the Under Secretary of Defense for Acquisition, Technology and Logistics. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
To determine the extent to which F-22A modernization met cost and schedule goals and operational requirements, we reviewed documentation of program plans and status, including cost estimates; briefings by program office officials to Department of Defense (DOD) and Air Force oversight officials; annual Selected Acquisition Reports; Defense Acquisition Executive Summary reports; Director, Operational Test and Evaluation (DOT&E) annual test result summaries; Defense Contract Management Agency (DCMA) program assessment reports; acquisition plans; operational requirements documentation; contract documentation; schedules; and other data. We reviewed documentation of key decisions made on F-22A modernization, including acquisition decision memoranda and Joint Requirements Oversight Council memoranda. We reviewed F-22A cost performance report data, contract cost data, and budgetary documents. In assessing the achievement of cost goals by the F-22A modernization and other improvement efforts, we compared the program cost estimate from 2004, shortly before development began for Increment 2, with the latest available estimates. We determined what changes in planned capabilities occurred after modernization efforts began. In assessing the F-22A modernization’s achievement of schedule goals and delivery of planned capabilities, we identified progress made in delivering new capabilities in accordance with plans, and determined what factors contributed to schedule changes. We interviewed program office officials having knowledge of factors driving cost estimate and schedule changes over time. We also interviewed officials from the F-22A Program Office, DOD test organizations, and Air Combat Command to obtain their views on progress; ongoing concerns and actions taken to address them; and future plans to complete F-22A development, procurement, and operational testing.
We used the latest cost data available during the period of our review; however, the F-22A program office was preparing a new cost estimate for F-22A modernization, and the estimated costs of increments beyond Increment 3.2B had not yet been determined or added to this estimate. To determine what progress has been made in completing developmental and operational testing, and resolving system deficiencies, we reviewed DOT&E annual test report summaries and briefings to DOD oversight and requirements officials. We reviewed summaries of recent operational test results provided by Air Force test officials and program risk information related to developmental and operational testing for F-22A modernization. We reviewed documentation of program decisions, including acquisition decision memoranda. We reviewed data from prior GAO reviews on operations and support costs for the F-22A and other stealth aircraft, Selected Acquisition Reports, Defense Acquisition Executive Summary reports, contract documents, and program cost estimates. In assessing progress made in operational testing, we compared initial and current operational test plans to determine if significant changes were made after testing began. We identified relevant factors contributing to testing delays. In assessing the resolution of system deficiencies, we identified the number of successful test points flown during operational testing and identified what changes were made in requirements after operational testing began. We determined what key risks and issues remain that could affect developmental and operational testing in the future. We identified issues contributing to increased operations and sustainment costs and to decreased aircraft availability, and actions taken by the F-22A program to mitigate them.
We interviewed officials from the F-22A Program Office, DOD test organizations, and Air Combat Command to obtain their views on progress, ongoing concerns and actions taken to address them, and future plans to complete developmental and operational testing. At the time of our review, the final follow-on operational test and evaluation results for Increment 3.1 were not yet available, and other test information we had requested was not readily available within the reporting period for this report due to its high classification level. Accordingly, our analysis of actual results and data was somewhat constrained, and our reporting was limited to providing summary-level observations due to the classification level of some of the data. Notwithstanding these constraints, DOD officials gave us access to sufficient information to make informed judgments on the matters covered in this report. In performing our work, we obtained information and interviewed officials from the F-22A Program Office, Wright-Patterson Air Force Base, Ohio; Air Combat Command, Langley Air Force Base, Virginia; Office of the Director, Operational Test & Evaluation, Office of the Secretary of Defense, Arlington, Virginia; and the Air Force Operational Test and Evaluation Center, Kirtland Air Force Base, New Mexico. We assessed the reliability of DOD and F-22A contractor data by (1) obtaining and reviewing related information from various sources, and (2) interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report. We conducted this performance audit from June 2011 to March 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Bruce Fairbairn, Assistant Director; Marvin Bonner; Sean Seales; Marie Ahearn; Ana Aviles; Laura Greifner; Travis Masters; and Roxanna Sun made key contributions to this report. Tactical Aircraft: Comparison of F-22A and Legacy Fighter Modernization Programs. GAO-12-524. Washington, D.C.: April 26, 2012. Joint Strike Fighter: Restructuring Places Program on Firmer Footing, but Progress Still Lags. GAO-11-325. Washington, D.C.: April 7, 2011. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-11-233SP. Washington, D.C.: March 29, 2011. Tactical Aircraft: DOD’s Ability to Meet Future Requirements Is Uncertain, with Key Analyses Needed to Inform Upcoming Investment Decisions. GAO-10-789. Washington, D.C.: July 29, 2010. Defense Management: DOD Needs Better Information and Guidance to More Effectively Manage and Reduce Operating and Support Costs of Major Weapon Systems. GAO-10-717. Washington, D.C.: July 20, 2010. Defense Contracting: DOD Has Enhanced Insight into Undefinitized Contract Action Use, but Management at Local Commands Needs Improvement. GAO-10-299. Washington, D.C.: January 28, 2010. Defense Acquisitions: Measuring the Value of DOD’s Weapon Programs Requires Starting with Realistic Baselines. GAO-09-543T. Washington, D.C.: April 1, 2009. GAO Cost Estimating and Assessment Guide. GAO-09-3SP. Washington, D.C.: March 2, 2009. Defense Acquisitions: A Knowledge-Based Funding Approach Could Improve Major Weapon System Program Outcomes. GAO-08-619. Washington, D.C.: July 2, 2008. Tactical Aircraft: DOD Needs a Joint and Integrated Investment Strategy. GAO-07-415. Washington, D.C.: April 2, 2007. Tactical Aircraft: DOD Should Present a New F-22A Business Case before Making Further Investments. GAO-06-455R. Washington, D.C.: April 26, 2006. 
Defense Acquisitions: Air Force Still Needs Business Case to Support F/A-22 Quantities and Increased Capabilities. GAO-05-304. Washington, D.C.: March 15, 2005.
The Air Force currently plans to spend $11.7 billion to modernize and improve the reliability of the F-22A, its fifth generation air superiority fighter. Originally designed to counter air threats posed by the former Soviet Union, the F-22A became the focus of post-Cold War efforts to add new missions and capabilities, including improved air-to-air and robust air-to-ground attack capabilities. In 2003, the Air Force established the F-22A modernization program to develop and insert new capabilities in four increments. GAO was asked to evaluate (1) cost and schedule outcomes and (2) testing results and risks going forward in the F-22A modernization program and related efforts. To do this, GAO examined the program's budgets and schedule estimates over time, discussed any changes with program officials, and reviewed progress and results from developmental and operational testing as well as plans to mitigate risks and resolve system deficiencies. The total projected cost of the F-22A modernization program and related reliability and maintainability improvements has more than doubled since the program started, from $5.4 billion to $11.7 billion, and the schedule for delivering full capabilities has slipped 7 years, from 2010 to 2017. The content, scope, and phasing of planned capabilities also shifted over time with changes in requirements, priorities, and annual funding decisions. Visibility and oversight of the program's cost and schedule are hampered by a management structure that does not track and account for the full cost of specific capability increments.
Substantial infrastructure costs for labs, testing, management, and other activities directly support modernization but are not charged to its projects. The Air Force plans to manage its fourth modernization increment as a separate major acquisition program, as defined in DOD policy and statutory requirements. Testing of new capabilities to ensure operational effectiveness and suitability is ongoing. Results to date have been satisfactory, but developmental and operational testing of the largest and most challenging sets of capabilities has not yet begun. Going forward, major challenges will be developing, integrating, and testing new hardware and software to counter emerging future threats. Other risks are associated with greater reliance on laboratory ground tests and with relocating an F-22A lab needed to conduct software testing. While modernization is under way, the Air Force has undertaken parallel efforts to improve F-22A reliability and maintainability to ensure that life-cycle sustainment of the fleet is affordable and to justify future modernization investments. But the fleet has not been able to meet a key reliability requirement, since changed, and operating and support costs are much greater than earlier estimated. GAO recommends that DOD evaluate capabilities to determine if future F-22A modernization efforts meeting DOD policy and statutory requirements should be established as separate major acquisition programs. DOD concurred with the recommendation.
Twelve western states assess royalties on hardrock mining operations on state lands. In addition, each of these states, except Oregon, assesses taxes that function like a royalty, which we refer to as functional royalties, on hardrock mining operations on private, state, and federal lands. To aid understanding, royalties, including functional royalties, can be grouped into four types:

Unit-based royalties are typically assessed as a dollar rate per quantity or weight of mineral produced or extracted and do not allow deductions for mining costs.

Gross revenue royalties are typically assessed as a percentage of the value of the mineral extracted and do not allow deductions for mining costs.

Net smelter returns royalties are assessed as a percentage of the value of the mineral, with deductions allowed for costs associated with transporting and processing the mineral (typically referred to as mill, smelter, or treatment costs); costs associated with extracting the mineral are not deductible.

Net proceeds royalties are assessed as a percentage of the net proceeds (or net profit) of the sale of the mineral, with deductions for a broad set of mining costs. The particular deductions allowed vary widely from state to state but may include extraction, processing, transportation, and administrative costs, such as for capital, marketing, and insurance.

Royalties, including functional royalties, often differ depending on land ownership and the mineral being extracted. For example, for private mining operations conducted on federal, state, or private lands, Arizona assesses a net proceeds functional royalty of 1.25 percent on gold mining operations and an additional gross revenue royalty of at least 2 percent for gold mining operations on state lands. Nine of the 12 states assess different types of royalties for different types of minerals.
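The practical difference among the four royalty types above is which costs may be deducted before the rate is applied. The following is a minimal sketch, using entirely hypothetical rates and mine-year figures (not any state's actual assessment or any specific mine's data), of how the same operation can yield very different royalty amounts under each type:

```python
# Hypothetical mine-year figures (illustrative only, not actual state data).
ounces_produced = 10_000                  # quantity of mineral extracted
gross_value = 12_000_000.0                # sale value of the mineral, in dollars
transport_and_processing = 2_500_000.0    # mill, smelter, and treatment costs
extraction_and_admin = 5_000_000.0        # extraction, capital, marketing, etc.

# Unit-based: a dollar rate per unit produced; no cost deductions allowed.
unit_royalty = 0.50 * ounces_produced     # hypothetical $0.50 per ounce

# Gross revenue: a percentage of the value extracted; no cost deductions.
gross_royalty = 0.02 * gross_value        # hypothetical 2 percent rate

# Net smelter returns: the same percentage, but transport and processing
# costs are deductible first; extraction costs are not.
nsr_royalty = 0.02 * (gross_value - transport_and_processing)

# Net proceeds: a percentage of net profit after a broad set of deductions.
net_proceeds_royalty = 0.02 * (gross_value
                               - transport_and_processing
                               - extraction_and_admin)

for name, amount in [("unit-based", unit_royalty),
                     ("gross revenue", gross_royalty),
                     ("net smelter returns", nsr_royalty),
                     ("net proceeds", net_proceeds_royalty)]:
    print(f"{name}: ${amount:,.0f}")
```

Under these illustrative numbers, the gross revenue royalty is the largest and the net proceeds royalty the smallest, which is why the allowable deductions can matter as much as the nominal rate.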
For example, Wyoming employs three different functional royalties for all lands: (1) net smelter returns for uranium, (2) a different net smelter returns royalty for trona, a mineral used in the production of glass, and (3) gross revenue for all other minerals. Furthermore, the royalties the states assess often differ in their allowable exclusions, deductions, and limitations. For example, in Colorado, a functional royalty on metallic mining excludes gross incomes below $19 million, whereas in Montana a functional royalty on metallic mining applies to all mining operations after the first $250,000 of revenue. Finally, the actual amount assessed for a particular mine may depend not only on the type of royalty, its rate, and its exclusions, but also on such factors as the mineral's processing requirements, mineral markets, mine efficiency, and the mine's location relative to markets. Table 1 shows the types of royalties, including functional royalties, that the 12 western states assess on all lands, including federal, state, and private lands, as well as the royalties assessed only on state lands. It has been difficult to determine the number of abandoned hardrock mine sites in the 12 western states and South Dakota, in part because there is no generally accepted definition for a hardrock mine site. The six studies we reviewed relied on the different definitions that the states used, and estimates varied widely from study to study. Furthermore, BLM and the Forest Service have had difficulty determining the number of abandoned hardrock mines on their lands. In September 2007, the agencies reported an estimated 100,000 abandoned mine sites, but we found problems with this estimate. For example, the Forest Service had reported that it had approximately 39,000 abandoned hardrock mine sites on its lands. However, this estimate includes a substantial number of non-hardrock mines, such as coal mines, and sites that are not on Forest Service land.
At our request, the Forest Service provided a revised estimate of the number of abandoned hardrock mine sites on its lands, excluding coal or other non-hardrock sites. According to this estimate, the Forest Service may have about 29,000 abandoned hardrock mine sites on its lands. That said, we still have concerns about the accuracy of the Forest Service’s recent estimate because it identified a large number of sites with “undetermined” ownership, and therefore these sites may not all be on Forest Service lands. BLM has also acknowledged that its estimate of abandoned hardrock mine sites on its lands may not be accurate because it includes sites on its lands that are of unknown or mixed ownership (state, private, and federal) and a few coal sites. In addition, BLM officials said that the agency’s field offices used a variety of methods to identify sites in the early 1980s, and the extent and quality of these efforts varied greatly. For example, they estimated that only about 20 percent of BLM land has been surveyed in Arizona. Furthermore, BLM officials said that the agency focuses more on identifying sites closer to human habitation and recreational areas than on identifying more remote sites, such as in the desert. Table 2 shows the Forest Service’s and BLM’s most recent available estimates of abandoned mine sites on their lands. To estimate abandoned hardrock mine sites in the 12 western states and South Dakota, we developed a standard definition for these mine sites. In developing this definition, we consulted with mining experts at the National Association of Abandoned Mine Land Programs; the Interstate Mining Compact Commission; and the Colorado Department of Natural Resources, Division of Reclamation, Mining and Safety, Office of Active and Inactive Mines. 
We defined an abandoned hardrock mine site as a site that includes all associated facilities, structures, improvements, and disturbances at a distinct location associated with activities to support a past operation, including prospecting, exploration, uncovering, drilling, discovery, mine development, excavation, extraction, or processing of mineral deposits locatable under the general mining laws. We also asked the states to estimate the number of features at these sites that pose physical safety hazards and the number of sites with environmental degradation. Using this definition, the states reported the number of abandoned sites within their borders, and we calculated that there are at least 161,000 abandoned hardrock mine sites in these states. At these sites, on the basis of state data, we estimated that at least 332,000 features may pose physical safety hazards, such as open shafts or unstable or decayed mine structures. Furthermore, we estimated that at least 33,000 sites have degraded the environment by, for example, contaminating surface water and groundwater or leaving arsenic-contaminated tailings piles. Table 3 shows our estimate of the number of abandoned hardrock mine sites in the 12 western states and South Dakota, the number of features that pose significant public health and safety hazards, and the number of sites with environmental degradation. As of November 2007, hardrock mining operators had provided financial assurances valued at approximately $982 million to guarantee the reclamation costs for 1,463 hardrock mining operations on BLM land in 11 western states, according to BLM's Bond Review Report. The report also indicates that 52 of the 1,463 hardrock mining operations had inadequate financial assurances, about $28 million less than needed to fully cover estimated reclamation costs.
We determined, however, that the financial assurances for these 52 operations should be more accurately reported as about $61 million less than needed to fully cover estimated reclamation costs. Table 4 shows total operations by state, the number of operations with inadequate financial assurances, the financial assurances required, BLM's calculation of the shortfall in assurances, and our estimate of the shortfall, as of November 2007. The $33 million difference between our estimated shortfall of nearly $61 million and BLM's estimated shortfall of nearly $28 million occurs because BLM calculated its shortfall by comparing the total value of financial assurances in place with the total estimated reclamation costs. This calculation approach has the effect of offsetting the shortfalls of some operations with the greater-than-required financial assurances of other operations. However, financial assurances that are greater than the amount required for an operation cannot be transferred to an operation with inadequate financial assurances. In contrast, we totaled the difference between the financial assurances in place for an operation and the financial assurances needed for that operation to determine the actual shortfall for each of the 52 operations for which BLM had determined that financial assurances were inadequate. BLM's approach to determining the adequacy of financial assurances is not useful because it does not clearly lay out the extent to which financial assurances are inadequate. For example, in California, BLM reported that, statewide, the financial assurances in place were $1.5 million greater than required as of November 2007, suggesting that reclamation costs are being more than fully covered. However, according to our analysis of only those California operations with inadequate financial assurances, the financial assurances in place were nearly $440,000 less than needed to fully cover reclamation costs.
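The difference between the two calculation approaches is purely arithmetic and can be shown with a small sketch (the figures below are hypothetical, not BLM's actual bond data):

```python
# Hypothetical per-operation figures (illustrative only, not BLM bond data).
# Each tuple: (financial assurance in place, assurance required), in dollars.
operations = [
    (900_000, 1_200_000),    # under-assured by 300,000
    (500_000, 450_000),      # over-assured by 50,000
    (2_000_000, 2_100_000),  # under-assured by 100,000
    (750_000, 600_000),      # over-assured by 150,000
]

# Aggregate netting (the approach GAO found misleading): surpluses at some
# operations offset shortfalls at others, even though excess assurance
# cannot be transferred from one operation to another.
netted_shortfall = max(0, sum(required - in_place
                              for in_place, required in operations))

# Per-operation totaling (GAO's approach): sum only the shortfalls of
# operations whose assurances are inadequate.
actual_shortfall = sum(required - in_place
                       for in_place, required in operations
                       if in_place < required)

print(netted_shortfall)   # 200000 -- understates the uncovered amount
print(actual_shortfall)   # 400000 -- the amount actually uncovered
```

Netting across operations understates the true gap whenever some operations hold more assurance than required, because that excess cannot be applied to another operation's shortfall.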
BLM officials agreed that it would be valuable for the Bond Review Report to report the dollar value of the difference between the financial assurances in place and those required for operations whose financial assurances are inadequate, and they have taken steps to modify LR2000 accordingly. BLM officials said that financial assurances may appear inadequate in the Bond Review Report when expansions or other changes in an operation have occurred, requiring an increase in the amount of the financial assurance; when BLM's estimate of reclamation costs has increased and there is a delay between when BLM enters the new estimate into LR2000 and when the operator provides the additional bond amount; or when BLM has delayed updating its case records in LR2000. Conversely, hardrock mining operators may have financial assurances greater than required for a number of reasons; for example, they may increase their financial assurances because they anticipate expanding their hardrock operations. In addition, according to the Bond Review Report, there are about 2.4 times as many notice-level operations (generally, operations that cause surface disturbance on 5 acres or less) as plan-level operations (generally, operations that disturb more than 5 acres) on BLM land: 1,033 notice-level operations and 430 plan-level operations. However, about 99 percent of the value of financial assurances is for plan-level operations, while 1 percent is for notice-level operations. While financial assurances were inadequate for both notice- and plan-level operations, a greater percentage of plan-level operations had inadequate financial assurances than did notice-level operations (6.7 percent and 2.2 percent, respectively). Finally, over one-third of all hardrock operations and about 84 percent of the value of all financial assurances are for hardrock mining operations located in Nevada. Mr. Chairman, this concludes my prepared statement.
I would be happy to respond to any questions that you or Members of the Committee may have. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. For further information about this testimony, please contact Robin M. Nazzaro, Director, Natural Resources and Environment (202) 512-3841 or [email protected]. Key contributors to this testimony were Andrea Wamstad Brown (Assistant Director); Elizabeth Beardsley; Casey L. Brown; Kristen Sullivan Massey; Rebecca Shea; and Carol Herrnstadt Shulman. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The General Mining Act of 1872 helped open the West by allowing individuals to obtain exclusive rights to mine billions of dollars' worth of gold, silver, and other hardrock (locatable) minerals from federal lands without having to pay a federal royalty. However, western states charge royalties so that they share in the proceeds from various hardrock minerals extracted from their lands. For years, some mining operators did not reclaim land used in their mining operations, creating environmental and physical safety hazards. To curb further growth in the number of abandoned hardrock mines on federal lands, the Department of the Interior's Bureau of Land Management (BLM) in 1981 began requiring mining operators to reclaim BLM land disturbed by these operations, and in 2001 began requiring operators to provide financial assurances to cover reclamation costs before beginning exploration or mining operations. This testimony focuses on the (1) royalties states charge, (2) number of abandoned hardrock mine sites and hazards, and (3) value and coverage of financial assurances operators use to guarantee reclamation costs. It is based on two GAO reports: Hardrock Mining: Information on Abandoned Mines and Value and Coverage of Financial Assurances on BLM Land, GAO-08-574T (Mar. 12, 2008), and Hardrock Mining: Information on State Royalties and Trends in Imports and Exports, GAO-08-849R (July 21, 2008). The 12 western states that GAO reviewed, including Alaska, assess royalties on hardrock mining operations on state lands. In addition, each of these states, except Oregon, assesses taxes that function like a royalty, which GAO refers to as functional royalties, on hardrock mining operations on private, state, and federal lands. The royalties the states assess often differ depending on land ownership and the mineral being extracted.
For example, for private mining operations conducted on federal, state, or private land, Arizona assesses a functional royalty of 1.25 percent of net revenue on gold mining operations and an additional royalty of at least 2 percent of gross value for gold mining operations on state lands. The actual amount assessed for a particular mine may depend not only on the type of royalty, its rate, and its exclusions, but also on other factors, such as the mine's location relative to markets. Over the past 10 years, estimates of the number of abandoned hardrock mine sites in the 12 western states reviewed, as well as South Dakota, have varied widely, in part because there is no generally accepted definition for a hardrock mine site. Using a consistent definition that GAO provided, these states reported the number of abandoned sites within their borders. On the basis of these data, GAO estimated that there are at least 161,000 abandoned hardrock mine sites in these states; the sites include at least 332,000 features that may pose physical safety hazards, and at least 33,000 of the sites have degraded the environment. According to BLM data, as of November 2007, hardrock mining operators had provided financial assurances worth approximately $982 million to guarantee reclamation costs for 1,463 hardrock mining operations on BLM land, and 52 of these operations had financial assurances valued at about $28 million less than needed to fully cover estimated reclamation costs. However, GAO determined that the assurances for these 52 operations should be more accurately reported as about $61 million less than needed for full coverage. The $33 million difference between GAO's and BLM's estimated shortfalls occurs because BLM calculated its shortfall by comparing the total value of financial assurances in place with the total estimated reclamation costs. This approach effectively offsets the shortfalls of some operations with the higher-than-needed financial assurances of others.
However, the financial assurances that are greater than the amount required for an operation cannot be transferred to an operation with inadequate financial assurances. In contrast, GAO totaled the difference between the financial assurances in place for an operation and the financial assurances needed for that operation to determine the actual shortfall for each of the 52 operations for which BLM had determined that financial assurances were inadequate. BLM has taken steps to correct the reporting problem GAO identified.
Moisture is the primary factor leading to indoor mold growth. To grow indoors, mold also needs temperatures above freezing levels—from 32 to 130 degrees Fahrenheit—and organic matter. The nutrients upon which mold feeds are provided by house dust and many surface and construction materials, such as wallpapers, textiles, wood, paints, and glues. Because the appropriate temperature and necessary nutrients are common in homes, mold growth can rapidly occur indoors when excessive moisture or water accumulates as a result of, for example, floods and other natural disasters; building design or construction flaws; and poor building maintenance practices, such as not repairing leaking plumbing. Moist conditions indoors may also foster the growth of other organisms capable of causing adverse health effects, including bacteria, cockroaches, and dust mites. Mold growth may be particularly severe following natural disasters such as hurricanes and flooding. The extent of the flooding after Hurricanes Katrina and Rita in 2005 led to conditions supporting widespread mold growth. Unlike other hurricane-impacted areas, where residents could access their buildings relatively quickly after the flood event, many residents in New Orleans were unable to access buildings for several weeks because of prolonged flood inundation. According to a CDC survey, an estimated 46 percent of homes in New Orleans and surrounding areas had visible mold growth. Widespread indoor mold contamination can cause adverse health effects in returning residents and make it more difficult to rehabilitate houses for reoccupation. For example, in 2006 the Army Corps of Engineers noted that because of mold problems caused by the extensive flooding, many residences that did not require demolition would nonetheless need to be gutted—stripping the walls down to the studs—before they could be renovated. 
The Institute of Medicine has identified four possible levels of connection between indoor mold and adverse health effects: sufficient evidence of a causal relationship, sufficient evidence of an association, limited or suggestive evidence of an association, and inadequate or insufficient evidence to determine whether an association exists. According to HHS, establishing a causal relationship with adequate certainty requires several types of evidence, including (1) epidemiologic associations, (2) experimental exposure in animals or humans that leads to the symptoms and signs of the disease in question, and (3) reduction in exposure that leads to reduction in the symptoms and signs of the disease. HHS officials said that more data are needed to establish a causative association between exposure to mold and some illnesses because the vast majority of the studies conducted to date have been only epidemiologic. The federal government has responded to the uncertainty surrounding the health effects of exposure to indoor mold by, among other things, sponsoring reviews of the available scientific evidence. Committees of the National Academies’ Institute of Medicine have produced two reports in the past several years that relate to the health effects of exposure to indoor mold. For a 2000 report requested by EPA, Clearing the Air: Asthma and Indoor Air Exposures, the Institute of Medicine assembled a multidisciplinary committee to examine the relevant research pertaining to asthma and the indoor environment, including, among many other issues, the possible impact of indoor mold on asthma prevalence. For its 2004 report requested by the CDC, Damp Indoor Spaces and Health, another Institute of Medicine committee reviewed the scientific literature to determine the connections among damp indoor spaces, microorganisms such as mold, and a variety of human health effects. 
This committee used a uniform set of categories to summarize its conclusions regarding the evidence of association between various health outcomes and exposure to indoor dampness or the presence of mold or other agents in damp indoor environments. While research in this field continues to evolve, both reports made recommendations for additional research related to mold and other areas that remain relevant—that is, the data gaps have not been resolved. In addition to sponsoring reviews of the available scientific evidence, federal agencies have the opportunity to share information on various aspects of indoor air quality, including mold, through the Federal Interagency Committee on Indoor Air Quality. Title IV of the Superfund Amendments and Reauthorization Act of 1986 directed EPA, among other things, to disseminate the results of its indoor air quality research program and establish an advisory committee consisting of other federal agencies. EPA serves as the executive secretary of the Federal Interagency Committee on Indoor Air Quality, which fulfills this advisory role. The committee is co-chaired by EPA, the Department of Energy (DOE), the Consumer Product Safety Commission, the National Institute for Occupational Safety and Health (NIOSH), and the Occupational Safety and Health Administration (OSHA). Other federal departments and agencies participate in the committee as members. In 1991, we recommended that the Administrator, EPA, work with other members of the committee to clearly define in a charter the roles and responsibilities of the agencies participating in the committee in order to strengthen interagency coordination of indoor air research. However, EPA has not implemented this recommendation. Although federal agencies are engaged in a number of efforts to address indoor mold, there are no federal or generally accepted health-based standards for safe levels of mold, its components, or its products in the air or on surfaces. 
In fact, neither EPA nor OSHA has established health-based standards for airborne concentrations of mold or mold spores indoors. Similarly, NIOSH has not set recommended exposure limits for indoor mold or mold spores. Further, according to EPA officials, the lack of federal regulation of airborne concentrations of mold indoors is largely attributable to the insufficiency of the data needed to establish a scientifically defensible health-based standard. EPA officials also emphasized that the agency lacks the authority to establish airborne concentration limits for mold indoors. Legislation to require EPA to take action with respect to indoor mold has been introduced in Congress in the past but was not enacted. For example, the proposed United States Toxic Mold Safety and Prevention Act, most recently introduced in Congress in 2005, would have directed EPA to promulgate standards for preventing, detecting, and remediating indoor mold growth, among other things. The presence of mold in homes and workplaces has led to numerous lawsuits alleging personal injury or property damage. To obtain a judgment that mold has caused personal injury, an individual must persuade the court that the type of mold at issue is capable of causing the individual's condition and that the mold actually caused the condition in the specific case. Litigants generally use expert witness testimony in an attempt to prove or disprove these points in court. Courts use different standards to judge whether such testimony is admissible. In some states, courts will admit such testimony only if it accords with the generally accepted consensus of the relevant scientific community. In other states and in the federal courts, judges independently evaluate the reliability of the evidence by weighing several factors, only one of which focuses on the views of the relevant scientific community. Many state courts use a mixture of these two methods.
Insurance companies are frequently defendants in mold litigation, and in response to the rise in cases early in the decade, many began changing their policies to specifically exclude mold-related injuries and property damage from coverage. For example, many insurance policies now contain language stating that the insurance company "will not pay for loss or damage caused by or resulting from ... rust, corrosion, fungus, decay," and other conditions. As of 2006, the insurance regulatory agencies in 40 states had approved mold-related exclusions. Partly in response to a significant increase in mold litigation in the early part of this decade, states began enacting legislation to address various aspects of the mold problem. For example, in 2001 California enacted the Toxic Mold Protection Act, which requires the state's Department of Health Services to establish permissible mold exposure limits for indoor air. In addition, in 2003, Texas passed legislation requiring a mold remediation contractor to certify to a homeowner that the mold contamination identified for the project had been remediated as outlined in the mold management plan or remediation protocol. Further, the Texas law requires owners selling property to provide buyers with copies of each mold remediation certificate issued for the property during the preceding 5 years. Examples of other state legislative responses to mold issues include laws requiring landlords to disclose to tenants information about the health hazards associated with exposure to indoor mold; prohibiting litigation against a real estate agent acting on behalf of a buyer or seller who has truthfully disclosed any known material defects; establishing licensing requirements for individuals involved with mold assessment and remediation; and creating a group to study the effects of toxic mold.
While the 2004 Institute of Medicine report, and reviews of the scientific literature published subsequently, have found evidence associating indoor mold with certain adverse health effects, the evidence supporting an association between mold and other health effects remains less certain. Two factors, in particular, pose challenges for those attempting to determine the health effects of exposure to indoor mold: valid quantitative methods of measuring exposure are lacking, and a wide variety of other potential disease-causing agents are likely to be present in damp indoor environments, along with mold. According to the Institute of Medicine and recent reviews of the scientific literature, further research is required to advance the understanding of the relationships between dampness, indoor mold, and human health. The 2004 Institute of Medicine report, Damp Indoor Spaces and Health, found sufficient evidence of an association between exposure to indoor mold and certain adverse health effects—that is, an association between the agent and the outcome has been observed in studies in which chance, bias, and confounding factors can be ruled out with reasonable confidence. These health effects include upper respiratory tract symptoms, including nasal congestion, sneezing, runny or itchy nose, and throat irritation; exacerbation of pre-existing asthma; hypersensitivity pneumonitis in susceptible persons; and fungal colonization or opportunistic infections in immune-compromised persons. Of these health effects, the upper respiratory tract symptoms associated with allergic rhinitis are the most common, according to the American Academy of Pediatrics. 
In addition, the association between indoor mold and exacerbation of asthma symptoms is a particularly significant public health concern because asthma is the most common chronic illness among children in the United States and one of the most common chronic illnesses overall, according to the Institute of Medicine’s 2000 report, Clearing the Air: Asthma and Indoor Air Exposures. Importantly, mold can affect certain populations disproportionately. For example, the 2004 Institute of Medicine report found sufficient evidence of an association between exposure to the mold genus Aspergillus and serious respiratory infections in people with severely compromised immune systems (such as chemotherapy patients and organ transplant recipients). This report also found sufficient evidence of an association between exposure to indoor mold and hypersensitivity pneumonitis—a relatively rare but potentially serious allergic reaction—in susceptible persons. In addition to these more established health effects, this report also found limited or suggestive evidence of an association between indoor mold and lower respiratory illness (for example, bronchitis and pneumonia) in otherwise healthy children. Most of the 20 reviews of the scientific literature published from 2005 to 2007 that we examined generally agreed with the conclusions of the 2004 Institute of Medicine report. However, two of the reviews characterized the relationship between exposure to indoor mold and certain of the above health effects more strongly. The American Academy of Pediatrics stated in its 2006 report that epidemiologic studies consistently support causal relationships between exposure to mold and upper respiratory tract symptoms and exacerbation of pre-existing asthma. The American Academy of Pediatrics also said that epidemiologic studies support a causal relationship between exposure to mold and hypersensitivity pneumonitis in susceptible persons. 
Moreover, a 2007 meta-analysis sponsored by EPA and DOE found that building dampness and mold are associated with increases of 30 percent to 50 percent in a variety of health outcomes, such as upper respiratory tract symptoms, wheeze, and cough. The authors concluded that these associations strongly suggest these adverse health effects are caused by dampness-related exposures. According to the 2004 Institute of Medicine report, the evidence of an association between exposure to indoor mold and a variety of other health effects, however, is inadequate or insufficient—that is, the available studies are of insufficient quality, consistency, or statistical power to permit a conclusion regarding the presence of an association. The health effects for which there is inadequate or insufficient evidence of an association with indoor mold include acute idiopathic pulmonary hemorrhage in infants; airflow obstruction in otherwise-healthy persons; cancer; chronic obstructive pulmonary disease; inhalation fevers not related to occupational exposures; lower respiratory illness in otherwise-healthy adults; mucous membrane irritation syndrome; rheumatologic and other immune diseases; shortness of breath; and skin symptoms. Most of the recent reviews of the literature we examined generally concurred with these Institute of Medicine conclusions as well, although a few found a somewhat stronger relationship between indoor mold and certain of the health effects listed above. For example, a 2007 review concluded that dampness and exposure to indoor mold can exacerbate or may cause shortness of breath, among other health effects. In addition, other reviews differed in their conclusions regarding the link between exposure to indoor mold and acute idiopathic pulmonary hemorrhage in infants, the sudden onset of pulmonary hemorrhage in a previously healthy infant. 
This condition was reported among a group of infants from the same part of Cleveland, Ohio, in the 1990s and attributed by some researchers to exposure to indoor mold. Five of the reviews we examined contained conclusions about acute idiopathic pulmonary hemorrhage in infants and children. Two concluded that mold has not been proven to cause this condition. However, a third review—the American Academy of Pediatrics 2006 report—said that although a causal relationship has not been firmly established, a variety of studies have provided some evidence that such a relationship is plausible. The fourth review said that the association between acute idiopathic pulmonary hemorrhage in infants and children and mold is strong enough to justify removing them from moldy environments or cleaning up these spaces, and the fifth review reiterated this recommendation. Some of the health effects for which the evidence remains unclear (for example, fatigue and acute idiopathic pulmonary hemorrhage in infants) have been attributed to reactions to toxins, or “mycotoxins,” that can be produced by certain types of mold that grow indoors. The reviews we examined were largely consistent in their interpretations of the evidence for the role of mycotoxins in relation to adverse health effects. The Institute of Medicine reported in 2004 that (1) exposure to mycotoxins can occur via inhalation, contact with the skin, and ingestion of contaminated food and (2) research on Stachybotrys chartarum (a species of indoor mold that can produce mycotoxins) suggests that effects in humans may be biologically plausible. However, the report also noted that the effects of chronic inhalation of mycotoxins require further study and that additional research must confirm the observations on Stachybotrys chartarum before a more definitive conclusion can be drawn. 
Among the more recent reviews we examined that specifically addressed mycotoxins, five reached a similar conclusion—that is, that the current evidence is inconclusive or limited. However, one review suggested that it is likely that mycotoxins play some role in building-related disease, including exacerbation of pre-existing asthma. On the other hand, another recent review cast doubt on the health effects of mycotoxins in one set of circumstances—specifically, the review concluded that it was improbable for mycotoxins to cause negative health effects through a toxic mechanism when individuals inhale mycotoxins in nonoccupational settings (such as homes). This review, however, explicitly stated that this conclusion did not address adverse health effects of mycotoxins that may be caused by immune-mediated mechanisms or stem from exposure in occupational settings or by ingestion. According to the 2004 Institute of Medicine report, two key issues largely contribute to the scientific data gaps regarding the relationship between mold and adverse health effects: (1) valid quantitative methods of measuring exposure are lacking, and (2) a wide variety of potential disease-causing agents are likely to be present in damp indoor environments, which makes it difficult to link health effects with specific agents. Without standardized, quantitative methods to measure exposure, it is difficult to compare exposure levels across studies or between individuals with and without symptoms of adverse health effects. This makes it challenging to draw valid and consistent conclusions on the health effects of indoor mold. No single or standardized method to measure the magnitude of exposure to mold has been developed. Consequently, researchers use a variety of methods to assess exposure, each of which has advantages and disadvantages. 
For example, most studies use an indirect method to assess exposure—occupant questionnaires about the presence of dampness or mold in a building—according to the 2004 Institute of Medicine report. Other exposure assessment methods include personal monitoring, which involves measuring agent concentrations with monitors carried by individuals, and quantifying biologic response markers in bodily fluids. Another method of exposure assessment is to collect environmental samples of indoor air, dust, or building materials such as wallboard and quantitatively analyze the presence of mold (or its components or products) in the samples. In addition to the various methods that can be used to collect and analyze samples, environmental sampling for mold is complicated by the fact that concentrations of mold (particularly in the air) can vary over time and across an indoor environment. Moreover, many newly developed sampling methods are not commercially available or well-validated. The second issue contributing to limitations in the understanding of the relationship between mold and a number of adverse health effects is the variety of potential disease-causing agents—including many species of mold and other biological agents, such as bacteria or dust mites—that are likely to be present in damp indoor environments. The number of such agents makes it difficult to know which ones are specifically responsible for the adverse health effects attributed to these environments. For example, of the approximately 1 million species of mold, there are about 200 species of mold to which humans are routinely exposed, although not all of these are commonly identified in indoor environments, and not all types pose the same hazards to human health. The mold genus Alternaria, for instance, which has been found in moldy building materials, has been linked to severe asthma. 
Furthermore, several different components or products of mold, such as mycotoxins, may function as disease-causing agents in indoor environments. The release of these mold components or products varies with environmental and other factors, and the individual roles they may play in adverse health effects are not fully understood. People are also exposed to mold in outdoor environments, where the concentrations, while they vary considerably, are usually higher than those found indoors. While the specific species of mold that grow indoors may differ from those found outdoors, the potential for outdoor exposure further complicates efforts to determine the relationship between adverse health effects and indoor exposure to mold. In addition to mold, damp indoor areas can support other biological agents that may result in adverse health effects, including bacteria, dust mites, cockroaches, and rodents. Dust mites, for example, are known to cause the development of asthma. Damp conditions may also lead to potentially harmful chemical emissions from building materials and furnishings. For example, excessive indoor humidity may increase the release of formaldehyde, a probable human carcinogen, from building materials such as particle board. Exposure to formaldehyde has been linked to some of the same health effects that have been attributed to indoor mold, such as wheezing, coughing, and exacerbation of asthma symptoms, as well as more severe effects. The 2000 and 2004 Institute of Medicine reports and other recent reviews of the scientific literature have identified numerous areas where further research is required to advance the understanding of the relationships between dampness, indoor mold, and human health. Specifically, the health effects of the components and products of mold require further study. 
The effects of mycotoxins in particular remain poorly understood, partly because most of the toxicologic studies on mycotoxins have examined the acute (or short-term) effects of high levels of exposure to mycotoxins in small populations of animals. To address these limitations, the 2004 Institute of Medicine report recommended that studies be conducted to help determine, among other things, (1) the effects of chronic (or long-term) exposures to mycotoxins via inhalation and (2) the dose of mycotoxins required to cause adverse health effects in humans. This report also recommended research on a particular species of toxin-producing mold, Stachybotrys chartarum, and on the relationship between mold and dampness and acute idiopathic pulmonary hemorrhage in infants. In its 2000 report, the Institute of Medicine also called for additional research related to mold particles as allergens and research to evaluate the association of dampness and mold with the development of asthma. As can be expected as research progresses over time, some of the more recent reviews we examined made additional or more specific research recommendations related to mycotoxins and other components and products of mold. A number of lawsuits alleging serious health effects as a result of exposure to indoor mold have involved exposure to mycotoxins, underscoring the need for additional research in this area. In addition, research to develop, improve, and standardize methods for assessing exposure to mold is a high priority for understanding the health effects of mold, according to the Institute of Medicine’s 2004 report. Specifically, the report recommends additional research to validate and refine existing exposure assessment methods for mold, including procedures for collecting and analyzing environmental samples. Such research would facilitate comparison of results within and across epidemiological studies and help better define the relationships between mold and adverse health effects. 
In addition, improved methods for measuring exposure to specific components of mold would help efforts to study the roles of these agents in causing adverse health effects. The 2004 Institute of Medicine report also identified the need for additional research on mold mitigation strategies and measures to prevent or reduce dampness, the growth of indoor mold, and exposure to mold. These strategies could include remediation activities, building renovation, and changes in building operation or maintenance practices. For example, research is needed to develop standardized, effective cleanup methods to mitigate mold growth after flooding and other catastrophic water events. In addition, the 2004 Institute of Medicine report recommended research to assess how effectively personal protective equipment, such as gloves, safety goggles, and respirators, reduces exposure to mold during mitigation activities. Research in these areas is important to help ensure that (1) mold mitigation actually improves unhealthy conditions in indoor environments and (2) protective equipment used during remediation successfully reduces the amount of mold to which workers and building occupants are exposed. Federal research activities address gaps in scientific data on the health effects of indoor mold identified by the Institute of Medicine to varying degrees, with a large number focusing on two areas in particular—asthma and measurement methods. The impact of this research portfolio may be reduced, however, by limited planning and coordination. EPA, HHS, and HUD officials reported that they were conducting or sponsoring 65 mold research activities as of October 1, 2007: HHS reported 43 ongoing research activities; and EPA and HUD reported 15 and 7, respectively. The Institute of Medicine’s 2000 and 2004 reports identified a number of gaps in the research needed to more clearly delineate any association between exposure to indoor mold and a number of adverse health effects. 
As shown in appendix III, these gaps may be grouped into 15 broad categories. Agency officials reported that most of the individual federal research activities address 2 or more of the 15 data gaps. Collectively, the agencies indicated that their research activities address all of the 15 data gaps to varying extents—the number of research activities addressing individual gaps ranged from 1 to 32 (see app. III). Moreover, EPA, HHS, and HUD officials reported that 75 percent of their mold research activities address at least one of five particular data gaps—three of which relate to asthma, and two of which relate to sampling and measurement methods. These five data gaps are as follows:
- Identify environmental factors that either lead to the development of asthma or precipitate symptoms in subjects who already have asthma, using good measures of fungal exposure.
- Determine the association of dampness problems with asthma development and symptoms by researching the causative agents (e.g., molds, dust mite allergens) and documenting the relationship between dampness and allergen exposure.
- Advance the understanding of specific bioaerosols (small airborne particles) in relation to asthma by studying the epidemiology of building-related asthma in problem buildings where there are excess chest complaints among occupants, in comparison to buildings without such complaints, or by providing exposure-response studies of many building environments and populations.
- Improve sampling and exposure assessment methods for mold and its components (for example, by conducting research that will lead to standardization of protocols for sample collection, transport, and analysis or by developing or improving methods of personal airborne exposure measurement, DNA-based technology, or assays for bioaerosols).
- Develop standardized metrics and protocols to assess the nature, severity, and extent of dampness and the effectiveness of specific measures for dampness reduction. 
Overall, agency officials reported that 38 of the ongoing projects—or nearly 60 percent—address asthma. In this respect, the federal mold research portfolio for EPA, HHS, and HUD, ongoing as of October 1, 2007, appears to be weighted toward addressing research gaps identified in the Institute of Medicine’s 2000 report, Clearing the Air: Asthma and Indoor Air Exposures. The research activities federal officials reported as addressing one or more of the asthma-related research gaps include studies using animals. For example, one focuses on gestational exposure in mice to mold extracts and the effect this exposure has on the development of allergy or asthma in adult life; one assesses in mice the relative allergenic potency of molds statistically more common in water-damaged homes; and another is developing animal models (using mice and rats) to evaluate the pulmonary inflammatory response to mold products collected from indoor dust samples from buildings where people have reported respiratory symptoms and from buildings with no reported health complaints. Other asthma-related research activities are aimed, for example, at better understanding the relationship between respiratory symptoms and exposure to water-damaged homes in post-hurricane New Orleans and at evaluating the respiratory health of staff and students attending schools that expose them to varying degrees of dampness. (Summaries of the 65 research activities conducted or sponsored by EPA, HHS, and HUD are provided in a supplement to this report—see GAO-08-984SP.) Many of the projects that address asthma also address sampling and measurement methods. Research that provides high-quality, consistent methodologies for sampling and measuring mold is essential to progress in evaluating the health effects of exposure to mold. 
For example, the Institute of Medicine reported in 2004 that evidence of an association between exposure to mold and 15 specific health effects is inadequate or insufficient to permit a conclusion regarding the presence of an association because of the insufficient quality, consistency, or statistical power of the available studies. This report, Damp Indoor Spaces and Health, identified the need for standardized metrics and protocols. The Institute’s earlier 2000 report that focused on asthma had previously identified the need to improve exposure assessment methods for mold. Overall, EPA, HHS, and HUD reported 36 research activities that address sampling and exposure assessment methods or standardized metrics and protocols. While a number of the research activities address these measurement methods as part of investigations focusing on specific health effects or other issues related to indoor mold, several focus solely or primarily on developing measurement methods. For example, HHS’s NIOSH is working to develop biomarkers of mold exposure to lead to objective, standardized measures of exposure to support reproducible and comparable analyses in health studies, including large-scale epidemiological studies. HHS’s National Institute of Environmental Health Sciences has three separate studies: (1) evaluating available biomarkers of exposure and effect for specific molds that may cause systemic toxicity, (2) developing tests for allergenic mold species and toxin-producing molds found in water-damaged homes that can be used to objectively assess mold exposure in buildings, and (3) testing the feasibility of a flexible and low-cost measurement method for allergens, including mold. 
Another example of ongoing research focusing on mold identification is HHS’s CDC work to develop and validate DNA-based methods for identification and fingerprinting of medically important molds because “the absence of a robust species/strain identification scheme has hampered the rapid identification of novel species and the associated burden of disease.” EPA and HUD also reported working on DNA-based assessment methods. Specifically, agency officials reported ongoing work using, in part, a DNA-based method for analyzing 36 species of mold that EPA developed, patented, and has licensed commercial laboratories to perform. Working with HUD, EPA used this method to develop a standard sampling and analytic process that then led to the development of the Environmental Relative Moldiness Index (ERMI) scale for U.S. homes. According to EPA, this index provides a simple, objective evaluation of the mold burden in a home. EPA reported ongoing epidemiological studies using the ERMI scale aimed at determining if the ERMI values can be used to understand the risk of asthma or related respiratory symptoms. While most of the 65 ongoing research activities involving indoor mold are addressing asthma and critical data gaps in sampling and measurement methods identified in the 2000 and 2004 Institute of Medicine reports, some other important data gaps identified in the 2004 report are being studied to a lesser degree than the gaps identified in the 2000 report. Notably, of the 15 data gaps identified in these reports, agency officials reported that only 9 research activities address, to some extent, the following 3 gaps identified in the 2004 report. Research the relationship between mold and dampness and acute pulmonary hemorrhage or hemosiderosis in infants. Determine the effects of human exposure to Stachybotrys chartarum in indoor environments. 
Determine, for mycotoxins, the dose required to cause adverse health effects in humans via inhalation and skin (dermal) exposure; techniques for detecting and quantifying mycotoxins in tissues; or the effects of long-term (chronic) exposures to mycotoxins via inhalation. Officials from EPA, HHS, and HUD reported only one research activity examining the relationship between mold and dampness and acute pulmonary hemorrhage or hemosiderosis in infants—a rare but serious health condition whose relation to exposure to indoor mold remains unsettled, as discussed earlier. This research is aimed at developing quantitative biomarkers for the toxin-producing mold species Stachybotrys chartarum—a mold that has been implicated in cases of acute pulmonary hemorrhage in infants—to facilitate epidemiological and other studies examining mold-related health effects. Sponsored by HHS’s National Institute of Environmental Health Sciences, this research will support but does not directly address the 2004 Institute of Medicine’s recommendation for research on the relationship between mold and dampness and acute pulmonary hemorrhage in infants. Specifically, the Institute of Medicine report concluded that the role of Stachybotrys chartarum in cases of acute idiopathic pulmonary hemorrhage in infants that had been studied remained controversial and encouraged HHS’s CDC to pursue surveillance and additional research on the issue to resolve outstanding questions because this condition has serious health consequences. The Institute of Medicine further stated that epidemiologic and case studies should take a broad-based approach to gather and evaluate information on exposures and other factors that would help identify the causes of acute idiopathic pulmonary hemorrhage in infants, including dampness and agents associated with damp indoor environments and environmental tobacco smoke, among others. 
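The ERMI scale described a few paragraphs above condenses DNA-based concentration measurements for 36 mold species into a single score. As a rough illustration of how such an index works, the published form sums log-transformed concentrations of water-damage-associated species (group 1) and subtracts the corresponding sum for common background species (group 2). The sketch below assumes that two-group log-sum form; the concentration floor, species counts, and sample values are illustrative assumptions, not EPA's actual reference data or groupings.

```python
import math

def ermi(group1_conc, group2_conc):
    """Compute an ERMI-style score from per-species mold concentrations
    (e.g., spore equivalents per milligram of dust).

    Assumed form: sum of log10 concentrations for water-damage-associated
    species (group 1) minus the same sum for common background species
    (group 2). A floor of 1 is applied before the log so that absent or
    undetected species contribute 0 (an assumption for this sketch).
    """
    s1 = sum(math.log10(max(c, 1)) for c in group1_conc)
    s2 = sum(math.log10(max(c, 1)) for c in group2_conc)
    return s1 - s2

# Illustrative (hypothetical) concentrations: a heavy group 1 burden
# relative to background species yields a higher, "moldier" score.
print(round(ermi([100, 1000, 10], [10, 10]), 2))  # prints 4.0
```

In this form, each detected species contributes on a log scale, so the score reflects orders of magnitude of contamination rather than raw counts, which keeps a single very abundant species from dominating the index.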
According to CDC officials, the agency is not currently conducting either epidemiological or case studies on acute pulmonary hemorrhage in infants. Five research activities that federal agencies reported were addressing the toxin-producing mold species Stachybotrys chartarum were: part of two studies on asthma; a study to develop tests for allergenic mold species and toxin-producing molds found in water-damaged homes and a study to develop quantitative biomarkers to assist epidemiological and other research examining mold-related health effects (both discussed above as also addressing other data gaps); and a follow-up study analyzing archived serum and house dust samples for Stachybotrys chartarum and related mycotoxins in the context of the clinical symptom profiles previously gathered on the study participants. The research gap on the health effects of exposure to mycotoxins—toxins that can be produced by certain types of mold and may potentially cause adverse health effects—is being addressed to some extent by four research activities, according to agency officials. One of the activities will assess the potential for molds found in damp or water-damaged buildings to cause nervous system or systemic toxicity. A second activity aims to develop improved sensors for detecting mycotoxins in contaminated food and feed to support proper remedial actions. A third activity is using an animal model to understand the disease pathogenesis of hypersensitivity pneumonitis—a relatively rare but potentially serious allergic reaction in susceptible persons that can, in its chronic form, result in permanent lung damage. Lastly, a fourth activity is a study of the mechanistic indicators of childhood asthma that uses air, biologic and clinical measures as well as molecular biology, chemistry, and gene technologies to identify factors that affect individual susceptibility to asthmatic responses. 
EPA reported that while this study is not directed at mold per se, the secondary data being collected could address some other research activities that the Institute of Medicine reports identified as relating to sampling and exposure assessment and mycotoxins, among others. Finally, EPA and HHS reported they had completed 42 mold-related research activities between January 1, 2005, and September 30, 2007. In general, these activities address topics such as asthma and sampling and measurement methods, reflected in the portfolio of agencies’ ongoing research activities. Information on the recently completed research activities is provided in a supplement to this report (see GAO-08-984SP). While the information on research activities relating to the health effects of exposure to indoor mold provides some insight into the extent to which federal agencies are addressing scientific data gaps identified by the Institute of Medicine in 2000 and 2004, the extent to which these ongoing research activities will effectively advance scientific knowledge in these areas is not clear. Specifically, the research is not guided by an overarching strategic plan or entity that would help agencies work together to identify their research priorities on the health effects of mold. Instead, agencies generally determine independently which research activities they will support using a variety of criteria. This lack of clearly articulated, common research goals is exacerbated by the limited intra- and inter-agency planning and coordination of research activities among federal agencies. Specific information that highlights planning and coordination limitations follows. Selection criteria for research the agencies sponsor are not always linked to identified data gaps. Several EPA, HHS, and HUD officials indicated that selection of priorities for research can be based on various considerations, including agency expertise in a particular area or input from external stakeholders. 
For example, both HHS and HUD officials noted that ideas for research priorities can come from former grantees. A key planning document that several EPA officials reported consulting is now outdated. Specifically, the agency’s 2005 Program Needs for Indoor Environments Research document, which outlines the agency’s research needs for the indoor environment and mold, among other topics, reflects input from the Institute of Medicine’s 2000 report but not the more recent 2004 report, which also identified a number of important data gaps. EPA officials told us that the agency’s research related to asthma and mold’s health effects has been a priority, in part, because this topic was identified in the 2000 Institute of Medicine report, Clearing the Air: Asthma and Indoor Air Exposures. Some officials stated that the 2004 Institute of Medicine report on indoor mold has not influenced their research priorities on this topic. While officials at HHS’s NIOSH reported that the Institute of Medicine’s 2004 report had a “major impact” on what indoor environmental quality research their institute conducts, HHS officials from two of the National Institutes of Health noted that this report did not affect their institutes’ internal priorities in this area. One official stated that while the publication of this report did not change any of their internal priorities, it may have encouraged external interest in mold research. The process that NIH uses to fund outside research may also limit the extent to which identified data gaps are addressed. Specifically, federal officials from three different NIH institutes that sponsored 29 of the 65 ongoing research activities as of October 1, 2007, reported that 19 were unsolicited—that is, they were initiated by investigators outside the institutes. Most NIH-funded research is initiated by such investigators. 
These investigators submitted research proposals that were of interest to them and thus were not necessarily responsive to specific agency priorities. Along these lines, officials at one institute said they generally fund indoor mold research only because of outside investigators’ interest. Unsolicited proposals are ranked for funding through a rigorous peer-review process for, among other things, scientific merit and the significance of the research. While the specific topic of the research is considered in light of its potential impact on public health during peer review, NIH officials said that specific gaps identified in the Institute of Medicine’s report may well have a lower significance relative to the three institutes’ many other scientific priorities. That is, while the three institutes do solicit research on areas considered to be priorities, studies on the health effects of exposure to indoor mold have generally not been in this category. Less than half of the agencies’ 65 ongoing research activities are being coordinated, either within or outside their agencies. Specifically, in responding to our survey of ongoing research activities involving the health effects of indoor mold, EPA, HHS, and HUD reported that 28 of their 65 research activities are being coordinated (see fig. 1). In other work, we identified practices that agencies should use when coordinating their activities, including (1) defining and articulating a common outcome, (2) identifying and addressing needs by leveraging each other’s resources, and (3) agreeing on agency roles and responsibilities. Especially when agencies are conducting research activities addressing the same data gap, coordination is important to ensure that inappropriate duplication of effort does not occur and to best leverage limited federal resources. Even in these cases, however, a significant number of activities are not being coordinated. 
For example, of the 32 EPA, HHS, and HUD research activities seeking to identify which environmental factors, such as mold, contribute to the development or exacerbation of asthma, federal officials reported that 18 activities are not being coordinated. Similarly, agencies are not coordinating on 22 of 36 research activities related to sampling and measurement methods. Further, the coordination activities reported by federal officials vary widely. In some cases, the federal officials we surveyed reported internal and external coordination on a specific research activity. For example, an EPA official noted that his unit conducted one of its research activities in conjunction with another unit within the agency, provided updates regarding the activity to another unit, and collaborated with another federal agency to write papers based on this research. Coordination was more limited in other cases. Specifically, in many cases, research activities were only coordinated within the agency—and often, with only one other unit within the agency. For example, one NIOSH official reported that, for one activity, his unit coordinated with another unit within NIOSH by supplying certain instruments. Importantly, while agencies sometimes coordinate on individual research activities, we did not identify any sustained efforts to coordinate agencies’ indoor mold research priorities. In the few instances in which officials reported that they coordinated with others on research priorities, it appeared that these partnerships did not specifically address mold-related priorities. For example, while EPA officials told us that they recently met with officials from HHS’s CDC to discuss mutual research opportunities related to the indoor environment, these meetings did not address mold research priorities. Federal agencies are not using the existing Federal Interagency Committee on Indoor Air Quality as a forum to coordinate their research activities on indoor mold. 
As discussed earlier, EPA serves as the executive secretary of the Federal Interagency Committee on Indoor Air Quality. We found that the committee addresses federal research activities on indoor air quality on an informal basis. For example, our analysis of the minutes of the 11 committee meetings from February 2005 to February 2008 shows that agency priorities related to indoor air quality research, which could include research on mold, were discussed only a few times. In one case, EPA officials described how their agency had developed its research needs on indoor environments, which it published in a document later in 2005 titled Program Needs for Indoor Environments Research. In this case, EPA was not seeking input from other agencies on research needs and priorities but rather was informing other agencies of decisions EPA had made. Moreover, EPA, HHS, and HUD officials who participate in committee meetings told us that they had not discussed or sought input on their agency’s mold-related research priorities during committee meetings. Further, according to committee meeting minutes, the information agency officials share at committee meetings regarding their mold research is limited to describing selected ongoing activities and issues related to their funding. When mold-related research was discussed during the 3-year period we reviewed, it was usually to provide an update on the status of some individual research projects. In several instances, officials also used the meetings to advertise the availability of funding for research on indoor air quality issues, which could include research on mold, or to announce the funding of mold-related research. Currently, the committee holds 2-1/2-hour meetings in person and by conference call three times a year that interested parties outside the federal government can access. The agendas for the meetings are based on input to EPA from member and nonmember agencies that propose topics they would like to discuss. 
According to officials from one of the participating agencies, the Consumer Product Safety Commission, the Federal Interagency Committee on Indoor Air Quality had more substantive discussions in the past on research projects, funding, and which research priorities needed to be addressed than it does now. The role of the Federal Interagency Committee on Indoor Air Quality has changed over time. Established in response to congressional committee direction in 1983, the committee, according to an EPA report, was to (1) coordinate federal indoor air quality research; (2) provide for liaison and the exchange of information on indoor air quality research among federal agencies, and with state and local governments, the private sector, the general public, and the research community; and (3) develop federal responses to indoor air quality issues. According to a 1988 report on the structure and operation of the committee, the committee comprised 16 member agencies and was co-chaired by EPA, the Consumer Product Safety Commission, DOE, and HHS. This report noted that considerable agreement existed among member agencies that the primary role of the committee was to coordinate federal indoor air activities. Further, coordination activities were specified to include joint project planning and implementation; contributions to and review of member agency indoor plans, reports, and publications; communication on technical and nontechnical issues and activities; and advising on, and fostering multiagency participation in, indoor air program and research activities of individual agencies. The committee met quarterly and had standing work groups covering indoor air quality research areas to address a diverse range of indoor air quality research issues, such as radon, formaldehyde, and allergens and pathogens (which include molds). The work groups, which are no longer active, were to coordinate research activities in these areas and identify future indoor air quality research. 
EPA used the committee to coordinate air quality research and assist in implementing the indoor air quality research and development program established by Congress in 1986. For example, in 1989 and 1999, EPA used the committee to help it develop two reports that identified the individual research activities on indoor air quality that federal agencies were conducting. EPA has taken the lead in directing committee activities in the past, such as chairing meetings, and this role continues today. The Consumer Product Safety Commission, EPA, FEMA, HHS, and HUD guidance documents we reviewed identify health effects associated with indoor mold in a residential setting but sometimes omit less common but serious health effects. Most of the guidance documents recommend similar strategies for minimizing mold growth. While guidance documents that discuss mold mitigation offer consistent advice about detecting mold, some provide conflicting information about cleaning agents and the appropriate level of protective equipment individuals need when mitigating mold in their homes. A majority of the 32 documents we reviewed that provide guidance to the general public on the health effects of indoor mold in their homes—issued by the Consumer Product Safety Commission, EPA, FEMA, HHS, and HUD—identify asthma and upper respiratory tract symptoms as potential health effects. In addition, many of these federal guidance documents cite unspecified allergic symptoms and skin symptoms, such as dermatitis, rashes, and hives. The six adverse health effects the Institute of Medicine found to be associated with indoor mold in 2004 are included in the 32 guidance documents to varying extents. However, all six adverse health effects are included in only two guidance documents, although a majority of the guidance was issued after the publication of the 2004 Institute of Medicine report. 
Further, only a few of the 32 guidance documents discuss adverse health effects associated with mold that are less common but serious. Such health effects include opportunistic infections or fungal colonization in immune-compromised individuals and hypersensitivity pneumonitis, a relatively rare allergic reaction in susceptible persons characterized by fever, chills, dry cough, and a flulike feeling that can, in its chronic form, result in permanent lung damage. Because these less common but potentially serious adverse health effects are infrequently cited in the guidance documents, some individuals consulting these guidance documents may not take appropriate precautions when they are exposed to indoor mold. Table 1 identifies the potential adverse health effects cited in 6 or more of the 32 guidance documents we reviewed. (App. V provides a list of the guidance documents we reviewed and information on how to access them.) Moreover, most of the federal guidance documents we reviewed describe populations that may be particularly sensitive to indoor mold. However, few of the documents identify all of the populations that should take extra precautions to limit exposure to indoor mold. According to an HHS guidance document, these populations include the immune-compromised as well as those with asthma, chronic lung diseases, and allergies to mold. Immune-compromised individuals include organ transplant recipients, HIV patients, individuals with leukemia or lymphoma, and those undergoing cancer chemotherapy or other immunosuppressant drug therapies. HHS also recommends “due caution” for children, pregnant women, and the elderly who are exposed to indoor mold. Although some of the guidance documents identify several of these populations, some list only one or two. As a result, individuals consulting these guidance documents, especially those who are particularly vulnerable to mold exposure, may not be fully apprised of the risks associated with such exposure. 
We recognize that the guidance documents we reviewed may address health effects and particularly sensitive populations in varying levels of detail because of differences in purpose and intended audience. For example, several EPA guidance documents targeted toward particular populations, such as teens, the elderly, and people with low literacy levels, are limited in their scope and level of detail. In contrast, HHS’s document, Mold Prevention Strategies and Possible Health Effects in the Aftermath of Hurricanes and Major Floods, which is targeted to the general public as well as to public health officials, includes a detailed discussion of numerous potential health effects that may result from exposure to indoor mold. Although not all guidance documents need to provide a comprehensive list of all of the potential health effects of exposure to indoor mold, the information provided should be sufficient to alert the public about potential adverse health effects of exposure to indoor mold, highlight specific populations that are particularly vulnerable to such exposure, and not conflict among documents. Most of the 32 guidance documents issued by the Consumer Product Safety Commission, EPA, FEMA, HHS, and HUD that we reviewed describe how to minimize indoor mold growth in the home. These documents generally advise that residents reduce indoor moisture or humidity levels, and their recommendations for doing so are generally consistent. A majority of these guidance documents recommend that residents keep areas dry and address moisture sources, such as leaks or spills. Some of the guidance documents also recommend managing specific sources of moisture or humidity by, for example, preventing water from entering the house, ventilating and cleaning kitchens and baths to reduce moisture buildup, and repairing and insulating pipes. In addition, a majority of the documents recommend promptly drying wet items. 
Nearly half of the documents that provide more specific recommendations note that porous items, such as carpets, must be dried within 48 hours to avoid the growth of mold and say that if more than 48 hours have elapsed, these items should be discarded. A number of the guidance documents that address strategies to minimize indoor mold growth also advise residents to maintain indoor relative humidity within specific ranges because high relative humidity can lead to water condensation on indoor surfaces, such as walls and windows, which can support mold growth. However, we note that the humidity ranges specified by the guidance documents vary. For example, while all the guidance documents that address relative humidity recommend maintaining it at 60 percent or below, one FEMA document recommends maintaining the relative humidity below 40 percent, and three guidance documents issued by HHS recommend a relative humidity range between 40 percent and 60 percent. Such differences in guidance to the public could cause some confusion about this aspect of minimizing indoor mold growth. A majority of the guidance documents we reviewed provide information to the public about mitigating exposure to indoor mold. Many of the documents agree that if mold can be either seen or smelled, it should be removed. Recommendations on detecting mold are broadly consistent with information in a 2001 EPA report on mold mitigation in schools and commercial buildings, which is cited by a number of the guidance documents as a resource for mitigation of residential mold growth. Further, the eight guidance documents that discuss sampling or testing to measure the quantity or type of mold in the indoor environment advise against it in most circumstances because the results of such testing may not be useful. 
For example, one of these documents explains that no standardized method exists either to measure the magnitude of exposure to mold or to relate a particular level of exposure to adverse health effects. Another guidance document notes that it is generally not necessary to determine the species of mold present. Finally, many of the guidance documents that discuss mitigation note that if the mold is extensive (for example, if it covers more than 25 square feet) or if it is found in the heating or air conditioning systems, residents should consult further guidance, such as EPA’s Mold Remediation in Schools and Commercial Buildings, or hire a professional contractor. While a majority of the guidance documents we reviewed discuss how to remove mold once a problem has been identified, there is some inconsistency about which cleaning agents to use. For example, two guidance documents recommend using detergent to clean mold. On the other hand, HHS’s Mold Prevention Strategies and Possible Health Effects in the Aftermath of Hurricanes and Major Floods advises that bleach may be warranted if the mold growth is due to floodwater, which can be contaminated. Another guidance document, issued by EPA, also advises that bleach be used when individuals who are particularly susceptible to adverse health effects from mold, such as those who are immune-compromised, are exposed to indoor mold. In contrast, six of the guidance documents we reviewed, including several of the HHS documents, recommend the use of bleach irrespective of the populations present or whether the mold growth is due to flooding. According to EPA’s 2001 report on mold mitigation, mold growing on hard (nonporous) surfaces should be scrubbed with water and detergent and then vacuumed. This report recommends using bleach only in limited circumstances—such as when immune-compromised individuals are present—because bleach, a biocide, is toxic to humans. 
These differences among guidance documents could lead to confusion among the general public about the safest and most effective way to remove mold. For example, if bleach is not necessary in most instances, using it unnecessarily could lead to avoidable problems, since bleach itself is a hazardous substance that can generate toxic fumes if it is mixed with ammonia-based cleaners. In addition, many of the guidance documents we reviewed discuss using personal protective equipment while removing mold but, in some cases, recommend different levels of protection for the general public as well as for certain populations that may be more sensitive to mold exposure. For example, as figure 2 shows, the guidance documents provide inconsistent recommendations for the general public about wearing respiratory protection, eye protection, and skin (dermal) protection (such as long-sleeved shirts and long pants) for cleanups of limited mold contamination. In addition, although 26 guidance documents caution that certain populations may be more sensitive to mold, only 2 of them, issued by HHS in 2005 and 2006, provide specific recommendations about the varying levels of personal protection that such populations should use under various circumstances. The HHS documents state that, when inspecting or assessing damage, individuals with certain lung diseases should wear respirators, while healthy individuals need no special protection for these tasks. However, these documents warn that individuals with “immunosuppression,” such as those undergoing cancer treatment or those who have leukemia or lymphoma, should wear a respirator, gloves, and safety goggles when inspecting or assessing damage. Further, those with “profound immunosuppression”—such as those with HIV infection—should avoid all exposure to mold. Guidance documents also provide inconsistent information about the types of respiratory protection to use when cleaning up mold. 
Of the 15 guidance documents that recommend the use of respiratory protection during cleanup, 6 list items such as dust masks, which do not protect against mold because it can pass through them. Nine of the documents suggest “N-95 respirators,” which filter 95 percent of airborne particles and can protect against inhaling mold. Moreover, only 3 of the guidance documents recommending the use of N-95 respirators discuss the need for proper fit—which could affect their effectiveness, according to HHS’s NIOSH, the federal agency that approves these respirators. Furthermore, only 1 guidance document, issued by HHS, warns that respirator use may not be appropriate if an individual has a pre-existing medical condition that makes it difficult to breathe while wearing a respirator. A number of agency officials said they revisit the content of their guidance documents following significant new scientific discoveries or in response to events such as major flooding or hurricanes. We note that in the past few years, important updated information on the health effects of exposure to indoor mold and ways to protect against unnecessary exposure has been provided in three documents: the Institute of Medicine’s 2004 report and two HHS guidance documents on mold issued in 2005 and 2006 in the aftermath of the hurricanes and major floods on the Gulf Coast. Nevertheless, some of the guidance documents we reviewed do not yet reflect important updated information that these publications provide. Overall, despite the useful information provided in the federal guidance we reviewed, some omissions and inconsistencies could cause some individuals to be exposed to indoor mold unnecessarily. 
While the current research activities on indoor mold conducted or sponsored by EPA, HHS, and HUD address identified health-related research gaps to varying degrees, these activities are largely uncoordinated within and across agencies, and many are generated by independent researchers rather than by agency solicitations for specific research. This limited coordination contributes to the lack of standardized, quantitative methods for measuring exposure to mold that has impeded the advancement of knowledge about health effects and may result in unnecessary duplication of research efforts. Without more systematic coordination of planned and ongoing research activities, future research may not be prioritized to best fill data gaps or be of sufficient quality and consistency to more definitively support conclusions about associations between exposure to indoor mold and adverse health effects. Specifically, the Institute of Medicine was unable to associate a number of adverse health effects with exposure to mold because the available studies were of “insufficient quality, consistency, or statistical power to permit a conclusion regarding the presence of an association.” An existing interagency committee—the Federal Interagency Committee on Indoor Air Quality—could provide an effective vehicle for enhancing the coordination of research activities. As the executive secretary and co-chair, EPA guides the activities of this committee, which was established in response to congressional direction to, among other things, coordinate federal indoor air quality research and foster information sharing among, for example, federal agencies and the public. While the committee provides a forum for informal information sharing, it has not been used in recent years to support systematic coordination of federal research priorities or agendas for indoor air research. 
Since the Federal Interagency Committee on Indoor Air Quality was established in the 1980s, significant advances in communications technologies, such as the Internet, have transformed the exchange of information—for example, through Web pages and hyperlinks to documents and Web sites. These communications advances can facilitate the coordination among federal agencies, state and local governments, the private sector, the research community, and the general public that the Federal Interagency Committee on Indoor Air Quality was established to accomplish. Overall, the federal guidance documents we reviewed that provide information to the general public about the health effects of exposure to indoor mold, ways to minimize mold growth, and safe and effective methods for cleaning up provide generally useful information. However, some documents do not sufficiently advise the general public about some potentially serious health effects, and others provide inconsistent information about cleaning agents and appropriate protective gear. Regarding protective gear, some documents do not provide information about how populations that are particularly vulnerable to adverse health effects should protect themselves. In fact, populations with certain immunosuppression conditions should avoid exposure to mold, but many guidance documents do not state this. As a result, the public may not be sufficiently aware of the health risks they or their family members may face, and they may also be confused about how to approach cleaning up mold in their homes. We recommend that the Administrator, EPA, use the Federal Interagency Committee on Indoor Air Quality to accomplish the following two actions. 
Help articulate and guide research priorities on indoor mold across relevant federal agencies, coordinate information sharing on ongoing and planned research activities among agencies, and provide information to the public on ongoing research activities to better ensure that federal research on the health effects of exposure to indoor mold is effectively addressing research needs and efficiently using scarce federal resources. Help relevant agencies review their existing guidance to the public on indoor mold—considering the audience and purpose of the guidance documents—to better ensure that it sufficiently alerts the public, especially vulnerable populations, about the potential adverse health effects of exposure to indoor mold and educates them on how to minimize exposure in homes. The reviews should take into account the best available information and ensure that the guidance does not conflict among agencies. We provided the Consumer Product Safety Commission, EPA, FEMA, HHS, and HUD with a draft of this report and the related supplement (GAO-08-984SP) for the agencies’ review and comment. In its response, EPA generally agreed with our recommendations that it use the Federal Interagency Committee on Indoor Air Quality to, among other things, help articulate and guide research priorities on indoor mold across relevant federal agencies and help relevant agencies review their existing guidance to the public on indoor mold to better ensure that it sufficiently alerts the public about the potential adverse health effects of exposure to indoor mold and educates the public on how to minimize exposure in homes. In commenting on the draft report, HUD and the Consumer Product Safety Commission also generally supported our recommendations to EPA. FEMA did not provide comments on the report, and HHS’s comments did not address our recommendations to EPA. 
The Consumer Product Safety Commission, EPA, HHS, and HUD also provided technical comments on our report, and HHS provided a technical comment on the supplement; their comments were incorporated, as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Acting Chairman, Consumer Product Safety Commission; Administrator, EPA; Administrator, FEMA; Secretary, HHS; Secretary, HUD; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. The objective of this review was to assess federal agencies’ activities to minimize and mitigate the health effects of exposure to indoor mold. Specifically, we examined (1) what recent reviews of the scientific literature have concluded about the health effects of exposure to indoor mold; (2) the extent to which federal research addresses data gaps related to the health effects of exposure to indoor mold; and (3) what guidance key federal agencies are providing to the public on the health risks of exposure to mold, and on minimizing and mitigating that exposure, and the extent to which the guidance is consistent. Our review focuses on the health effects and guidance to the general public related to indoor mold in homes and does not address occupational exposures or technical guidance documents targeted to specialized audiences, such as medical professionals and emergency response workers. 
To determine what recent reviews of the scientific literature have concluded about the health effects of exposure to indoor mold, we primarily relied on the findings in the National Academies’ Institute of Medicine comprehensive report issued in 2004, Damp Indoor Spaces and Health. To identify more recent reviews of the health effects of exposure to indoor mold, we conducted a literature search. We searched for reviews and meta-analyses, rather than individual studies, published in English in 2005, 2006, and 2007, primarily using PubMed, a bibliographic database service of the U.S. National Library of Medicine. We conducted 19 different searches of PubMed using combinations of the following search terms: mold, exposure, health, indoor, glucan, microbial volatile organic compounds, mycotoxins, ergosterol, hemolysins, fungal extracellular polysaccharides, fungal/hyphal fragments, allergens, stachybotrys, acute idiopathic pulmonary hemorrhage, acute pulmonary hemorrhage and infants, and hemosiderosis. As part of these searches, we used PubMed’s Clinical Queries option to find Systematic Reviews, which cover a broad set of articles that build consensus on biomedical topics. We also conducted a search for reviews and meta-analyses using the search strategy “mold AND (exposure OR indoor OR health)” in 15 other databases providing comprehensive worldwide coverage of scientific and technical journals on relevant topics. We reviewed the abstracts of all search results and obtained copies of the publications for which no abstracts were available, unless the available information indicated that the publication was unrelated to our review. 
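The Boolean search strategy described above can be sketched programmatically. The following Python fragment is an illustrative sketch only: the qualifier terms and the quoted query pattern come from the text, but the pairing of core terms with qualifiers and the function name are our own assumptions, not GAO's actual search procedure.

```python
# Illustrative sketch of assembling Boolean search strings like
# "mold AND (exposure OR indoor OR health)". The specific pairing of
# core terms with qualifiers below is hypothetical, not GAO's.

def build_query(core_term, qualifiers):
    """Combine a core term with a parenthesized OR of qualifiers."""
    return f"{core_term} AND ({' OR '.join(qualifiers)})"

# A few terms drawn from the report's list of search terms.
core_terms = ["mold", "mycotoxins", "stachybotrys", "glucan"]
qualifiers = ["exposure", "indoor", "health"]

queries = [build_query(term, qualifiers) for term in core_terms]
for query in queries:
    print(query)  # first line: mold AND (exposure OR indoor OR health)
```

Strings of this form can be pasted directly into the advanced-search box of PubMed or similar bibliographic databases, which is why the report quotes the strategy as a single string.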
We evaluated the relevance of the abstracts and publications and identified those that addressed the health effects of exposure to indoor mold and its constituents or products, excluding those that addressed dietary exposures, exposures in industrial or agricultural settings, publications focused on yeasts, case studies of mold in particular locations, and any publications that were clearly not meta-analyses or reviews of the scientific literature. Twenty of the reviews met our criteria (see app. II for a list of these reviews). To assess the credibility, reliability, and methodological soundness of these publications, a senior GAO analyst with a doctorate in epidemiology reviewed each of the publications and any additional methodological information obtained from the authors and considered such factors as the bibliographies of evidence cited, the journals in which the articles were published, and the extent to which the authors are primary authors of other relevant articles. We did not examine the references cited by these studies as part of our analysis. Some of the reviews may be based on primary sources (for example, epidemiologic studies), while others may also be based on sources that are themselves reviews of the scientific literature (for example, the 2004 Institute of Medicine report). We concluded that all 20 reviews were sufficiently reliable for the purposes of this report. We also used the 2004 Institute of Medicine report to help identify areas where additional research is needed to address scientific data gaps primarily related to the health effects of exposure to indoor mold other than asthma, as well as the institute’s 2000 report, Clearing the Air: Asthma and Indoor Air Exposures, which focused on gaps related to asthma. We conducted in-depth reviews of these reports, including their methodology and conclusions, and we summarized the research needs they identified related to the health effects of exposure to indoor mold. 
To obtain information on federal research related to the health effects of exposure to indoor mold, we conducted two surveys of officials at the Environmental Protection Agency (EPA), the Department of Health and Human Services (HHS), and the Department of Housing and Urban Development (HUD) from November 2007 to May 2008. We used one survey to (1) identify research activities related to the health effects of indoor mold ongoing as of October 1, 2007, and (2) determine the extent to which these research activities address the 15 data gaps identified in the 2000 and 2004 Institute of Medicine reports related to the health effects of exposure to indoor mold. Respondents completed a survey for each individual research activity ongoing as of October 1, 2007. We also used this survey to identify the extent to which these activities were coordinated both within and across agencies. We conducted a second survey of these agencies to collect basic information on their mold-related research activities completed from January 1, 2005, to September 30, 2007. Overall, we received information on 107 research activities from 37 EPA, HHS, and HUD officials. We received responses to our surveys from all relevant officials and agency entities. Summaries of the research activities conducted or sponsored by EPA, HHS, and HUD are provided in a supplement to this report (see GAO-08-984SP). We surveyed officials at EPA, HHS, and HUD because of these agencies’ past and current participation in mold research. Specifically, we identified these agencies based on federal reports to Congress summarizing efforts to improve indoor air quality and interviews with federal officials involved in this research, among other things. We took a number of steps to ensure that our surveys would obtain reliable information from the appropriate agencies and officials regarding federal research activities on the health effects of exposure to indoor mold. 
For example, to ensure that we sent surveys to all agency officials involved in indoor mold-related research activities, we provided audit liaisons and agency respondents with a list of the units and officials in their agencies that we had identified as being relevant. We also asked audit liaisons to verify that we had not omitted any relevant units within their agencies and confirm whether other agency officials identified during our interviews as potentially involved in indoor mold-related research activities were involved with relevant activities. When an audit liaison identified a new agency respondent involved in indoor mold-related research activities, the individual was provided with copies of our surveys. (See app. IV for information on the units we contacted at these agencies.) We pretested our survey questions by sending them to two researchers from EPA and the National Institutes of Health (NIH) and incorporating their feedback into the final surveys. To increase the response rate, we followed up with agency officials to obtain responses from all relevant parties. We also performed a series of reliability tests on the data we received, including (1) examining agency submissions to exclude any that were either duplicates or did not meet our criteria and (2) checking for missing data or discrepancies. When we identified discrepancies or inconsistencies in the data, we followed up with relevant agency officials. In addition, we interviewed EPA, HHS, and HUD officials to determine the extent to which they coordinate their research projects and their priorities for mold-related research. 
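The reliability tests described above, excluding duplicate submissions and checking for missing data before follow-up with agency officials, can be illustrated with a minimal sketch. This is not GAO's actual tooling; the record fields, activity identifiers, and values below are invented for illustration.

```python
# Illustrative sketch (not GAO's actual process or data) of two survey
# reliability checks: dropping duplicate submissions and flagging
# records with missing required fields for agency follow-up.

records = [
    {"id": "EPA-001", "agency": "EPA", "title": "Mold allergen assay", "start": "2006"},
    {"id": "EPA-001", "agency": "EPA", "title": "Mold allergen assay", "start": "2006"},  # duplicate
    {"id": "NIH-014", "agency": "HHS", "title": "Asthma cohort study", "start": None},    # missing field
]

def deduplicate(rows):
    """Keep only the first submission received for each activity id."""
    seen, unique = set(), []
    for row in rows:
        if row["id"] not in seen:
            seen.add(row["id"])
            unique.append(row)
    return unique

def missing_fields(rows, required=("id", "agency", "title", "start")):
    """Return the ids of records missing any required field."""
    return [r["id"] for r in rows if any(r.get(f) in (None, "") for f in required)]

unique = deduplicate(records)
print(len(unique))             # 2 activities remain after removing the duplicate
print(missing_fields(unique))  # ['NIH-014'] flagged for follow-up
```

In practice such checks would run against the full survey dataset, with discrepancies routed back to the relevant agency respondents as the text describes.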
To assess the extent to which the Federal Interagency Committee on Indoor Air Quality has been used to coordinate federal research activities related to the health effects of exposure to indoor mold, we reviewed relevant reports and the minutes of committee meetings dating from February 2005 to February 2008, and we interviewed EPA and other officials involved with the committee. To determine what guidance key federal agencies are providing to the general public on the health risks of exposure to indoor mold, and on minimizing mold growth and mitigating exposure to mold in their homes, and the extent to which the guidance is consistent, we focused our review on the five federal agencies that provide information to the general public on health risks and minimizing and mitigating exposure to contaminants, including mold. The guidance we reviewed includes fact sheets, brochures, booklets, and Web pages. Specifically, we reviewed guidance on the health effects of mold in a residential setting issued by the Consumer Product Safety Commission, EPA, HUD, HHS, and the Federal Emergency Management Agency (FEMA) that was identified primarily through online searches of federal Web sites and interviews with relevant program officials. We selected guidance to the general public that addresses health effects associated with indoor mold using a nonprobability sample. We did not include technical documents targeted to specialized audiences, such as medical professionals or emergency response workers. Of the 78 guidance documents that met our initial criteria, we selected 32 for detailed review on the basis of their content, purpose, and the extent to which they specifically addressed indoor mold. (In some cases, the documents broadly address indoor air contaminants but only briefly mention mold.) 
Specifically, of the 34 mold-related guidance documents FEMA issued to the general public responding to specific disasters since 2004, we selected 8 for our review; we excluded the other 26 because they contain essentially similar information. Further, we included in our review the 8 guidance documents issued by the Consumer Product Safety Commission and HUD that address health effects associated with indoor mold; however, we excluded some guidance documents issued by EPA and HHS primarily because they were similar to, and thus duplicative of, other documents already included in our review. We provided agency officials with an opportunity to review our list of guidance documents and suggest additional documents for inclusion in our review. We added relevant documents, as suggested. (See app. V for the guidance documents included in our review.) Additionally, we interviewed officials from the five agencies issuing the guidance to determine their procedures for developing and issuing guidance documents. The guidance documents we analyzed are publicly available and can be accessed through the agencies’ Web sites. We conducted this performance audit from January 2007 to September 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The following list of recent reviews of the health effects of mold includes two Institute of Medicine reports and 20 other reviews. Borchers A.T., Chang C., Keen C.L., and M.E. Gershwin. “Airborne Environmental Injuries and Human Health.” Clinical Reviews in Allergy and Immunology, vol. 31, no. 1 (2006): 1-102. Bush R.K., Portnoy J.M., Saxon A., Terr A.I., and R.A. Wood. 
“The medical effects of mold exposure.” The Journal of Allergy and Clinical Immunology, vol. 117, no. 2 (2006): 326-33. Douwes J. “(1→3)-Beta-D-glucans and respiratory health: a review of the scientific evidence.” Indoor Air, vol. 15, no. 3 (2005): 160-9. Etzel R.A. “Indoor and outdoor air pollution: Tobacco smoke, moulds and diseases in infants and children.” International Journal of Hygiene and Environmental Health, vol. 210, no. 5 (2007): 611-6. Fisk, W.J., Lei-Gomez Q., and M.J. Mendell. “Meta-analyses of the associations of respiratory health effects with dampness and mold in homes.” Indoor Air, vol. 17, no. 4 (2007): 284-96. Gray, M. “Molds and Mycotoxins: Beyond Allergies and Asthma.” Alternative Therapies in Health and Medicine, vol. 13, no. 2 (2007): S146-52. Green B.J., Tovey E.R., Sercombe J.K., Blachere F.M., Beezhold D.H., and D. Schmechel. “Airborne fungal fragments and allergenicity.” Medical Mycology, vol. 44, no. S1 (2006): S245-55. Habiba A. “Acute idiopathic pulmonary haemorrhage in infancy: Case report and review of the literature.” Journal of Paediatrics and Child Health, vol. 41, no. 9-10 (2005): 532-3. Hope, A.P., and R.A. Simon. “Excess dampness and mold growth in homes: An evidence-based review of the aeroirritant effect and its potential causes.” Allergy and Asthma Proceedings, vol. 28, no. 3 (2007): 262-70. Institute of Medicine, Clearing the Air: Asthma and Indoor Air Exposures. Washington, D.C.: National Academy Press, 2000. Institute of Medicine, Damp Indoor Spaces and Health. Washington, D.C.: The National Academies Press, 2004. Jarvis B.B., and J.D. Miller. “Mycotoxins as harmful indoor air contaminants.” Applied Microbiology and Biotechnology, vol. 66, no. 4 (2005): 367-72. Khalili B., Montanaro M.T., and E.J. Bardana Jr. “Inhalational mold toxicity: fact or fiction? a clinical review of 50 cases.” Annals of Allergy, Asthma & Immunology, vol. 95, no. 3 (2005): 239-46. Lai K.-M.
“Hazard Identification, Dose-Response and Environmental Characteristics of Stachybotryotoxins and Other Health-Related Products from Stachybotrys.” Environmental Technology, vol. 27, no. 3 (2006): 329-35. Laumbach R.J., and H.M. Kipen. “Bioaerosols and sick building syndrome: particles, inflammation, and allergy.” Current Opinion in Allergy and Clinical Immunology, vol. 5, no. 2 (2005): 135-9. Mazur L.J., and J. Kim; Committee on Environmental Health, American Academy of Pediatrics. “Spectrum of Noninfectious Health Effects From Molds.” Pediatrics, vol. 118, no. 6 (2006): e1909-26. Nuesslein T.G., Teig N., and C.H. Rieger. “Pulmonary haemosiderosis in infants and children.” Paediatric Respiratory Reviews, vol. 7, no. 1 (2006): 45-8. Phipatanakul W. “Environmental Factors and Childhood Asthma.” Pediatric Annals, vol. 35, no. 9 (2006): 646-56. Seltzer J.M., and M.J. Fedoruk. “Health Effects of Mold in Children.” Pediatric Clinics of North America, vol. 54, no. 2 (2007): 309-33, viii-ix. Susarla S.C., and L.L. Fan. “Diffuse alveolar hemorrhage syndromes in children.” Current Opinion in Pediatrics, vol. 19, no. 3 (2007): 314-20. Portnoy J.M., Kwak K., Dowling P., VanOsdol T., and C. Barnes. “Health effects of indoor fungi.” Annals of Allergy, Asthma & Immunology, vol. 94, no. 3 (2005): 313-20. Richardson G., Eick S., and R. Jones. “How is the indoor environment related to asthma?: literature review.” Journal of Advanced Nursing, vol. 52, no. 3 (2005): 328-39.
The 2000 and 2004 Institute of Medicine reports identified the following data gaps:
Identify environmental factors that either lead to the development of asthma or precipitate symptoms in subjects who already have asthma, using good measures of fungal exposure.
Improve sampling and exposure assessment methods for mold and its components (such as research that will help lead to standardization of protocols for sample collection, transport, and analysis; or develop or improve methods of personal airborne exposure measurement, DNA-based technologies, or assays for bioaerosols, etc.).
Determine the association of dampness problems with asthma development and symptoms by researching the causative agents (e.g., mold, dust mite allergens) and documenting the relationship between dampness and allergen exposure.
Identify fungal allergens or patterns of cross-reactivity among fungal allergens.
Collect and analyze data on the interactions among multiple indoor agents (such as mold, pesticides, and volatile organic compounds) and environmental factors (such as humidity, temperature, and ventilation).
Develop information on the possible adverse health effects of dampness-related emissions of mold spores from building materials and furnishings.
Determine how to measure the effectiveness and health effects of mold remediation efforts.
Better characterize the possible influence of the duration of moisture damage on health.
Develop standardized metrics and protocols to assess the nature, severity, and extent of dampness and the effectiveness of specific measures for dampness reduction.
Advance the understanding of specific bioaerosols in relation to asthma by studying the epidemiology of building-related asthma in problem buildings where there are excess chest complaints among occupants, in comparison to buildings where there are no complaints; or provide exposure-response studies of many building environments and populations.
Assess the effects of housing interventions (such as prevention or remediation of moisture problems, etc.) on dampness and adverse health effects, including the extent to which interventions are associated with a decrease in the occurrence of adverse health effects, and identify effective and efficient intervention strategies.
Better characterize the effectiveness of various means of protection used during mold remediation activities.
Determine the effects of human exposure to Stachybotrys chartarum in indoor environments.
Determine, for mycotoxins, the doses required to cause adverse health effects in humans via inhalation and dermal exposure; techniques for detecting and quantifying mycotoxins in tissues; or the effects of long-term (chronic) exposure to mycotoxins via inhalation.
Research the relationship between mold and dampness and acute pulmonary hemorrhage or hemosiderosis in infants.
In fact, many of the activities are reported to address three or more gaps. Summaries of the 65 research activities conducted or sponsored by EPA, HHS, and HUD are provided in a supplement to this report (GAO-08-984SP). Agency officials reported that eight federal mold research activities currently being conducted do not directly address any of the data gaps identified by the 2000 and 2004 Institute of Medicine reports. Some of these studies were directed at medical treatments, and others focused on other potential causes of asthma. For example, one study is evaluating whether chronic rhinosinusitis is induced by an abnormal immune response to mold and, therefore, whether an anti-fungal agent will be an effective treatment for the disease. Another study is developing and validating DNA-based methods for identifying and fingerprinting medically important fungi. Several of these research activities focused on asthma. For example, two studies, one of children in El Paso and another of children in Detroit, are primarily focused on the role of residential proximity to roadways in the development of childhood asthma but also collected data on indoor exposures, including home dampness and the presence of visible molds.
Another study being conducted is designed to test the hypothesis that asthma control in low-income, urban adolescents and young adults can be improved with the addition of exhaled nitric oxide as a marker for treatment guidance to conventional asthma management guidelines; a secondary purpose of this study is to examine the role of allergy to molds in influencing the effectiveness of the asthma management plan.
Asthma data gaps identified by the 2000 and 2004 Institute of Medicine reports.
Measurement methods data gaps identified by the 2000 and 2004 Institute of Medicine reports.
We obtained information on federal research related to the health effects of exposure to indoor mold from three key agencies—EPA, HHS, and HUD. We obtained and analyzed information and interviewed program managers and other officials responsible for research at these agencies. Following are the offices, centers, and other program units we surveyed regarding their mold-related research.
Consumer Product Safety Commission and the American Lung Association, Biological Pollutants in Your Home (Bethesda, Md., 1990). http://www.cpsc.gov/cpscpub/pubs/425.html (accessed May 8, 2008). Consumer Product Safety Commission and the Environmental Protection Agency, The Inside Story: A Guide to Indoor Air Quality (Washington, D.C., 1995). http://www.epa.gov/iaq/pubs/insidest.html (accessed May 8, 2008). Environmental Protection Agency, Addressing Indoor Environmental Concerns During Remodeling (Washington, D.C., 2007). http://www.epa.gov/iaq/homes/hip-concerns.html (accessed May 9, 2008). Environmental Protection Agency, Age Healthier Breathe Easier (Washington, D.C., 2004). http://www.epa.gov/aging/resources/factsheets/ahbe_english_2004_0330.pdf (accessed May 9, 2008). Environmental Protection Agency, A Brief Guide to Mold, Moisture, and Your Home (Washington, D.C., 2002). http://www.epa.gov/mold/moldguide.html (accessed May 9, 2008).
Environmental Protection Agency, Cleaning Up After a Flood: Addressing Mold Problems (Washington, D.C., 2005). http://www.epa.gov/katrina/outreach/mold.pdf (accessed May 9, 2008). Environmental Protection Agency, Controlling Moisture (Washington, D.C., 2007). http://www.epa.gov/iaq/homes/hip-moisture.html (accessed May 9, 2008). Environmental Protection Agency, Live, Learn, Play—Tune in to Your Health and Environment (Washington, D.C., 2004). http://yosemite.epa.gov/ochp/ochpweb.nsf/content/dirt.htm (accessed May 9, 2008). Environmental Protection Agency, Flood Cleanup—Avoiding Indoor Air Quality Problems (Fact Sheet) (Washington, D.C., 2003). http://www.epa.gov/mold/pdfs/floods.pdf (accessed May 9, 2008). Environmental Protection Agency, Flood Cleanup and the Air in your Home (Washington, D.C., 2006). http://www.epa.gov/mold/flood/index.html (accessed May 9, 2008). Environmental Protection Agency, What are ten things I need to know about mold? (Washington, D.C., 2008). http://iaq.custhelp.com/cgi-bin/iaq.cfg/php/enduser/std_alp.php (accessed May 9, 2008). Environmental Protection Agency, What You Can Do to Protect Children from Environmental Risks (Washington, D.C., 2002). http://yosemite.epa.gov/ochp/ochpweb.nsf/content/tips.htm (accessed May 9, 2008). Environmental Protection Agency; Department of Agriculture, Cooperative State Research, Education, and Extension Service; Department of Housing and Urban Development; Montana State University Extension Service; and Alabama Cooperative Extension System at Auburn University, Healthy Indoor Air for America’s Homes (Bozeman, Mont., 2007). http://www.montana.edu/wwwcxair/ (accessed May 9, 2008). Department of Health and Human Services, Centers for Disease Control and Prevention, Molds in the Environment (Atlanta, 2005). http://www.cdc.gov/mold/faqs.htm (accessed May 9, 2008). Department of Health and Human Services, Centers for Disease Control and Prevention, Facts About Mold And Dampness (Atlanta, 2005).
http://www.cdc.gov/mold/dampness_facts.htm (accessed May 9, 2008). Department of Health and Human Services, Centers for Disease Control and Prevention, Mold Questions and Answers: Questions and Answers on Stachybotrys chartarum and other molds (Atlanta, 2004). http://www.cdc.gov/mold/stachy.htm (accessed May 9, 2008). Department of Health and Human Services, Centers for Disease Control and Prevention, Protect Yourself from Mold (Atlanta, 2006). http://www.bt.cdc.gov/disasters/mold/protect.asp (accessed May 9, 2008). Department of Health and Human Services, Centers for Disease Control and Prevention, Mold Prevention Strategies and Possible Health Effects in the Aftermath of Hurricanes and Major Floods (Atlanta, 2006). http://www.cdc.gov/mmwr/preview/mmwrhtml/rr5508a1.htm (accessed May 9, 2008). Department of Health and Human Services, Centers for Disease Control and Prevention, Population-Specific Recommendations for Protection From Exposure to Mold in Buildings Flooded After Hurricanes Katrina and Rita, by Specific Activity and Risk Factor (Atlanta, 2005). http://www.bt.cdc.gov/disasters/mold/report/pdf/2005_moldtable5.pdf (accessed May 9, 2008). Department of Homeland Security, Federal Emergency Management Agency, Dealing with Mold and Mildew in Your Flood Damaged Home (Washington, D.C., 2005). http://www.fema.gov/pdf/rebuild/recover/fema_mold_brochure_english.pdf (accessed May 9, 2008). Department of Homeland Security, Federal Emergency Management Agency, Got Mold? Clean, Disinfect and Dry (Wichita, Kans., 2007). http://www.fema.gov/news/newsrelease.fema?id=37791 (accessed May 9, 2008). Department of Homeland Security, Federal Emergency Management Agency, Mold Can Be A Danger When Evacuees Return Home (Baton Rouge, La., 2005). http://www.fema.gov/news/newsrelease.fema?id=19302 (accessed May 9, 2008). Department of Homeland Security, Federal Emergency Management Agency, Mold—A Growing Threat (Andover, Mass., 2006).
http://www.fema.gov/news/newsrelease.fema?id=26898 (accessed May 9, 2008). Department of Homeland Security, Federal Emergency Management Agency, Mold: A Health Hazard (Montgomery, Ala., 2005). http://www.fema.gov/news/newsrelease.fema?id=20379 (accessed May 9, 2008). Department of Homeland Security, Federal Emergency Management Agency, Mold: Potential Threat to Health and Homes (Austin, Tex., 2005). http://www.fema.gov/news/newsrelease.fema?id=19767 (accessed May 9, 2008). Department of Homeland Security, Federal Emergency Management Agency, Prompt Cleanup Of Mold And Mildew Is Essential (Newington, N.H., 2006). http://www.fema.gov/news/newsrelease.fema?id=27186 (accessed July 1, 2008). Department of Homeland Security, Federal Emergency Management Agency, Water-Damaged Homes May Harbor Mold Problem (Washington, D.C., 2007). http://www.fema.gov/news/newsrelease.fema?id=36536 (accessed May 9, 2008). Department of Housing and Urban Development, About Mold and Moisture (Washington, D.C., 2007). http://www.hud.gov/offices/lead/healthyhomes/mold.cfm (accessed May 9, 2008). Department of Housing and Urban Development, Healthy Homes Program (Washington, D.C., 2003). http://www.hud.gov/offices/lead/library/hhi/HH_Brochure_Revised.pdf (accessed May 9, 2008). Department of Housing and Urban Development, Mold and Moisture Prevention: A Guide for Residents in Indian Country (Washington, D.C., 2004). http://www.hud.gov/offices/pih/ih/codetalk/docs/moldprevention.pdf (accessed May 9, 2008). Department of Housing and Urban Development, Mold (Washington, D.C., 2005). http://www.hud.gov/offices/lead/library/hhi/Mold.pdf (accessed May 9, 2008). Department of Housing and Urban Development; Department of Agriculture, Cooperative State Research, Education, and Extension Service; and University of Wisconsin Healthy Homes Partnership, Help Yourself to a Healthy Home (Washington, D.C., 2002). http://www.hud.gov/offices/lead/library/hhi/HYHH_Booklet.pdf (accessed May 9, 2008). 
In addition to the contact named above, Christine Fishkin, Assistant Director; Krista Breen Anderson; Nancy Crothers; Benjamin Howe; Richard P. Johnson; Nico Sloss; and Ruth Solomon made key contributions to this report. Linda Choy; Michael Derr; Alice Feldesman; Terrance Horner; Armetha Liles; Luann Moy; and Anne Rhodes-Kline also made important contributions.
Recent research suggests that indoor mold poses a widespread and, for some people, serious health threat. Federal agencies engage in a number of activities to address this issue, including conducting or sponsoring research. For example, in 2004 the National Academies' Institute of Medicine issued a report requested by the Department of Health and Human Services (HHS) summarizing the scientific literature on mold, dampness, and human health. In addition, the Federal Interagency Committee on Indoor Air Quality supports the Environmental Protection Agency's (EPA) indoor air research program. With respect to the health effects of exposure to indoor mold, GAO was asked to report on (1) the conclusions of recent reviews of the scientific literature, (2) the extent to which federal research addresses data gaps, and (3) the guidance agencies are providing to the general public. GAO reviewed scientific literature on indoor mold's health effects, surveyed three agencies that conduct or sponsor indoor mold research, and analyzed guidance issued by five agencies. In general, the Institute of Medicine's 2004 report, and reviews of the scientific literature published from 2005 to 2007 that GAO examined, concluded that certain adverse health effects are more clearly associated with exposure to indoor mold than others. For example, the Institute of Medicine concluded that some respiratory effects, such as exacerbation of pre-existing asthma, are associated with exposure to indoor mold but that the available evidence was not sufficient to determine whether mold and a variety of other health effects, such as the development of asthma, cancer, and acute pulmonary hemorrhage in infants, are associated. While the reviews GAO examined generally agreed with these conclusions, a few judged the evidence for some health effects as somewhat stronger. 
For example, the American Academy of Pediatrics concluded in 2006 that a plausible link exists between acute pulmonary hemorrhage in infants and exposure to toxins that some molds produce. In addition, the 2004 Institute of Medicine report identified the need for additional research to address a number of data gaps related to the health effects of indoor mold. The 65 ongoing federal research activities on the health effects of exposure to indoor mold conducted or sponsored by EPA, HHS, and the Department of Housing and Urban Development (HUD) address, to varying extents, 15 gaps in scientific data reported by the Institute of Medicine. For example, many of the research activities address data gaps related to asthma and measurement methods, while other data gaps, such as those related to toxins produced by some molds, are being minimally addressed. Further, less than half of the ongoing mold-related research activities are coordinated either within or across agencies. Such coordination is important in light of, among other things, the wide range of data gaps identified by the Institute of Medicine and limited federal resources. The Federal Interagency Committee on Indoor Air Quality could provide a structured mechanism for coordinating research activities on mold and other indoor air issues by, for example, serving as a forum for reviewing and prioritizing agencies' ongoing and planned research. However, it currently does not do so. Despite limitations of scientific evidence regarding a number of potential health effects of exposure to indoor mold, enough is known that federal agencies have issued guidance to the general public about health risks associated with exposure to indoor mold and how to minimize mold growth and mitigate exposure.
For example, guidance issued by the Consumer Product Safety Commission, EPA, the Federal Emergency Management Agency, HHS, and HUD cites a variety of health effects of exposure to indoor mold but in some cases omits less common but serious effects. Moreover, while guidance on minimizing indoor mold growth is generally consistent, guidance on mitigating exposure to indoor mold is sometimes inconsistent about cleanup agents, protective clothing and equipment, and sensitive populations. As a result, the public may not be sufficiently advised of indoor mold's potential health risks.
Air cargo ranges in size from 1 pound to several tons, and in type from perishables to machinery, and can include items such as electronic equipment, automobile parts, clothing, medical supplies, fresh produce, and human remains. Cargo can be shipped in various forms, including large containers known as unit loading devices (ULD) that allow many packages to be consolidated into one container that can be loaded onto an aircraft, wooden crates, consolidated pallets, or individually wrapped/boxed pieces, known as loose or bulk cargo. Participants in the air cargo shipping process include shippers, such as individuals and manufacturers; freight forwarders; air cargo handling agents, who process and load cargo onto aircraft on behalf of air carriers; and air carriers that load and transport cargo. A shipper may take or send its packages to a freight forwarder who in turn consolidates cargo from many shippers onto a master air waybill—a manifest of the consolidated shipment—and delivers it to air carriers for transport. A shipper may also send freight by directly packaging and delivering it to an air carrier’s ticket counter or sorting center, where the air carrier or a cargo handling agent will sort and load cargo onto the aircraft. According to TSA, the mission of its air cargo security program is to secure the air cargo transportation system while not unduly impeding the flow of commerce. TSA’s responsibilities for securing air cargo include, among other things, establishing security requirements governing domestic and foreign passenger air carriers that transport cargo and domestic freight forwarders. TSA is also responsible for overseeing the implementation of air cargo security requirements by air carriers and freight forwarders through compliance inspections, and, in coordination with DHS’s Directorate for Science and Technology (S&T Directorate), for conducting research and development of air cargo security technologies. 
Of the nearly $4.8 billion appropriated to TSA for aviation security in fiscal year 2009, approximately $123 million is directed for air cargo security activities. TSA was further directed to use $18 million of this amount to expand technology pilots and for auditing participants in the CCSP. Air carriers and freight forwarders are responsible for implementing TSA security requirements. To do this, they utilize TSA-approved security programs that describe the security policies, procedures, and systems they will implement and maintain to comply with TSA security requirements. These requirements include measures related to the acceptance, handling, and screening of cargo; training of employees in security and cargo screening procedures; testing for employee proficiency in cargo screening; and access to cargo areas and aircraft. Air carriers and freight forwarders must also abide by security requirements imposed by TSA through security directives and amendments to security programs. The 9/11 Commission Act defines screening for purposes of the air cargo screening mandate as a physical examination or nonintrusive methods of assessing whether cargo poses a threat to transportation security. The act specifies that screening methods include X-ray systems, explosives detection systems (EDS), explosives trace detection (ETD), explosives detection canine teams certified by TSA, physical search together with manifest verification, and any additional methods approved by the TSA Administrator. For example, TSA also recognizes the use of decompression chambers as an approved screening method. However, solely performing a review of information about the contents of cargo or verifying the identity of the cargo’s shipper does not constitute screening for purposes of satisfying the mandate. TSA has taken several key steps to meet the 9/11 Commission Act air cargo screening mandate as it applies to domestic cargo. 
TSA’s approach involves multiple air cargo industry stakeholders sharing screening responsibilities across the air cargo supply chain. TSA, air carriers, freight forwarders, shippers, and other entities each play an important role in the screening of cargo, although TSA has determined that the ultimate responsibility for ensuring that screening takes place at mandated levels lies with the air carriers. According to TSA officials, this decentralized approach is expected to minimize carrier delays, cargo backlogs, and potential increases in cargo transit time, which would likely result if screening were conducted primarily by air carriers at the airport. Moreover, because much cargo is currently delivered to air carriers in a consolidated form, the requirement to screen individual pieces of cargo will necessitate screening earlier in the air cargo supply chain—before cargo is consolidated. The specific steps that TSA has taken to address the air cargo screening mandate are discussed below. TSA revised air carrier security programs. Effective October 1, 2008, several months prior to the first mandated deadline, TSA established a new requirement for 100 percent screening of nonexempt cargo transported on narrow-body passenger aircraft. Narrow-body flights transport about 26 percent of all cargo on domestic passenger flights. According to TSA officials, air carriers reported that they are currently meeting this requirement. Effective February 1, 2009, TSA also required air carriers to ensure the screening of 50 percent of all nonexempt air cargo transported on all passenger aircraft. Although screening may be conducted by various entities, each air carrier must ensure that the screening requirements are fulfilled. Furthermore, effective February 2009, TSA revised or eliminated most of its screening exemptions for domestic cargo. As a result, most domestic cargo is now subject to TSA screening requirements. TSA created the Certified Cargo Screening Program (CCSP). 
TSA also created a program, known as the CCSP, to allow screening to take place earlier in the shipping process and at various points in the air cargo supply chain. In this program, air cargo industry stakeholders—such as freight forwarders and shippers—voluntarily apply to become Certified Cargo Screening Facilities (CCSF). This program allows cargo to be screened before it is consolidated and transported to the airport, which helps address concerns about the time-intensive process of breaking down consolidated cargo at airports for screening purposes. TSA plans to inspect the CCSFs in order to ensure they are screening cargo as required. TSA initiated the CCSP at 18 major airports that, according to TSA officials, account for 65 percent of domestic cargo on passenger aircraft. TSA expects to expand the CCSP nationwide at a date yet to be determined. CCSFs in the program were required to begin screening cargo as of February 1, 2009. While participation in the CCSP is voluntary, once an entity is certified by TSA to participate it must adhere to TSA screening and security requirements and be subject to annual inspections by TSIs. To become certified and to maintain certification, TSA requires each CCSF to demonstrate compliance with increased security standards to include facility, personnel, procedural, perimeter, and information technology security. As part of the program, and using TSA-approved screening methods, freight forwarders must screen 50 percent of cargo being delivered to wide-body passenger aircraft and 100 percent of cargo being delivered to narrow-body passenger aircraft, while shippers must screen 100 percent of all cargo being delivered to any passenger aircraft. Each CCSF must deliver the screened cargo to air carriers while maintaining a secure chain of custody to prevent tampering with the cargo after it is screened. TSA conducted outreach efforts to air cargo industry stakeholders. 
In January 2008, TSA initiated its outreach phase of the CCSP in three cities and subsequently expanded its outreach to freight forwarders and other air cargo industry stakeholders in the 18 major airports. TSA established a team of nine TSA field staff to conduct outreach, educate potential CCSP applicants on the program requirements, and validate CCSFs. According to TSA officials, in February 2009, the agency also began using its cargo TSIs in the field to conduct outreach. In our preliminary discussions with several freight forwarders and shippers, industry stakeholders reported that TSA staff have been responsive and helpful in answering questions about the program and providing information on CCSP requirements.

TSA established the Air Cargo Screening Technology Pilot and is conducting additional technology pilots. To operationally test ETD and X-ray technology among CCSFs, TSA created the Air Cargo Screening Technology Pilot in January 2008, and selected some of the largest freight forwarders to use the technologies and report on their experiences. TSA’s objectives for the pilot are to determine CCSFs’ ability to screen high volumes of cargo, test chain of custody procedures, and measure the effectiveness of screening technology on various commodity classes. TSA will provide each CCSF participating in the pilot with up to $375,000 for purchasing technology. As of February 26, 2009, 12 freight forwarders in 48 locations are participating in the pilot. The screening they perform as part of the operational testing also counts toward meeting the air cargo screening mandate.

TSA expanded its explosives detection canine program. To assist air carriers in screening consolidated pallets and unit load devices (ULD), TSA is taking steps to expand the use of TSA-certified explosives detection canine teams. TSA has 37 canine teams dedicated to air cargo screening—operating in 20 major airports—and is in the process of adding 48 additional dedicated canine teams. 
TSA is working with the air carriers to identify their peak cargo delivery times, during which canines would be most helpful for screening. In addition, we reported in October 2005 and April 2007 that TSA, working with DHS’s S&T Directorate, was developing and pilot testing a number of technologies to screen and secure air cargo with minimal effect on the flow of commerce. These pilot programs seek to enhance the security of cargo by improving the effectiveness of screening for explosives through increased detection rates and reduced false alarm rates. A description of several of these pilot programs and their status is included in table 1. TSA estimates that it achieved the mandate for screening 50 percent of domestic cargo transported on passenger aircraft by February 2009, based on feedback from air cargo industry stakeholders responsible for conducting screening. However, TSA cannot yet verify that screening is being conducted at the mandated level. The agency is working to establish a system to collect data from screening entities to verify that requisite screening levels for domestic cargo are being met. Effective February 2009, TSA adjusted air carrier reporting requirements and added CCSF reporting requirements to include monthly screening reports on the number of shipments screened at 50 and 100 percent. According to TSA officials, air carriers will provide TSA with the first set of screening data by mid-March 2009. By April 2009, TSA officials expect to have processed and analyzed available screening data, which would allow the agency to determine whether the screening mandate has been met. Thus, while TSA asserts that it has met the mandated February 2009 deadline for 50 percent screening, until the agency analyzes required screening data, TSA cannot verify that the mandated screening levels are being achieved. 
In addition, although TSA believes its current screening approach enables it to meet the statutory screening mandate as it applies to domestic cargo, this approach could result in variable percentages of screened cargo on passenger flights. This variability is most likely for domestic air carriers that have a mixed-size fleet of aircraft because a portion of their 50 percent screening requirement may be accomplished through the more stringent screening requirements for narrow-body aircraft, thus allowing them more flexibility in the amount of cargo to screen on wide-body aircraft. According to TSA, although this variability is possible, it is not a significant concern because of the small amount of cargo transported on narrow-body flights by air carriers with mixed-size fleets. However, the approach could result in variable percentages of screened cargo on passenger flights regardless of the composition of the fleet. As explained earlier, TSA is in the process of developing a data reporting system that may help to assess whether some passenger flights are transporting variable percentages of screened cargo. This issue regarding TSA’s current air cargo security approach will be further explored during our ongoing review. Lastly, TSA officials reported that cargo that has already been transported on one passenger flight may be subsequently transferred to another passenger flight without undergoing additional screening. According to TSA officials, the agency has determined that this is an approved screening method because an actual flight mimics one of TSA’s approved screening methods. For example, cargo exempt from TSA screening requirements that is transported on an inbound flight can be transferred to a domestic aircraft without additional screening, because it is considered to have been screened in accordance with TSA screening requirements. 
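The mixed-fleet flexibility described above can be illustrated with a short arithmetic sketch (our illustration, not a TSA formula): because a carrier's narrow-body cargo is screened at 100 percent and counts toward the carrier's overall 50 percent requirement, the larger the narrow-body share of a carrier's cargo, the smaller the fraction of its wide-body cargo that must be screened.

```python
def widebody_share_needed(narrowbody_fraction, overall_mandate=0.50):
    """Fraction of a carrier's wide-body cargo that must be screened,
    assuming all narrow-body cargo (screened at 100 percent) counts
    toward the carrier's overall mandate. Illustrative only."""
    remaining = max(overall_mandate - narrowbody_fraction, 0.0)
    widebody = 1.0 - narrowbody_fraction
    return remaining / widebody

# If narrow-body flights carried 26 percent of a carrier's cargo (the
# domestic average cited in this testimony), only about 32 percent of
# its wide-body cargo would need screening to meet the 50 percent mandate.
print(round(widebody_share_needed(0.26), 2))
```

This is one way the same fleet-wide 50 percent requirement can translate into very different screened percentages on individual flights.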
According to TSA, this scenario occurs infrequently, but the agency has not been able to provide us with data that allows us to assess how frequently this occurs. TSA reported that it is exploring ways to enhance the security of cargo transferred to another flight, including using canine teams to screen such cargo. This issue regarding TSA’s current air cargo security approach will be further explored during our ongoing review. Although industry participation in the CCSP is vital to TSA’s approach to spread screening responsibilities across the supply chain, it is unclear whether the number and types of facilities needed to meet TSA’s screening estimates will join the CCSP. Although TSA is relying on the voluntary participation of freight forwarders and shippers to meet the screening goals of the CCSP, officials did not have precise estimates of the number of participants that would be required to join the program to achieve 100 percent screening by August 2010. As of February 26, 2009, TSA had certified 172 freight forwarder CCSFs, 14 shipper CCSFs, and 17 independent cargo screening facilities (ICSF). TSA estimates that freight forwarders and shippers will complete the majority of air cargo screening at the August 2010 deadline, with shippers experiencing the largest anticipated increase when this mandate goes into effect. According to estimates reported by TSA in November 2008, as shown in figure 1, the screening conducted by freight forwarders was expected to increase from 14 percent to 25 percent of air cargo transported on passenger aircraft from February 2009 to August 2010, while the screening conducted by shippers was expected to increase from 2 percent to 35 percent. For this reason, increasing shipper participation in the CCSP is necessary to meet the 100 percent screening mandate. 
As highlighted in figure 1, TSA estimated that, as of February 2009, screening of cargo delivered for transport on narrow-body aircraft would account for half of the mandated 50 percent screening level and 25 percent of all cargo transported on passenger aircraft. TSA expected screening conducted on cargo delivered for transport on narrow-body passenger aircraft to remain stable at 25 percent when the mandate to screen 100 percent of cargo transported on passenger aircraft goes into effect. TSA anticipated that its own screening responsibilities would grow by the time the 100 percent mandate goes into effect. Specifically, TSA anticipated that its canine teams and transportation security officers would screen 6 percent of cargo in August 2010, up from 4 percent in February 2009. It is important to note that these estimates—which TSA officials said are subject to change—are dependent on the voluntary participation of freight forwarders, shippers, and other screening entities in the CCSP. If these entities do not volunteer to participate in the CCSP at the levels TSA anticipates, air carriers or TSA may be required to screen more cargo than was projected. Participation in the CCSP may appeal to a number of freight forwarders and shippers, but industry participants we interviewed expressed concern about potential program costs. In preliminary discussions with freight forwarders, shippers, and industry associations, stakeholders told us that they would prefer to join the CCSP and screen their own cargo in order to limit the number of entities that handle and open their cargo. This is particularly true for certain types of delicate cargo, including fresh produce. Screening cargo in the CCSP also allows freight forwarders and shippers to continue to consolidate their shipments before delivering them to air carriers, which results in reduced shipping rates and less potential loss and damage. 
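The estimates above can be cross-checked with simple arithmetic (a sketch based on the figures cited in this testimony; the share attributed here to air carriers and other entities is inferred as the remainder, which the testimony does not state explicitly):

```python
# TSA's November 2008 screening-share estimates, in percentage points
# of all cargo transported on passenger aircraft (figures cited above).
shares = {
    "narrow-body requirement": {"feb_2009": 25, "aug_2010": 25},
    "freight forwarders":      {"feb_2009": 14, "aug_2010": 25},
    "shippers":                {"feb_2009": 2,  "aug_2010": 35},
    "TSA canines/officers":    {"feb_2009": 4,  "aug_2010": 6},
}

for period, mandate in (("feb_2009", 50), ("aug_2010", 100)):
    subtotal = sum(entity[period] for entity in shares.values())
    # Remainder inferred to fall to air carriers and other entities
    # (our assumption, not a figure reported by TSA).
    remainder = mandate - subtotal
    print(f"{period}: listed entities {subtotal} pts, remainder {remainder} pts")
```

The arithmetic underscores the point made above: the projected jump from 2 to 35 percentage points for shippers is by far the largest single change, which is why shipper participation in the CCSP is central to meeting the 100 percent mandate.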
However, TSA and industry officials with whom we spoke agreed that the majority of small freight forwarders—which make up approximately 80 percent of the freight forwarder industry—would likely find prohibitive the costs of joining the CCSP, including acquiring expensive technology, hiring additional personnel, conducting additional training, and making facility improvements. TSA has not yet finalized cost estimates for industry participation in air cargo screening, but is in the process of developing these estimates and is planning to report them later this year. As of February 26, 2009, 12 freight forwarders in 48 locations have joined TSA’s Air Cargo Screening Technology Pilot and are thus eligible to receive reimbursement for the technology they have purchased. However, pilot participants to date have been limited primarily to large freight forwarders. TSA indicated that it targeted high-volume facilities for the pilot in order to have the greatest effect in helping industry achieve screening requirements. In response to stakeholder concerns about potential program costs, TSA is allowing independent cargo screening facilities to join the CCSP and screen cargo on behalf of freight forwarders or shippers. However, it is unclear how many of these facilities will join. Moreover, according to industry stakeholders, this arrangement could result in freight forwarders being required to deliver loose freight to screening facilities for screening. This could reduce the benefit to freight forwarders of consolidating freight before delivering it to air carriers, a central part of the freight forwarder business model. TSA has taken some steps to develop and test technologies for screening and securing air cargo, but has not yet completed assessments of the technologies it plans to allow air carriers and program participants to use in meeting the August 2010 screening mandate. 
To date, TSA has approved specific models of three screening technologies for use by air carriers and CCSFs until August 3, 2010—ETD, EDS, and X-ray. TSA chose these technologies based on its subject matter expertise and the performance of these technologies in the checkpoint and checked baggage environments. According to TSA officials, the agency has conducted preliminary assessments, but has not completed laboratory or operational testing of these technologies in the air cargo environment. After the technology pilot programs and other testing are complete, TSA will determine which technologies will be qualified for screening cargo and whether these technologies will be approved for use after August 3, 2010. However, TSA is proceeding with operational testing and evaluations to determine which of these technologies is effective at the same time that screening entities are using these technologies to meet air cargo screening requirements. For example, according to TSA, ETD technology, which most air carriers and CCSFs plan to use, has not yet begun the qualification process. However, it is currently being used to screen air cargo as part of the Air Cargo Screening Technology Pilot and by air carriers and other CCSFs. Although TSA’s acquisition guidance recommends testing the operational effectiveness and suitability of technologies prior to deploying them, and TSA agrees that simultaneous testing and deployment of technology is not ideal, TSA officials reported that this was necessary to meet the screening deadlines mandated by the 9/11 Commission Act. While we recognize TSA’s time constraints, the agency cannot be assured that the technologies it is currently using to screen cargo are effective in the cargo environment, because they are still being tested and evaluated. We will continue to assess TSA’s technology issues as part of our ongoing review of TSA’s efforts to meet the mandate to screen 100 percent of cargo transported on passenger aircraft. 
Although TSA is in the process of assessing screening technologies, according to TSA officials, there is no single technology capable of efficiently and effectively screening all types of air cargo for the full range of potential terrorist threats. Moreover, according to industry stakeholders, technology to screen cargo that has already been consolidated and loaded onto a pallet or ULD may be critical to meet the 100 percent screening mandate. Although TSA has not approved any technologies that are capable of screening consolidated pallets or ULDs containing various commodities, according to TSA, it is currently beginning to assess such technology. TSA officials reported that they do not expect to qualify such technology prior to the August 2010 deadline. Air cargo industry stakeholders we interviewed also expressed some concerns regarding the cost of purchasing and maintaining screening equipment for CCSP participants. Cost is a particular concern for the CCSP participants that do not participate in the Air Cargo Screening Technology Pilot and will receive no funding for technology or other related costs; this includes the majority of CCSFs. Because the technology qualification process could result in modifications to TSA’s approved technologies, industry stakeholders expressed concerns about purchasing technology that is not guaranteed to be acceptable for use after August 3, 2010. We will continue to assess this issue as part of our ongoing review of TSA’s efforts to meet the mandate to screen 100 percent of cargo transported on passenger aircraft. In addition to the importance of screening technology, TSA officials noted that an area of concern in the transportation of air cargo is the chain of custody between the various entities that handle and screen cargo shipments prior to their loading onto an aircraft. 
Officials stated that the agency has taken steps to analyze the chain of custody under the CCSP, and has issued cargo procedures to all entities involved in the CCSP to ensure that the chain of custody of the cargo is secure. This includes guidance on when and how to secure cargo with tamper-evident technology. TSA officials noted that they plan to test and evaluate such technology and issue recommendations to the industry, but have not set any time frames for doing so. Until TSA completes this testing, however, the agency lacks assurances that existing tamper-evident technology is of sufficient quality to deter tampering and that the air cargo supply chain is effectively secured. We will continue to assess this issue as part of our ongoing review of TSA’s efforts to meet the mandate to screen 100 percent of cargo transported on passenger aircraft. Although the actual number of cargo TSIs increased each fiscal year from 2005 to 2009, TSA still faces challenges overseeing compliance with the CCSP due to the size of its current TSI workforce. To ensure that existing air cargo security requirements are being implemented as required, TSIs perform compliance inspections of regulated entities, such as air carriers and freight forwarders. Under the CCSP, TSIs will also perform compliance inspections of new regulated entities, such as shippers and manufacturers, who voluntarily become CCSFs. These compliance inspections range from an annual review of the implementation of all air cargo security requirements to a more frequent review of at least one security requirement. According to TSA, the number of cargo TSIs grew from 160 in fiscal year 2005 to about 500 in fiscal year 2009. However, cargo TSI numbers remained below levels authorized by TSA in each fiscal year from 2005 through 2009, which, in part, led to the agency not meeting cargo inspection goals in fiscal year 2007. 
As highlighted in our February 2009 report, TSA officials stated that the agency is still actively recruiting to fill vacant positions but could not provide documentation explaining why vacant positions remained unfilled. Additionally, TSA officials have stated that there may not be enough TSIs to conduct compliance inspections of all the potential entities under the CCSP, which TSA officials told us could number in the thousands, once the program is fully implemented by August 2010. TSA officials also indicated plans to request additional cargo TSIs in the future, although the exact number has yet to be determined. According to TSA officials, TSA does not have a human capital or other workforce plan for the TSI program, but the agency has plans to conduct a staffing study in fiscal year 2009 to identify the optimal workforce size to address its current and future program needs. Until TSA completes its staffing study, TSA may not be able to determine whether it has the necessary staffing resources to ensure that entities involved in the CCSP are meeting TSA requirements to screen and secure air cargo. We will continue to assess this issue as part of our ongoing review of TSA’s efforts to meet the mandate to screen 100 percent of cargo transported on passenger aircraft. To meet the 9/11 Commission Act screening mandate as it applies to inbound cargo, TSA revised its requirements for foreign and U.S.-based air carrier security programs and began harmonization of security standards with other nations. The security program revisions generally require carriers to screen 50 percent of nonexempt inbound cargo. TSA officials estimate that this requirement has been met, though the agency is not collecting screening data from air carriers to verify that the mandated screening levels are being achieved. TSA has taken several steps toward harmonization with other nations. 
For example, TSA is working with foreign governments to improve the level of screening of air cargo, including working bilaterally with the European Commission (EC) and Canada, and quadrilaterally with the EC, Canada, and Australia. As part of these efforts, TSA plans to recommend to the United Nations’ International Civil Aviation Organization (ICAO) that the next revision of Annex 17 to the Convention on International Civil Aviation (due for release in 2009) include an approach that would allow screening to take place at various points in the air cargo supply chain. TSA also plans to work with the International Air Transport Association (IATA), which is promoting an approach to screening cargo to its member airlines. Finally, TSA continues to work with U.S. Customs and Border Protection (CBP) to leverage an existing CBP system to identify and target high-risk air cargo. However, TSA does not expect to achieve 100 percent screening of inbound air cargo by the August 2010 screening deadline. This is due, in part, to TSA’s inbound screening exemptions, and to challenges TSA faces in harmonizing its air cargo security standards with those of other nations. TSA requirements continue to allow screening exemptions for certain types of inbound air cargo transported on passenger aircraft. TSA could not provide an estimate of what percentage of inbound cargo is exempt from screening. In April 2007, we reported that TSA’s screening exemptions on inbound cargo could pose a risk to the air cargo supply chain and recommended that TSA assess whether these exemptions pose an unacceptable vulnerability and, if necessary, address these vulnerabilities. TSA agreed with our recommendation, but has not yet reviewed, revised, or eliminated any screening exemptions for cargo transported on inbound passenger flights, and could not provide a time frame for doing so. 
Furthermore, similar to changes for domestic cargo requirements discussed earlier, TSA’s revisions to inbound requirements could result in variable percentages of screened cargo on passenger flights to the United States. We will continue to assess this issue as part of our ongoing review of TSA’s efforts to meet the mandate to screen 100 percent of cargo transported on passenger aircraft. Achieving harmonization with foreign governments may be challenging, because these efforts are voluntary and some foreign countries do not share the United States’ view regarding air cargo security threats and risks. Although TSA acknowledges it has broad authority to set standards for aviation security, including the authority to require that a given percentage of inbound cargo be screened before it departs for the United States, TSA officials caution that if TSA were to impose a strict cargo screening standard on all inbound cargo, many nations likely would be unable to meet such standards in the near term. This raises the prospect of substantially reducing the flow of cargo on passenger aircraft or possibly eliminating it altogether. According to TSA, the effect of imposing such screening standards in the near future would be, at minimum, increased costs for international passenger travel and for imported goods, and possible reduction in passenger traffic and foreign imports. According to TSA officials, this could also undermine TSA’s ongoing cooperative efforts to develop commensurate security systems with international partners. TSA agreed that assessing the risk associated with the inbound air cargo transportation system will facilitate its efforts to harmonize security standards with other nations. Accordingly, TSA has identified the primary threats associated with inbound air cargo, but has not yet assessed which areas of inbound air cargo are most vulnerable to attack and which inbound air cargo assets are deemed most critical to protect. 
Although TSA agreed with our previous recommendation to assess inbound air cargo vulnerabilities and critical assets, it has not yet established a methodology or time frame for how and when these assessments will be completed. We continue to believe that the completion of these assessments is important to the security of inbound air cargo. Finally, the amount of resources TSA devotes to inbound compliance is disproportionate to the resources for domestic compliance. In April 2007, we reported that TSA inspects air carriers at foreign airports to assess whether they are complying with air cargo security requirements, but does not inspect all air carriers transporting cargo into the United States. Furthermore, in fiscal year 2008, inbound cargo inspections were performed by a cadre of 9 international TSIs with limited resources, compared to the 475 TSIs that performed domestic cargo inspections. By mid-fiscal year 2008, international compliance inspections accounted for a small percentage of all compliance inspections performed by TSA, although inbound cargo made up more than 40 percent of all cargo on passenger aircraft in 2007. Regarding inbound cargo, we reported in May 2008 that TSA lacks an inspection plan with performance goals and measures for its international inspection efforts, and recommended that TSA develop such a plan. TSA officials stated in February 2009 that they are in the process of completing a plan to provide guidance for inspectors conducting compliance inspections at foreign airports, and intend to implement the plan during fiscal year 2009. Finally, TSA officials stated that the number of international TSIs needs to be increased.

Madam Chairwoman, this concludes my statement. I look forward to answering any questions that you or other members of the subcommittee may have at this time. For questions about this statement, please contact Stephen M. Lord at (202) 512-4379 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Steve D. Morris, Assistant Director; Scott M. Behen; Glenn G. Davis; Elke Kolodinski; Stanley J. Kostyla; Thomas Lombardi; Linda S. Miller; Yanina Golburt Samuels; Daren K. Sweeney; and Rebecca Kuhlmann Taylor. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Implementing Recommendations of the 9/11 Commission Act of 2007 mandates the Department of Homeland Security (DHS) to establish a system to physically screen 50 percent of cargo transported on passenger aircraft by February 2009 and 100 percent of such cargo by August 2010. This testimony provides preliminary observations on the Transportation Security Administration's (TSA) progress in meeting the mandate to screen cargo on passenger aircraft and the challenges TSA and industry stakeholders may face in screening such cargo. GAO's testimony is based on products issued from October 2005 through August 2008, and its ongoing review of air cargo security. GAO reviewed TSA's air cargo security programs, interviewed program officials and industry representatives, and visited two large U.S. airports. TSA has made progress in meeting the air cargo screening mandate as it applies to domestic cargo. TSA has taken steps that will allow screening responsibilities to be shared across the air cargo supply chain--including TSA, air carriers, freight forwarders (which consolidate cargo from shippers and take it to air carriers for transport), and shippers--although air carriers have the ultimate responsibility for ensuring that they transport cargo screened at the requisite levels. TSA has taken several key steps to meet the mandate, including establishing a new requirement for 100 percent screening of cargo transported on narrow-body aircraft; revising or eliminating most screening exemptions for domestic cargo; creating the Certified Cargo Screening Program (CCSP) to allow screening to take place at various points in the air cargo supply chain; and establishing a screening technology pilot. Although TSA estimates that it achieved the mandated 50 percent screening level by February 2009 as it applies to domestic cargo, the agency cannot yet verify that the requisite levels of cargo are being screened. It is working to establish a system to do so by April 2009. 
Also, TSA's screening approach could result in variable percentages of screened cargo on passenger flights. TSA and industry stakeholders may face a number of challenges in meeting the screening mandate, including attracting participants to the CCSP, and technology, oversight, and inbound cargo challenges. TSA's approach relies on the voluntary participation of shippers and freight forwarders, but it is unclear whether the facilities needed to meet TSA's screening estimates will join the CCSP. In addition, TSA has taken some steps to develop and test technologies for screening air cargo, but the agency has not yet completed assessments of these technologies and cannot be assured that they are effective in the cargo environment. TSA's limited inspection resources may also hamper its ability to oversee the thousands of additional entities that it expects to participate in the CCSP. Finally, TSA does not expect to meet the mandated 100 percent screening deadline as it applies to inbound air cargo, in part due to existing inbound screening exemptions and challenges it faces in harmonizing security standards with other nations.
The 2005 National Defense Strategy provided the strategic foundation for the 2006 QDR and identified an array of traditional, irregular, catastrophic, and disruptive challenges that threaten U.S. interests. To operationalize the defense strategy, the 2006 QDR identified four priority areas for further examination: defeating terrorist networks, defending the homeland, shaping the choices of countries at strategic crossroads, and preventing hostile state and non-state actors from acquiring or using weapons of mass destruction. These areas illustrated the types of capabilities and forces needed to address the challenges identified in the defense strategy and helped DOD to assess that strategy and review its force planning construct. Changes in the security environment and the force planning construct may require DOD and the services to reassess force structure requirements—how many units and of what type are needed to carry out the national defense strategy. Likewise, changing force structure requirements may create a need to reassess active end strength—the number of military personnel, annually authorized by Congress, that each service can have at the end of a given fiscal year. The services allocate their congressionally authorized end strength among operational force requirements (e.g., combat and support force units), institutional requirements, and requirements for personnel who are temporarily unavailable for assignment. Operational forces are the forces the services provide to combatant commanders to meet mission requirements, such as ongoing operations in Iraq and Afghanistan. Institutional forces include command headquarters, doctrine writers, and a cadre of acquisition personnel, which are needed to prepare forces for combat operations. Personnel who are temporarily unavailable for assignment include transients, transfers, holdees, and students. 
The Secretary of Defense’s recent proposal to permanently increase the size of the Army and Marine Corps represents a significant shift in DOD’s plans, as reflected in the 2006 QDR. In fiscal year 2004, the Army’s authorized end strength was 482,400 active military personnel. Since that time, the Army has been granted authority to increase its end strength by 30,000 in order to provide flexibility to implement its transformation to a modular force while continuing to deploy forces to overseas operations. Rather than return the Army to the 482,400 level by fiscal year 2011, as decided in the 2006 QDR, DOD’s new proposal would increase the Army’s permanent end strength level to 547,000 over a period of 5 years. DOD’s plans for Marine Corps end strength have also changed. In fiscal year 2004, the Marine Corps was authorized to have 175,000 active military personnel. For the current fiscal year, Marine Corps end strength was authorized at 180,000, although DOD’s 2006 QDR planned to stabilize the Marine Corps at the 175,000 level by fiscal year 2011. The Secretary’s new proposal would increase permanent Marine Corps end strength to a level of 202,000 over the next 5 years. In terms of funding, DOD is currently authorized to pay for end strength above 482,400 and 175,000 for the Army and Marine Corps, respectively, with emergency contingency reserve funds or from supplemental funds used to finance operations in Iraq and Afghanistan. Until Congress receives the President’s budget request, it is unclear how DOD plans to fund the proposed increases. In 2004, the Army began its modular force transformation to restructure itself from a division-based force to a modular brigade-based force—an undertaking it considers the most extensive reorganization of its force since World War II. This initiative, according to Army estimates, will require an investment exceeding $52 billion through fiscal year 2011. 
The foundation of the modular force is the creation of standardized modular combat brigades in both the active component and National Guard. The new modular brigades are designed to be self-sufficient units that are more rapidly deployable and better able to conduct joint and expeditionary operations than their larger division-based predecessors. The Army planned to achieve its modular restructuring without permanently increasing its active component end strength above 482,400, in accordance with a decision reached during the 2006 QDR. The February 2006 QDR also specified that the Army would create 70 modular combat brigades across its active component and National Guard. According to the Army, the modular force will enable it to generate both active and reserve component forces in a rotational manner. To do this, the Army is developing plans for a force rotation model in which units will rotate through a structured progression of increased unit readiness over time. For example, the Army’s plan is for active service members to be at home for 2 years following each deployment of up to 1 year. Both OSD and the military services play key roles in determining force structure and military personnel requirements and rely on a number of complex and interrelated processes and analyses—rather than on one clearly documented process. Decisions made by OSD can have a ripple effect on the analyses that are conducted at the service level, as I will explain later. OSD is responsible for the QDR—the official strategic plan of DOD—and provides policy and budget guidance to the services on the number of active personnel. The QDR determines the size of each service’s operational combat forces based on an analysis of the forces needed to meet the requirements of the National Defense Strategy. 
For example, the 2006 QDR specified that the Army’s operational force structure would include 42 active Brigade Combat Teams and that the Army should plan for an active force totaling 482,400 personnel by fiscal year 2011. The QDR also directed the Marine Corps to add a Special Operations Command and plan for an active force totaling 175,000 military personnel by fiscal year 2011. In order to provide a military perspective on the decisions reached in the QDR, Congress required the Chairman of the Joint Chiefs of Staff to conduct an assessment of the review, including an assessment of risk. In his assessment of the 2006 QDR, the Chairman concluded that the Armed Forces of the United States stood “fully capable of accomplishing all the objectives of the National Defense Strategy.” In his risk assessment, he noted that the review had carefully balanced those areas where risk might best be taken in order to provide the needed resources for areas requiring new or additional investment. Another area in which the Office of the Secretary of Defense plays an important role is the development of various planning scenarios that describe the types of missions the services may face in the future. These scenarios are included in planning guidance that the Office of the Secretary of Defense provides the services to assist them in making more specific decisions on how to allocate their resources to accomplish the national defense strategy. For example, these scenarios would include major combat operations, stability operations, domestic support operations, and humanitarian assistance operations. GAO examined whether DOD had established a solid foundation for determining military personnel requirements. While we are currently reviewing the plans and analyses of the 2006 QDR, our February 2005 assessment of DOD’s processes identified several concerns that, if left uncorrected, could have hampered DOD’s QDR analysis. 
First, we found that DOD had not conducted a comprehensive, data-driven analysis to assess the number of active personnel needed to implement the defense strategy. Second, OSD did not specifically review the services’ requirements processes to ensure that decisions about personnel levels were linked to the defense strategy. Last, a key reason why OSD had not conducted a comprehensive analysis was that it had sought to limit personnel costs to fund competing priorities such as transformation. The Army uses a biennial, scenario-based requirements process to estimate the number and types of operational and institutional forces needed to execute Army missions. This process involves first determining the number and type of forces needed to execute the National Military Strategy based on OSD guidance, then comparing this requirement with the Army’s present force structure, and finally reallocating military positions to minimize the risks associated with any identified shortfalls. Together, these steps are known as “Total Army Analysis.” The Army has conducted this analysis for many years and has a good track record of updating and improving its modeling, assumptions, and related processes over time, as we noted in our last detailed analysis of the Army’s process. Active and reserve forces are included in this analysis in order to provide a total look at Army requirements and the total end strength available to resource those requirements. Also, the number of combat brigades is specified by OSD in the QDR and is essentially a “given” in the Army’s requirements process. The Army’s process is primarily intended to assess the support force structure needed to meet the requirements of the planning scenarios, and to provide senior leadership a basis for better balancing the force. 
In the first phase of the Army’s requirements determination process, the Army uses several models, such as a model to simulate warfights using the scenarios provided by OSD and the number of brigades OSD plans to use in each scenario. The outcomes of the warfighting analyses are used in further modeling to generate the number and specific types of units the Army would need to support its brigade combat teams. For example, for a prolonged major war, the Army needs more truck companies to deliver supplies and fuel to front-line units. For other types of contingencies, the Army may use historical experience for similar operations to determine requirements, since models are not available for all contingencies. The Army also examines requirements for the Army’s institutional force, such as the Army’s training base, in order to obtain a total bottom-line requirement for all the Army’s missions. In the second phase of the Army’s process—known as the “resourcing phase”—the Army makes decisions on how to best allocate the active and reserve military personnel levels established by OSD against these requirements. To provide the best match, the Army develops a detailed plan to eliminate lower-priority units and stand up new units it believes are essential to address capability gaps and meet future mission needs. For example, in recent years, the Army has made an effort to increase units needed for military police, civil affairs, engineers, and special operations forces. These decisions are implemented over the multiple years contained in DOD’s Future Years Defense Program. For example, the Army’s most recent analysis was used to help develop the Army’s budget plans for fiscal years 2008–2013. Historically, the Army has had some mismatches between the requirements it estimated and its available military personnel levels. 
When shortfalls exist, the Army has chosen to fully resource its active combat brigades and accept risk among its support units, many of which are in the reserve component. The Marine Corps uses a number of models, simulations, spreadsheet analyses, and other analytical tools to periodically identify gaps in its capabilities to perform its missions, and to identify the personnel and skills needed to provide those capabilities, based largely on the professional judgment of manpower experts and subject-matter experts. The results are summarized in a manpower document, updated as needed, which is used to allocate positions and personnel to meet mission priorities. This document is an assessment of what core capabilities can and cannot be supported within the authorized end strength, which may fall short of actual personnel requirements. For example, in 2005, we reported that the Marine Corps analysis for fiscal year 2004 indicated that to execute its assigned missions, the Corps would need about 9,600 more personnel than it had on hand. At the time of this analysis, OSD fiscal guidance directed the Marine Corps to plan for an end strength of 175,000 in the service’s budget submission. Both the Army and Marine Corps are coping with additional demands that may not have been fully reflected in the QDR or their own service requirements analyses, which have been based on OSD guidance. While the Army’s most recent analysis, completed in 2006, concluded that Army requirements were about equal to available forces for all three components, this analysis did not fully reflect the effects of establishing modular brigades—the most significant restructuring of the Army’s operational force structure since World War II. In addition, OSD-directed planning scenarios used by the Army in its analysis to help assess rotational demands on the force may not have reflected real-world conditions such as those in Iraq and Afghanistan. 
While the requirements phase of the Army’s analysis, completed in the mid-2005 time frame, may have been based on the best information available at the time, the Army’s transformation concepts have continued to evolve, and the mismatch between the planning scenarios and actual operations is now more apparent. The Marine Corps is also undergoing a high pace of operations, as well as QDR-directed changes in force structure. This has caused the Marine Corps to initiate a new post-QDR analysis of its force structure requirements. The service analyses I am about to discuss all preceded the Secretary of Defense’s announcement of his intention to increase Army and Marine Corps end strength by 92,000. However, some of GAO’s reporting on these plans is only months old and should provide a baseline to help the committee understand what has changed since the 2006 QDR and service analyses were completed. The Army’s most recent requirements analysis, which examined force structure and military personnel needs for fiscal years 2008 through 2013, considered some of the Army’s new transformation concepts, such as the Army’s conversion to a modular force. However, the Army did not consider the full impact of this transformation in its analysis, in part because these initiatives were relatively new at the time the analysis was conducted. The Army’s recent analysis recognized that modular units would require greater numbers of combat forces than its prior division-based force and assumed this could be accomplished by reducing military positions in the Army’s institutional forces, such as its command headquarters and training base, rather than increasing the size of the active force. However, in our September 2006 report discussing the Army’s progress in converting to a modular force, we questioned whether the Army could meet all of its modular force requirements with a QDR-directed active component end strength of 482,400. 
Under its new modular force, the Army will have 42 active brigades compared with 33 brigades in its division-based force. In total, these brigades will require more military personnel than the Army’s prior division-based combat units. Therefore, as figure 1 illustrates, the Army planned to increase its active component operational force—that is, its combat forces—from 315,000 to 355,000 personnel to fully staff 42 active modular brigade combat teams. To accomplish this with an active end strength level of 482,400, the Army had hoped to substantially reduce the size of its active component institutional force—that is, its training base, acquisition workforce, and major command headquarters—from 102,000 to 75,000 military personnel. The Army also planned to reduce the number of active Army personnel in a temporary status at any given time, known as transients, transfers, holdees, and students, or TTHS. In fiscal year 2000, this portion of the force consisted of 63,000 active military personnel and the Army planned to reduce its size to 52,400. While the Army has several initiatives under way to reduce active military personnel in its institutional force, our report questioned whether those initiatives could be fully achieved as planned. The Army had made some progress in converting some military positions to civilian positions, but Army officials believed additional conversions to achieve planned reductions in the noncombat force would be significantly more challenging to achieve and could lead to difficult trade-offs. In addition, cutting the institutional force at a time when the Army is fully engaged in training forces for overseas operations may entail additional risk not fully anticipated at the time the initiatives were proposed. 
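The planned reallocation described above can be checked with simple arithmetic: the Army's planned operational, institutional, and TTHS figures sum exactly to the 482,400 ceiling, which is why any shortfall in the planned institutional or TTHS reductions would force trade-offs elsewhere in the force. A minimal check using the figures reported above:

```python
# Planned allocation of the Army's 482,400 active end strength under the
# modular force plan (component figures as reported in the text above).
ceiling = 482_400
planned = {
    "operational force": 355_000,   # 42 active modular brigade combat teams plus support
    "institutional force": 75_000,  # training base, acquisition workforce, headquarters
    "TTHS": 52_400,                 # transients, transfers, holdees, and students
}

total = sum(planned.values())
print(f"Planned total: {total:,} of {ceiling:,}; exact fit: {total == ceiling}")
# Planned total: 482,400 of 482,400; exact fit: True
```

Because the plan has no slack, missing the institutional-force target of 75,000 by any amount would leave the 355,000-person operational force understaffed unless end strength rose above 482,400.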
Last, we noted that the Army is still assessing its modular unit concepts based on lessons learned from operations in Iraq and Afghanistan and other analyses led by the Army’s Training and Doctrine Command, and the impact on force structure requirements may not yet be fully known. We should note that, at the time of our report, the Army did not agree with our assessment of its personnel initiatives. In written comments on a draft of our report, the Army disagreed with our analysis of the challenges it faced in implementing its initiatives to increase the size of the operational force within existing end strength, noting that GAO focused inappropriate attention on these challenges. The Army only partially concurred with our recommendation that the Secretary of the Army develop and provide the Secretary of Defense and Congress with a report on the status of its personnel initiatives, including executable milestones for realigning and reducing its noncombat forces. The Army stated that this action was already occurring on a regular basis and another report on this issue would be duplicative and irrelevant. However, the reports the Army cited in its response were internal to the Army and the Office of the Secretary of Defense. The comments did not address the oversight needs of Congress. We stated in our report that we believe that it is important for the Secretary of Defense and Congress to have a clear and transparent picture of the personnel challenges the Army faces in order to fully achieve the goals of modular restructuring and make informed decisions on resources and authorized end strength. Another factor to consider is that the Army’s most recent requirements analysis was linked to planning scenarios directed by OSD and did not fully reflect current operational demands for Iraq and Afghanistan. 
The Army’s analysis assumed that the Army would be able to provide 18 to 19 brigades at any one time (including 14 active and 4 to 5 National Guard brigades) to support worldwide operations. However, the Army’s global operational demand for forces is currently 23 brigades, and Army officials believe this demand will continue for the foreseeable future. The Marine Corps is also experiencing new missions and demands as a result of the Global War on Terrorism. However, these new requirements do not appear to have been fully addressed in its requirements analyses. The following is a summary of the principal analyses that were undertaken to address Marine Corps force structure personnel requirements in the past few years. In 2004, the Marine Corps established a Force Structure Review Group to evaluate which active and reserve capabilities needed to be created, reduced, or deactivated in light of personnel tempo trends and the types of units in high demand since the start of the Global War on Terrorism. As a result of this review, the Marine Corps approved a number of force structure changes, including increases in the active component’s infantry, reconnaissance, and gunfire capabilities and decreases in the active component’s small-craft company and low-altitude air defense positions. However, this review was based on the assumption that the Marine Corps would need to plan for an end strength of 175,000 active personnel. As a result, the Marine Corps’ 2004 review focused mostly on rebalancing the core capabilities of the active and reserve components rather than on a bottom-up review of total personnel requirements to meet all 21st century challenges. In March 2006, shortly after the QDR was issued, the Marine Corps Commandant formed a Capabilities Assessment Group to assess requirements for active Marine Corps military personnel. 
One of the group’s key tasks was to determine how and whether the Marine Corps could meet its ongoing requirements while supporting the QDR decision to establish a 2,600-person Marine Corps Special Operations Command. The group was also charged with identifying what core capabilities could be provided at a higher end strength level of 180,000. We received some initial briefings on the scope of the group’s work, and Marine Corps officials told us that the group completed its analysis in June 2006. However, as of September 2006, when we completed our work, the Marine Corps had not released the results of its analysis. Therefore, it is not clear whether and to what extent this review formed the basis for the President’s recent announcement to permanently expand the size of the active Marine Corps. Our past work has also shown that DOD has not provided a clear and transparent basis for its military personnel requests to Congress—a basis that demonstrates how the requests are linked to the military strategy. Ensuring that the department provides a sound basis for military personnel requests and can demonstrate how they are linked to the military strategy will become increasingly important as Congress confronts looming fiscal challenges facing the nation. During the next decade, Congress will be faced with making difficult trade-offs among defense and nondefense-related spending. Within the defense budget, it will need to allocate resources among the services and their respective personnel, operations, and investment accounts while faced with multiple competing priorities for funds. DOD will need to ensure that each service manages personnel levels efficiently since personnel costs have been rising significantly over the past decade. We have previously reported that the average cost of compensation (including cash, non-cash, and deferred benefits) for enlisted members and officers was about $112,000 in fiscal year 2004. 
The growth in military personnel costs has been fueled in part by increases in basic pay, housing allowances, recruitment and retention bonuses, incentive pay and allowances, and other special pay. Furthermore, DOD’s costs to provide benefits, such as health care, have continued to spiral upward. As noted earlier, we have found that valid and reliable data about the number of personnel required to meet an agency’s needs are critical because human capital shortfalls can threaten an agency’s ability to perform its missions efficiently and effectively. Data-driven decisionmaking is one of the critical factors in successful strategic workforce management. High-performing organizations routinely use current, valid, and reliable data to inform decisions about current and future workforce needs. In addition, they stay alert to emerging mission demands and remain open to reevaluating their human capital practices. In addition, federal agencies have a responsibility to provide sufficient transparency over significant decisions affecting requirements for federal dollars so that Congress can effectively evaluate the benefits, costs, and risks. DOD’s record in providing a transparent basis for requested military personnel levels can be improved. We previously reported that DOD’s annual report to Congress on manpower requirements for fiscal year 2005 broadly stated a justification for DOD’s requested active military personnel, but did not provide specific analyses to support the justification. In addition, DOD’s 2006 QDR report did not provide significant insight into the basis for its conclusion that the size of today’s forces—both the active and reserve components across all four military services—is appropriate to meet current and projected operational demands. 
Moreover, the Marine Corps’ decision to initiate a new study to assess active military personnel requirements shortly after the 2006 QDR was completed is an indication that the QDR did not achieve consensus on required end strength levels. In evaluating DOD’s proposal to permanently increase active Army and Marine Corps personnel levels by 92,000 over the next 5 years, Congress should carefully weigh the long-term costs and benefits. It is clear that Army and Marine Corps forces are experiencing a high pace of operations due to both the war in Iraq and broader demands imposed by the Global War on Terrorism, which may provide a basis for DOD to consider permanent increases in military personnel levels. However, it is also clear that increasing personnel levels will entail significant costs that must be weighed against other priorities. The Army has previously stated that it costs about $1.2 billion per year to increase active military personnel levels by 10,000. Moreover, equipping and training new units will require billions of additional dollars for startup and recurring costs. DOD has not yet provided a detailed analysis to support its proposal to permanently increase the size of the Army and Marine Corps. Given the significant implications for the nation’s ability to carry out its defense strategy, along with the significant costs involved, additional information will be needed to fully evaluate the Secretary of Defense’s proposal. To help illuminate the basis for its request, DOD will need to provide answers to the following questions: What analysis has been done to demonstrate how the proposed increases are linked to the defense strategy? To what extent are the proposed military personnel increases based on supporting operations in Iraq versus an assessment of longer-term requirements? How will the additional personnel be allocated to combat units, support forces, and institutional personnel, for functions such as training and acquisition? 
What are the initial and long-term costs to increase the size of the force and how does the department plan to fund this increase? Do the services have detailed implementation plans to manage potential challenges such as recruiting additional personnel, providing facilities, and procuring new equipment? Our prior work on recruiting and retention challenges, along with our prior reports on challenges in equipping modular units, identifies some potential challenges that could arise in implementing an increase in the size of the Army and Marine Corps at a time when the services are supporting ongoing operations in Iraq and Afghanistan. For example, we have reported that 19 percent of DOD’s occupational specialties for enlisted personnel were consistently overfilled while other occupational specialties were underfilled by 41 percent for fiscal years 2000 through 2005. In addition, we have reported that the Army is experiencing numerous challenges in equipping modular brigades on schedule, in part due to the demands associated with meeting the equipment needs of units deploying overseas. Such challenges will need to be carefully managed if Congress approves the Secretary of Defense’s proposal. Mr. Chairman and members of the subcommittee, this concludes my prepared remarks. I would be happy to answer any questions you may have. For questions about this statement, please contact Janet St. Laurent at (202) 512-4402. Other individuals making key contributions to this statement include: Gwendolyn Jaffe, Assistant Director; Kelly Baumgartner; J. Andrew Walker; Margaret Morgan; Deborah Colantonio; Harold Reich; Aisha Cabrer; Susan Ditto; Julie Matta; and Terry Richardson. Force Structure: Army Support Forces Can Meet Two-Conflict Strategy With Some Risks. GAO/NSIAD-97-66. Washington, D.C.: Feb. 28, 1997. Force Structure: Army’s Efforts to Improve Efficiency of Institutional Forces Have Produced Few Results. GAO/NSIAD-98-65. Washington, D.C.: Feb. 26, 1998. 
Force Structure: Opportunities Exist for the Army to Reduce Risk in Executing the Military Strategy. GAO/NSIAD-99-47. Washington, D.C.: Mar. 15, 1999. Force Structure: Army is Integrating Active and Reserve Combat Forces, but Challenges Remain. GAO/NSIAD-00-162. Washington, D.C.: July 18, 2000. Force Structure: Army Lacks Units Needed for Extended Contingency Operations. GAO-01-198. Washington, D.C.: Feb. 15, 2001. Force Structure: Projected Requirements for Some Army Forces Not Well Established. GAO-01-485. Washington, D.C.: May 11, 2001. Military Personnel: DOD Needs to Conduct a Data-Driven Analysis of Active Military Personnel Levels Required to Implement the Defense Strategy. GAO-05-200. Washington, D.C.: Feb. 1, 2005. Force Structure: Preliminary Observations on Army Plans to Implement and Fund Modular Forces. GAO-05-443T. Washington, D.C.: Mar. 16, 2005. Force Structure: Actions Needed to Improve Estimates and Oversight of Costs for Transforming Army to a Modular Force. GAO-05-926. Washington, D.C.: Sep. 29, 2005. Force Structure: Capabilities and Cost of Army Modular Force Remain Uncertain. GAO-06-548T. Washington, D.C.: Apr. 4, 2006. Force Structure: DOD Needs to Integrate Data into Its Force Identification Process and Examine Options to Meet Requirements for High-Demand Support Forces. GAO-06-962. Washington, D.C.: Sep. 5, 2006. Force Structure: Army Needs to Provide DOD and Congress More Visibility Regarding Modular Force Capabilities and Implementation Plans. GAO-06-745. Washington, D.C.: Sep. 6, 2006. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The war in Iraq, along with other overseas operations, has led to significant stress on U.S. ground forces and raised questions about whether those forces are appropriately sized and structured. In 2005, the Department of Defense (DOD) agreed with GAO's recommendation that it review military personnel requirements. The Office of the Secretary of Defense (OSD) concluded in its 2006 Quadrennial Defense Review (QDR) that the number of active personnel in the Army and Marine Corps should not change. However, the Secretary of Defense recently announced plans to increase these services' active end strength by 92,000 troops. Given the long-term costs associated with this increase, it is important that Congress understand how DOD determines military personnel requirements and the extent of its analysis. GAO has issued a number of reports on DOD's force structure and the impact of ongoing operations on military personnel, equipment, training, and related funding. This statement, which draws on that prior work, focuses on (1) the processes and analyses OSD and the services use to assess force structure and military personnel levels; (2) the extent to which the services' requirements analyses reflect new demands as a result of the changed security environment; and (3) the extent of information DOD has provided to Congress to support requests for military personnel. Both OSD and the military services play key roles in determining force structure and military personnel requirements and rely on a number of complex and interrelated analyses. Decisions reached by OSD during the QDR and the budget process about planning scenarios, required combat forces, and military personnel levels set the parameters within which the services can determine their own requirements for units and allocate military positions. 
Using OSD guidance and scenarios, the Army's most recent biennial analysis, completed in 2006, indicated that the Army's total requirements and available end strength were about equal. The Marine Corps' most recent assessment led to an adjustment in the composition and mix of its units. Both the Army and Marine Corps are coping with additional demands that may not have been fully reflected in OSD guidance, the QDR, or in recent service analyses. First, the Army's analysis did not fully consider the impact of converting from a division-based force to modular units, partly because modular units are a new concept and partly because the Army made some optimistic assumptions about its ability to achieve efficiencies and staff modular units within the QDR-directed active military personnel level of 482,400. Second, the Army's analysis assumed that the Army would be able to provide 18 to 19 brigades at any one time to support worldwide operations. However, the Army's global operational demand for forces is currently 23 brigades and Army officials believe this demand will continue for the foreseeable future. The Marine Corps' analyses reflected some new missions resulting from the new security environment. However, the Commandant initiated a new study following the 2006 QDR partly to assess the impact of requirements for a Special Operations Command. Prior GAO work has shown that DOD has not provided a clear and transparent basis for military personnel requests that demonstrates how they are linked to the defense strategy. GAO believes it will become increasingly important to demonstrate a clear linkage as Congress confronts looming fiscal challenges facing the nation and DOD attempts to balance competing priorities for resources. In evaluating DOD's proposal to permanently increase active Army and Marine Corps personnel levels by 92,000 over the next 5 years, Congress should carefully weigh the long-term costs and benefits. 
To help illuminate the basis for its request, DOD will need to provide answers to the following questions: What analysis has been done to demonstrate how the proposed increases are linked to the defense strategy? How will the additional personnel be allocated to combat units, support forces, and institutional personnel, for functions such as training and acquisition? What are the initial and long-term costs to increase the size of the force and how does DOD plan to fund this increase? Do the services have detailed implementation plans to manage potential challenges such as recruiting additional personnel, providing facilities, and procuring new equipment?
NNSA—a separately organized agency within DOE—has primary responsibility for ensuring the safety, security, and reliability of the nation’s nuclear weapons stockpile. NNSA carries out these activities at eight government-owned, contractor-operated sites: three national laboratories, four production plants, and one test site (see fig. 1). These sites, taken together, have been a significant component of U.S. national security since the 1940s. Contractors operate these sites under management and operations (M&O) contracts. These contracts provide the contractor with broad discretion in carrying out the mission of the particular contract, but grant the government the option to become much more directly involved in day-to-day M&O. Currently, NNSA’s workforce is made up of about 34,000 M&O contractor employees across the eight sites, and about 2,400 federal employees directly employed by NNSA in its Washington headquarters, at offices located at each of the eight sites, and at its Albuquerque, New Mexico, complex. Of the 71 EM and NNSA nonmajor projects we reviewed that were completed or ongoing for fiscal years 2008 to 2012, we were able to determine performance for 44 projects. Among these 44 projects, 21 have met or are expected to meet all three of their performance targets for the scope of work delivered, cost, and completion date, while 23 have not met or are not expected to meet one or more of their three targets. The remaining 27 of the 71 projects we reviewed had insufficiently documented performance targets or had modified scope targets, among other things, which prevented us from determining whether they met or were expected to meet their performance targets, according to our analysis of DOE data. Determining whether projects fully met or partially met performance targets was difficult because EM and NNSA did not always follow DOE requirements for documenting these targets. 
DOE has taken steps to ensure that EM and NNSA more clearly document performance targets for their projects, but some problems persist. Of the 71 nonmajor projects we reviewed, 44 projects—17 EM projects and 27 NNSA projects—had documented targets for scope, cost, and completion date, enabling us to determine their performance. Table 2 shows the expected or completed performance of these 44 EM and NNSA nonmajor projects. As the table shows, of the 44 projects for which we were able to determine performance, 21 projects met or are expected to meet their performance targets for scope, cost, and completion date. Specifically, 17 completed projects—5 EM and 12 NNSA projects—met all three of their performance targets. These projects included a $22 million EM project to expand an existing waste disposal facility at the Oak Ridge Reservation in Tennessee and a $469 million NNSA project to construct chemical, electrical, and other laboratories and workspaces at the Sandia National Laboratories in New Mexico. In addition, as of August 29, 2012, 4 ongoing projects—1 EM and 3 NNSA projects—were expected to meet all three of their performance targets, according to DOE estimates. These projects included a $77 million EM project to construct two disposal units for storing waste at the Savannah River Site in South Carolina and a $199 million NNSA project to equip the Radiological Laboratory/Utility/Office Building at the Los Alamos National Laboratory in New Mexico to make it suitable for performing programmatic work. Table 2 also shows that 23 EM and NNSA projects did not meet or are not expected to meet one or more of their three performance targets. Of these 23 projects, 13 projects met or are expected to meet two of their three performance targets, and 8 met or are expected to meet one of their three performance targets (see apps. II and III for more details). In addition, one project did not meet any of its performance targets.
Specifically, EM’s project to decontaminate and decommission the Main Plant Process Building in West Valley, New York, did not complete all of its planned scope of work when it was completed in October 2011, almost 4 months after its completion date target and more than $50 million over its cost target of $46 million. EM cancelled the remaining project—the Uranium-233 Disposition project, at the Oak Ridge Reservation in Tennessee—in December 2011 after spending approximately $225 million. (See apps. II and III for more details.) In assessing whether projects had achieved their scope, cost, and completion date targets, we followed DOE and Office of Management and Budget performance metrics. For the scope, DOE states that projects must be completed within their original scope target. For cost targets, Office of Management and Budget guidance and DOE performance metrics regard projects completed at less than 10 percent above their original cost targets as having achieved satisfactory performance. Regarding completion date, DOE’s performance metrics do not address targets for completion. However, because Office of Management and Budget guidance includes performance standards for project schedule, we considered projects to be on time if they were or are expected to be completed at less than 10 percent past their original completion date targets. We encountered two major problems in assessing the performance of the 44 projects described above, which made it more difficult for us and DOE to independently assess project performance. First, EM and NNSA did not consistently follow DOE requirements for documenting scope targets and tracking these targets using DOE’s performance database. Second, EM did not always establish credible completion date targets or conduct required independent reviews when it restructured its PBSs in 2010. Establishing a clearly defined target for scope is critical for an agency to accurately track and assess a project’s overall performance.
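The assessment rules described above (original scope delivered in full, and cost and schedule within 10 percent of target) can be expressed in a few lines. This is a hedged sketch, not DOE's actual tooling: the function names are invented, and it assumes schedule overrun is measured against planned duration in days.

```python
# Minimal sketch of the OMB/DOE performance thresholds described above.
# All names are illustrative; schedule is assumed to be measured as
# planned duration versus actual duration, in days.

def meets_cost_target(target_cost, final_cost):
    """Satisfactory if final cost is less than 10 percent above target."""
    return final_cost < target_cost * 1.10

def meets_schedule_target(planned_days, actual_days):
    """On time if finished at less than 10 percent past the planned duration."""
    return actual_days < planned_days * 1.10

def assess(scope_delivered, target_cost, final_cost, planned_days, actual_days):
    """Report which of the three performance targets a project met."""
    return {
        "scope": scope_delivered,  # DOE: original scope must be completed in full
        "cost": meets_cost_target(target_cost, final_cost),
        "schedule": meets_schedule_target(planned_days, actual_days),
    }

# A project 8 percent over cost, on time, and in scope meets all three tests:
result = assess(True, 100.0, 108.0, 365, 360)
```

Note the asymmetry the report implies: cost and schedule each tolerate up to a 10 percent overrun, while scope is pass/fail, which is why an unclear scope target undermines the whole assessment.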
In particular, a project’s scope of work directly affects estimates of the project’s cost and completion date. If the scope target is too broad or vaguely stated, it can be difficult to track whether or to what extent certain aspects of project scope were reduced or eliminated between CD-2 (when the baseline was established) and CD-4 (when the project was completed), potentially affecting cost and completion date targets. Accordingly, since 2003, DOE Order 413.3 has required that information on scope targets be documented in a project execution plan as part of CD-2. The current order also requires a project’s acquisition executive to sign a memorandum approving CD-2 that contains this information. In addition, the order requires that information on scope targets, as well as other critical performance information, be entered into DOE’s centralized database on project performance—the Project Assessment and Reporting System (PARS). This database, which is administered by OAPM, is used to track and report on project performance. However, EM and NNSA did not always follow the order’s requirement on documenting scope targets, and we encountered the following problems when we attempted to identify scope targets for EM and NNSA projects: Key project documents associated with CD-2 and identified in Order 413.3—project execution plans and CD-2 approval memorandums—often did not contain information on scope targets. Instead, we had to obtain and review a variety of other project documents to try to locate this information. For example, we obtained information on scope targets for several EM and NNSA projects from briefing slides (i.e., a PowerPoint presentation) prepared for a DOE advisory board involved in reviewing and approving projects.
We also used independent project review reports, documents describing the functional and operational requirements of the projects, and contractor documents that provided detailed descriptions of a project’s scope of work, among other documents. (See app. I for more details on our scope and methodology.) The additional project documents that EM and NNSA provided to verify scope targets were often dated several months (or more) before or after the approval of CD-2. Because these documents were often not contemporaneous with the date when CD-2 was approved, we had difficulty determining whether any scope targets had changed during the interval. For example, we obtained information on scope targets for two EM projects from a project execution plan that was signed and dated almost 9 months after the CD-2 approval memorandum had been signed. (If documents were not dated within 1 year of CD-2 approval, we did not consider them sufficient and reliable for purposes of determining scope targets.) EM and NNSA often did not clearly or uniformly identify scope targets in their documents, instead providing this information in a variety of ways. For example, an NNSA project provided this information in briefing slides for a project advisory board under the headings “programmatic requirements summary” and “physical design summary.” In cases where we were unable to clearly identify scope targets, we relied on EM and NNSA officials to identify the information that they considered to represent scope targets. OAPM has encountered similar problems in trying to identify scope targets for projects at CD-2 and track how well completed projects had met these targets at CD-4. Specifically, OAPM officials told us that DOE program offices may have documented a project’s scope targets in a variety of project documents, such as a project execution plan or an acquisition strategy plan, rather than in an approval memorandum at CD-2.
As a result, OAPM officials said they occasionally must reconstruct a project’s scope targets from other contemporaneous planning documents near the time of the CD-2 approval. In a few instances, they said that no audit trail exists to compare scope targets established at CD-2 with the scope targets cited in a project’s CD-4 approval memorandum. In addition, an OAPM official told us that OAPM has not completed the process of locating and entering these data into PARS and continues to work with EM and NNSA to reconstruct project scope targets near the time of CD-2 approval. We found two problems with the performance baselines of many projects that EM restructured from its portfolio of PBS activities in 2009 and 2010. First, several projects did not have a credible completion date target. For example, EM’s April 2010 memorandum approving CD-2 for the Zone 1 Remedial Actions project, located in Oak Ridge, Tennessee, gave a target completion date of fiscal year 2017, which meant that the target completion date was at the end of the fiscal year (September 30, 2017), according to EM officials. However, EM approved the formal completion of this project, via a CD-4 approval memorandum, on September 30, 2011—6 years ahead of the completion date target identified in the CD-2 approval memorandum. In explaining this difference, EM officials stated that they had linked the target completion date for this project to the end of the existing PBS near-term baseline, which had been established before the restructuring process. According to EM officials, EM used this method because it already had a contract in place to conduct work activities associated with the PBS near-term baseline. Since the end of the PBS near-term baseline was to coincide with the end of the contract period, EM officials said that they did not think it would be appropriate to change the target completion dates. 
In addition, EM officials told us that they were more focused on finishing the scope of work for a given project within a specific dollar amount, and that it was not worth the additional expenditure of time and dollars to modify contracts to change the completion date. EM used this same methodology in establishing target completion dates for several other projects we examined. When we found that EM’s practice of establishing target completion dates did not provide a meaningful benchmark for assessing project performance, we had to locate additional documentation to establish a more credible completion date target. For example, for the Zone 1 Remedial Actions project, we reviewed additional documentation and decided that a more credible completion date target was December 15, 2011. As a result, for this and other projects, the completion date targets we used to evaluate the performance of some projects are different than the ones that DOE uses in its PARS database. Second, EM often established a new performance baseline and approved CD-2 for a project without having the baseline reviewed by an independent team of experts, as DOE Order 413.3 requires. Among other things, the review team is responsible for examining a project’s cost and completion date targets to ensure that they are credible and valid. However, EM did not conduct such reviews when it restructured some of the projects we examined. Instead, according to EM officials, EM relied on independent reviews conducted in the 2007 to 2008 time frame as part of the CD-2 approval process for PBS activities. In addition, EM officials said that it was not worth the additional expenditure of time and dollars to conduct new independent reviews for these projects. If EM had conducted new reviews as part of its restructuring process, it is possible that these reviews would have uncovered problems with some of the performance targets that EM had to correct later. 
Specifically, we identified two projects—the decontamination and decommissioning of the Paducah Gaseous Diffusion Plant in Paducah, Kentucky, and the Main Plant Process Building in West Valley, New York—for which EM had to significantly increase the cost targets it had approved 10 and 5 months earlier, respectively. In both cases, these cost increases were due to errors that EM project officials made in calculating total project cost. For example, for the Paducah Gaseous Diffusion Plant project, the project team did not incorporate additional project costs, including funds for contingencies and the contractor’s fee, into the project’s cost estimate. This omission resulted in underestimating the project’s cost target by about $8 million, or about 21 percent. In both cases, these errors increased the projects’ cost targets and caused EM to miss the original cost targets, according to our assessment of performance. We were unable to determine the extent to which 27 of the 71 nonmajor projects that EM and NNSA completed or had under way from fiscal year 2008 through fiscal year 2012 had met their scope, cost, and completion date targets for four reasons. First, EM and NNSA did not establish a performance baseline for eight projects. Second, EM and NNSA did not provide documentation that fully identified one or more performance targets—including targets for scope, cost, and completion date—for eight projects. Third, NNSA did not fully document a final project cost or a current completion date for three projects. Fourth, EM and NNSA modified the scope targets of eight projects after CD-2, rendering the original performance targets unusable for purposes of assessing performance. EM and NNSA did not establish a performance baseline for a total of 8 of the 71 nonmajor projects we reviewed that were completed or ongoing for fiscal years 2008 to 2012. Without a performance baseline, a project’s performance cannot be assessed. Specifically, we found the following:
EM did not establish a performance baseline for 6 of the 30 EM nonmajor projects we reviewed. According to EM documentation, when the office established near-term baselines for its PBS activities in the 2007 time frame, it decided that it would not establish a baseline for a few projects that were near completion or for which physical work was essentially complete, and remaining costs were low. Because EM had essentially completed all physical work before fiscal year 2008 on the 6 projects we identified, EM never established a performance baseline for these projects, according to EM officials. However, we included these projects in our review because EM did not formally approve the completion of these projects (via a CD-4 approval memorandum) until the 2010 to 2011 time frame, which meant that these projects would have been ongoing until that time. The combined cost of these 6 projects is approximately $1.5 billion. (See app. II for more details.) NNSA did not establish a performance baseline for 2 of the 41 NNSA nonmajor projects we reviewed. According to NNSA documents and project officials, after a May 2000 wildfire damaged lands and buildings at the Los Alamos National Laboratory, NNSA formally authorized two emergency recovery efforts in July 2000. Because this authorization was granted outside of the critical decision process, NNSA did not establish formal performance targets for these projects. The combined cost of these two projects is $145 million. (See app. III for more details.) For 8 of the 71 nonmajor construction projects we reviewed, EM and NNSA did not fully define and document one or more performance targets for scope, cost, and completion date when they established and approved performance baselines for these projects at CD-2.
Specifically, we found the following: For EM, 2 of the 30 EM nonmajor projects we reviewed did not have clearly defined and documented targets for scope and completion date, 1 project did not have a clearly defined and documented scope target, and 1 project did not have a clearly defined and documented completion date target. The combined cost of these 4 projects, 2 of which are ongoing, is estimated to be at least $182 million. (See app. II for more details.) For NNSA, 4 of the 41 NNSA nonmajor projects we reviewed did not have clearly defined and documented scope targets. The combined cost of these 4 projects, 2 of which are ongoing, is estimated to be $122 million. (See app. III for more details.) For 2 of the 71 nonmajor projects we reviewed, NNSA did not fully document the final project cost at CD-4. The final cost has not yet been settled for these 2 projects due to pending litigation with the contractor. NNSA estimated the combined cost of these 2 projects, both of which have been completed, to be $195 million. In addition, for the Nuclear Materials Safeguards and Security Upgrades, Phase II project at the Los Alamos National Laboratory, NNSA and contractor officials have determined that the project’s remaining construction costs will exceed the existing funds for the project and have halted work on the project. As a result, NNSA has not determined what the project’s revised completion date target will be. EM and NNSA modified the scope targets of 8 of the 71 projects we reviewed after approving them at CD-2. EM and NNSA used procedures to control and approve these modifications but did not establish new CD-2 performance targets. As a result, the scope modifications rendered the original CD-2 performance targets unusable for assessing project performance. 
We consider a project’s scope target to have been modified if, among other things, EM or NNSA increased the scope of the project after approving it at CD-2 or reduced the scope for programmatic reasons and provided a sound justification for this reduction. In contrast, if EM or NNSA reduced project scope solely to meet a project’s cost target, we did not consider the scope target to have been modified; rather, we considered the scope target not to have been met. (See apps. II and III for more information.) Projects for which EM or NNSA had modified the scope target sometimes exceeded expectations. For example, an NNSA project to build a highway at the Nevada National Security Site had an original scope target of 19.2 miles of highway and a cost target of about $14 million, but the project team completed an additional 12 miles of highway at an incremental cost of about $4 million, ahead of the original completion date target. However, because the scope target changed, the total cost of the project also changed, which made it unfair to judge the project’s performance against its original cost target. Table 3 provides an example of an NNSA project for which we consider the scope target to have been modified. We have previously reported on problems with the way DOE documents and tracks the scope of its projects, and DOE has taken actions to address this issue. In a 2008 report on DOE’s Office of Science, we noted concerns within DOE that projects sometimes had overly broad definitions of scope, making it difficult to determine the effects of a change in project scope. To address this issue, we recommended that DOE consider whether it could strengthen its project management guidance to help ensure that each project’s scope is clearly and sufficiently defined. DOE generally agreed with our recommendation and revised Order 413.3 in November 2010 to establish clearer requirements for identifying and documenting project scope at CDs 2 and 4.
Specifically, the revised order requires a project’s acquisition executive to clearly identify the scope target in the documentation approving CD-2. In the documentation approving CD-4, when a project is declared complete, this official must clearly identify the scope accomplished and compare this scope with the target established at CD-2. To determine whether NNSA and EM had improved their documentation of scope targets since DOE revised Order 413.3, we identified two nonmajor projects for which NNSA and EM established performance targets in 2011. These projects are NNSA’s Sanitary Effluent Reclamation Facility Expansion project at the Los Alamos National Laboratory and EM’s Purification Area Vault project at the Savannah River Site. In reviewing project documentation, we found that both NNSA and EM had provided information on targets for scope, cost, and completion date in their memorandums approving CD-2. In particular, NNSA’s approval memorandum identified the following scope targets: (1) expand the existing Sanitary Effluent Reclamation Facility capacity to treat 300 gallons per minute of product water in an 18-hour day; (2) provide a 400,000-gallon product water storage tank, which provides a consistent supply of water to the cooling towers in the event the facility is off-line for maintenance; and (3) provide additional evaporation capacity. These scope targets provide a quantitative measure of how the project is to perform at completion, as required by DOE’s order. However, we found problems with the way EM documented the scope target for its Purification Area Vault project. 
EM’s approval memorandum provided a high-level description of the project’s scope, stating that the project will modify an existing portion of the K-Area Complex at the Savannah River Site in South Carolina to accommodate a vault; implement passive and active fire protection features as identified in the project fire hazards analysis; and install a new heating, ventilation, and air conditioning system. However, the scope target cited in the approval memorandum did not provide sufficient detail for measuring scope performance at project completion and, therefore, it may be difficult for an independent reviewer to accurately assess project performance. Specifically, we found the following: The first part of the scope target—construct a secure storage location for holding at least 500 containers—provides a quantitative measure of how the project is to perform at completion, as required by DOE’s order. The second part of the scope target—attain CD-4 approval for storage of containers—does not provide a quantitative measure and instead reflects a stage in DOE’s critical decision framework. Therefore, only one part of the scope target can be used to independently measure project performance regarding scope. The scope target only captures some of the elements of scope contained in the high-level scope description. As a result, the effect of any changes to these other elements of scope on project performance is unclear. For example, if EM decided not to fully implement the fire protection features identified in its hazards analysis, it is not clear whether EM or an independent reviewer would consider the project to have met its scope target. Only the scope target—as opposed to the other elements of scope in the high-level scope description—is currently being tracked in DOE’s centralized database for project performance. 
Given the other issues we identified with this scope target, an independent reviewer relying solely on information in DOE’s database may not have enough information to assess the project’s performance accurately. Several factors affected EM and NNSA in managing their nonmajor projects that were completed or ongoing from fiscal years 2008 to 2012. According to our interviews with project officials, these factors included the suitability of the acquisition strategy, contractor performance, and adherence to project management requirements. Because EM and NNSA carry out their work primarily through agreements with private contractors, a project’s acquisition strategy is a critical factor that affects the ability of these offices to properly manage their nonmajor projects. According to DOE guidance, an acquisition strategy is the high-level business management approach chosen to achieve project objectives within specified resource constraints. The acquisition strategy is the framework for planning, organizing, staffing, controlling, and leading a project. As part of this framework, agency officials have to choose the most appropriate contract alternative for a given project. Alternatives can include the use of multiple contractors to perform different tasks or the use of a prime contractor (such as the M&O contractor at a DOE site), who would be responsible for awarding subcontracts for different tasks. In addition, agency officials should identify the use of special procedures, such as the use of a “design-build” contract, whereby a single contract is awarded for both design work and construction, or the use of a “design-bid-build” contract, whereby separate contracts are awarded for the design and construction. Some EM and NNSA officials told us that the acquisition strategy was an important factor in the successful management of their projects. 
For example, EM retained a prime contractor to manage the Soil and Water Remediation–2012 project at the Idaho National Laboratory using a contract containing incentives based on cost and schedule performance. According to project officials, the fee structure under this acquisition strategy is relatively simple and gives the project team flexibility to tie incentives to different performance milestones across the multiple subprojects within the contract. These officials said that this contract structure has been a very effective tool in achieving performance goals. EM expects this project to meet its cost target of $743 million and its completion date target of September 2012, and officials said that they expect the contractor to receive its incentive fee, as called for in the contract. Other EM and NNSA officials cited the existence of an inadequate acquisition strategy as having a negative effect on the performance of their projects. For example, according to NNSA project officials at the Los Alamos site office, the M&O contractor at the Los Alamos National Laboratory decided to construct the Radiological Laboratory/Utility/Office Building project using a design-build acquisition strategy with a single prime subcontractor responsible for both design and construction. This approach was chosen based on the M&O contractor’s experience with constructing office buildings. However, the project also involved the construction of a radiological laboratory, which entailed the use of rigorous documentation standards to show that the project can meet nuclear quality assurance standards, among other things. Officials of the NNSA site office said that one of their key lessons learned would be to use a design-bid-build acquisition strategy if they had to manage a similar project in the future. The use of a design-bid-build contract would have offered several advantages over a design-build contract, according to these officials. 
First, it would have allowed NNSA staff more time to develop more robust project specifications and a more mature project design before having contractors bid on the construction of that design. Second, NNSA staff might have had more time to evaluate bids from contractors to see if they had the skills to construct the project. Third, with a more mature design, NNSA might have been able to reduce the number of federal staff and the time spent overseeing the project. This project was completed in June 2010, a few months after its completion date target. Its cost target was $164 million; however, the final cost of this project has not been determined because of ongoing litigation. According to the officials of the site office, NNSA withheld the M&O contractor’s performance incentive fee as a result of less than desirable contractor and subcontractor management during the design and construction of the facility. DOE has previously identified ineffective acquisition strategies as being among its top 10 management challenges. Specifically, in its 2008 root cause analysis, DOE reported that its acquisition strategies and plans were often ineffective, in part because the department did not begin acquisition planning early enough in the process or devote the time and resources to do it well. Because contractors carry out the work associated with EM and NNSA nonmajor projects, contractor performance is a fundamental factor affecting EM’s and NNSA’s management of these projects. According to EM and NNSA project officials, poor contractor performance was a significant factor impeding their ability to successfully manage nonmajor projects. Among other things, officials cited concerns with finding qualified contractors that understood DOE’s nuclear safety requirements and maintained adequate internal control processes.
Examples are as follows: Nuclear Facility Decontamination & Decommissioning – High Flux Beam Reactor Project, Brookhaven National Laboratory, New York: This project was completed in December 2010, more than a year ahead of its completion date target, and at a cost of $16 million, which was 31 percent higher than its cost target of $12 million. EM officials stated that the major factor increasing costs was that the contractor did not properly prepare for and pass internal safety reviews, which were necessary to demonstrate the contractor’s readiness to begin removal and disposal of key reactor components. Because the contractor required more time than originally planned to prepare for and pass internal safety reviews, work on the project was delayed, and the total project cost increased. Officials did not explain why the project was completed well ahead of its completion date target despite the delays encountered. Officials stated that one of the most important lessons learned was to better ensure earlier in the process that the contractor had a rigorous process in place (e.g., procedures and training) to demonstrate that their personnel were ready to perform the decontamination and decommissioning work. Because EM’s cleanup work at Brookhaven National Laboratory is performed under the Laboratory Management and Operations Contract under the purview of DOE’s Office of Science, EM officials said that they have provided information to the Office of Science to be included in the contractor’s overall performance evaluation. Process Research Unit Project, Niskayuna, New York: EM established a cost target of $79 million for this ongoing project. In September 2010, a contamination incident occurred while the contractor was performing open air demolition of a building at the site. 
According to DOE’s incident report, the contamination incident had two root causes: (1) the contractor failed to fully understand, characterize, and control the radiological hazard; and (2) the contractor failed to implement a work control process that ensured facility conditions supported proceeding with the work. As a result of this incident, as well as weather-related issues, the project has exceeded its cost target, and the project’s final cost and completion date depend on the outcome of negotiations between DOE and the contractor, according to project officials. Nuclear Materials Safeguards and Security Upgrades Project, Phase II, Los Alamos National Laboratory, New Mexico: This ongoing project is expected to meet its cost target of $245 million but not its completion date target of January 2013. NNSA used a design-bid-build acquisition strategy for this project, with one contractor responsible for designing the project, and another contractor responsible for construction activities. According to project officials, during the construction phase, the building contractor had to stop work when it discovered errors with the design of the project. Specifically, officials told us the designs contained an erroneous elevation drawing that did not adequately account for the presence of a canyon and a pipeline containing radioactive liquid waste on the north side of the project site. In addition, other construction subcontractors, whose work was to be performed in sequence, had to wait to begin their work. As a result of these problems, the design contractor spent considerable time redesigning the project, according to project officials, and NNSA has had to award additional funding and schedule time to the construction contractors to compensate for the inadequate design. All told, officials told us the additional costs resulting from redesign and the delay of construction ranged from $15 million to $20 million.
In addition, NNSA and contractor officials recently determined that the project’s remaining construction costs will exceed the existing funds for the project and have halted work. As a result, NNSA has not determined what the project’s revised completion date target will be. Effective project management also depends on having project officials consistently follow DOE’s project management requirements. Among other things, these requirements are aimed at ensuring that projects (1) have a sufficiently mature design before establishing performance targets and beginning construction activities; (2) have had their earned value management systems certified for more accurate reporting on performance; (3) undergo a review by an independent group of experts before beginning construction activities; and (4) maintain an adequate process to account for any significant changes to the project’s scope, cost, or completion date targets (known as a change control process). DOE has previously identified adherence to project management requirements as among its top 10 management challenges, stating in its 2008 root cause analysis that the agency has not ensured that these requirements are consistently followed. That is, in some instances, projects are initiated or carried out without fully complying with the processes and controls contained in DOE policy and guidance. We found a similar problem with adherence to DOE project management requirements in some of the projects we reviewed, although these problems were more often associated with EM projects than with NNSA projects. Specifically, in half of the 10 EM projects we reviewed in depth, officials cited a lack of adherence to project requirements, particularly not having a sufficiently mature design when establishing performance targets and beginning work activities, as a significant factor impeding their ability to manage projects within the performance baseline. 
For example, EM’s project to convert depleted uranium hexafluoride into a more stable chemical form at two locations—Paducah, Kentucky and Portsmouth, Ohio—was completed in November 2010, more than 2 years after its completion date target and more than $200 million over its cost target of $346 million. A lessons-learned report, completed in 2009 at the request of DOE, concluded that DOE’s critical decision process had become a “mere rubber stamp of approval.” It stated: “In the end, the … Project had results consistent with its level of definition at the time of project commitment and execution start. Future DOE projects will likely demonstrate similar performance unless they are better defined at the start of detailed design and they follow not only the letter of DOE’s [critical decision] process, but also its spirit.” According to EM officials, EM withheld the construction contractor’s incentive fee due to its poor performance. In contrast, among the 10 NNSA projects we reviewed in depth, several NNSA project managers credited adherence to project management processes as contributing positively to project performance. The advantages of adhering to project management processes are illustrated by one of the projects we reviewed—the Ion Beam Laboratory project at Sandia National Laboratories, New Mexico. This project—to use ion beams to qualify electronics and other nonnuclear weapon components for use in the nuclear stockpile—was completed ahead of schedule in September 2011 at a cost of $31 million, which was 22 percent lower than its cost target of $40 million. Project officials stated that implementing a procedure to control any changes to the performance baseline and a Baseline Change Control Board served as the foundation to manage all changes to ensure that cost, schedule, and technical aspects were evaluated to meet the mission of the project.
In addition, project officials made active use of earned value management data, with several officials noting that applying earned value management principles on a regular basis assisted the project team in taking management actions to keep the project on track. For example, the Sandia Project Manager provided monthly reports to Sandia senior managers and the federal project director to communicate the project’s progress, the accomplishment of milestones, financial outlays, project issues, and appropriate corrective actions. Owing to the project’s success in meeting its performance targets, NNSA did not withhold any fee from the contractor.

EM’s eight workforce plans for its federal workforce do not consistently identify (1) mission-critical occupations and skills and (2) current and future shortfalls in these areas. As shown in table 4, of the eight EM workforce plans, one fully identifies both mission-critical occupations and mission-critical skills; another four plans identify either mission-critical occupations or mission-critical skills, but not both; and four of the eight plans identify current and future shortfalls in mission-critical occupations. EM’s workforce plans may not consistently identify mission-critical occupations and skills and shortfalls in these areas in part because EM’s Office of Human Capital has not established a consistent set of terms that all EM sites use to define and describe mission-critical occupations and skills, according to our analysis. Instead, the five EM workforce plans that identified or partially identified these occupations and skills (as shown in table 4) used different terms to identify them. The plans also differed in the number and type of occupations or skills identified as mission-critical. For example, two plans identified three such occupations and skills, while another identified 20 different job series associated with 40 different position titles.
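Earned value management, which the Sandia project team applied as described above, compares three standard quantities: planned value, earned value, and actual cost. The sketch below illustrates the textbook formulas; the function name and the monthly figures are hypothetical and are not drawn from the report.

```python
# Illustrative earned value management (EVM) calculations of the kind project
# teams use in monthly status reporting. Textbook formulas; all figures are
# hypothetical, not taken from the Sandia project.

def evm_metrics(planned_value, earned_value, actual_cost):
    """Return standard EVM variances and indices.

    planned_value (PV): budgeted cost of work scheduled to date
    earned_value (EV):  budgeted cost of work actually performed
    actual_cost (AC):   actual cost of the work performed
    """
    return {
        "cost_variance": earned_value - actual_cost,        # CV = EV - AC
        "schedule_variance": earned_value - planned_value,  # SV = EV - PV
        "cpi": earned_value / actual_cost,                  # CPI > 1: under cost
        "spi": earned_value / planned_value,                # SPI > 1: ahead of schedule
    }

# Hypothetical month: $10M of work scheduled, $9M of work performed, $8M spent.
m = evm_metrics(planned_value=10.0, earned_value=9.0, actual_cost=8.0)
print(m["cpi"], m["spi"])  # 1.125 0.9 — under cost but behind schedule
```

A monthly report built on these indices lets a team spot a slipping schedule even while spending remains below budget, which is the kind of early corrective signal the officials described.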
Table 5 shows the variations in mission-critical occupations and skills identified in these five EM workforce plans. When we brought the issue of inconsistent terminology to the attention of EM officials, they agreed that it would be useful to establish a consistent set of terms for mission-critical occupations and skills and told us that they plan to address this issue in the fiscal year 2013 planning cycle. However, we note that EM’s guidance to its site offices already instructed them to describe shortfalls and surpluses in the skills most critical to site performance; nonetheless, not all site offices did so. Notwithstanding the variations in terms for mission-critical occupations and skills in EM’s workforce plans, many of the plans indicate that EM’s federal workforce may soon face shortfalls in a number of important areas, including project and contract management. Examples are as follows: The Portsmouth/Paducah Project Office’s plan states that the office will need more staffing, including in project management and contracting, to meet mission needs in future years. Specifically, the plan notes that 31 percent of its current federal workforce could retire by fiscal year 2017, including up to 67 percent of its contract specialists and up to 64 percent of its general engineers. The workforce plan for EM headquarters, issued in July 2011, stated that 26 percent of its federal workforce was currently eligible to retire, with an additional 22 percent of the workforce projected to become eligible for retirement by fiscal year 2015. The EM headquarters plan projected that 60 percent of contracting officers would be eligible for retirement by fiscal year 2015. The Idaho Operations Office workforce plan states that a significant number of federal employees in leadership and mission-critical positions were already eligible for retirement at the end of fiscal year 2011, but the plan does not specify the number or positions of these employees. 
The Carlsbad Field Office workforce plan indicates that both of that office’s “contract/procurement specialists” will be eligible to retire by fiscal year 2017, along with 10 of its 15 general engineers. The workforce plan for the Office of River Protection, which manages the storage, retrieval, treatment, and disposal of tank waste at the Hanford Site in Washington State, projects that the office will face a 61 percent shortfall in “contracting” and a 53 percent shortfall in “project management” by fiscal year 2017. EM officials said that they recognize the need to better identify mission-critical occupations and skills and shortfalls in these areas, and that they have taken a number of steps to address these issues. For example, officials in EM’s Office of Human Capital told us that they conducted a skills assessment in 2010 that helped EM identify key occupational series to target in its succession planning efforts. In addition, officials in this office told us that they are actively engaged in mitigating the risk of having a large number of EM federal employees retire in the near future by developing a voluntary separation incentive plan and voluntary early retirement plan. If employees eligible for retirement participate in this plan, EM could fill vacated positions with younger employees who could develop their skills in future years. Moreover, officials in EM’s Office of Acquisition and Project Management told us that to ensure that each project team has the skilled staff it needs to meet project goals, they consult with the EM officials in charge of each project team, consider the project’s execution plan, and use DOE staffing guidance as a tool to inform staffing decisions. EM officials also said that EM sites serve diverse functions and that, therefore, some variation in the workforce plans and their descriptions of mission-critical occupations and competencies is to be expected.
Nonetheless, without a workforce plan or summary document presenting a consistent set of occupations and skills that are critical to every site office’s mission, such as project and contract management, it is difficult for DOE and us to understand EM’s most critical current and future human capital needs. The 71 nonmajor projects that we reviewed cost an estimated $10.1 billion and are critical to DOE’s mission to secure the nation’s nuclear weapons stockpile and manage the radioactive waste and contamination that resulted from the production of such weapons. EM and NNSA are making some progress in managing these projects. For example, we identified some NNSA and EM nonmajor projects that used sound project management practices, such as the application of effective acquisition strategies, to help ensure the successful completion of these projects. However, some contract and project management problems persist. Specifically, both EM and NNSA have approved the start of construction and cleanup activities for some nonmajor projects without clearly defining and documenting performance targets for scope, cost, or completion date in the appropriate CD-2 documentation, as required by DOE’s project management order. In addition, EM and NNSA have not consistently tracked project performance, particularly for scope, in DOE’s centralized database for tracking and reporting project performance, as required by DOE’s project management order. Moreover, EM has approved new performance targets for projects without ensuring that these targets are reviewed by an independent team of experts, as required by DOE’s project management order. 
Without clearly defining and documenting a project’s performance targets and tracking performance against these targets through project completion, and without ensuring that projects are independently reviewed, neither DOE nor we can determine whether the department is truly delivering on its commitments when its contractors complete work on its projects. Problems also persist regarding DOE’s workforce—specifically, its current and potential shortfalls in federal personnel with the skills necessary to manage its contracts and projects, an issue we have highlighted on our high-risk list. EM recognizes the need to address this issue and has taken steps to do so, such as conducting succession planning based on an assessment of key skills, as well as having EM’s Office of Acquisition and Project Management consult with EM’s project teams to ensure that the project teams have the skilled personnel they need to execute projects successfully. However, EM does not consistently identify in its workforce plans the occupations and skills most critical to the agency’s mission, as well as current and future shortfalls in these areas. This issue is compounded by EM’s decentralized planning process, in which site offices produce their own workforce plans that do not use consistent terminology and are not aggregated centrally by EM headquarters into a single workforce plan or summary document. Some variation among site-specific workforce plans is to be expected, but EM officials have stated that it would be useful to establish a consistent set of terms for mission-critical occupations and skills and told us that they plan to address this issue in the fiscal year 2013 planning cycle. That said, previous EM guidance for workforce planning specified that the plans describe shortfalls and surpluses in the skills most critical to site performance, but not all of EM’s plans did so.
Without a summary document or single workforce plan that uses consistent terms to present the set of occupations and skills critical to every site office’s mission, such as project and contract management, it is difficult for DOE or us to understand EM’s most critical current and future human capital needs. To ensure that DOE better tracks information on its nonmajor projects, including the extent to which these projects meet their performance targets, and that EM consistently identifies mission-critical occupations and skills, as well as any current and future shortfalls in these areas, in its workforce plans, we recommend that the Secretary of Energy take the following five actions: Ensure that the department clearly defines performance targets—including targets for scope, cost, and completion date—for each of its projects and documents the targets in appropriate CD-2 documentation, as required by DOE’s project management order. Ensure that the department tracks the performance of its projects using the performance targets, particularly scope, it establishes for its projects, as required by DOE’s project management order. Ensure that each project is reviewed by an independent team of experts before the department approves performance targets, as required by DOE’s project management order. Direct EM to develop a summary document or a single workforce plan that contains information on mission-critical occupations and skills, as well as current and potential future shortfalls in these areas, for all EM sites. Ensure that EM follows through on its plan to address the use of consistent terms across all EM sites for mission-critical occupations and skills. We provided a draft of this report to DOE for review and comment. In written comments, DOE agreed with our recommendations. DOE’s written comments are reprinted in appendix IV. DOE also provided technical clarifications, which we incorporated as appropriate.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Energy, the appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. To determine the extent to which the Department of Energy’s (DOE) Office of Environmental Management (EM) and National Nuclear Security Administration (NNSA) nonmajor projects have met their scope, cost, and completion date targets, we obtained performance information on 71 nonmajor projects. These 71 nonmajor projects included 30 EM projects and 41 NNSA projects that were either: (1) completed (i.e., reached critical decision 4) from fiscal year 2008 to fiscal year 2011 or (2) ongoing from fiscal year 2008 to fiscal year 2011 and for which EM and NNSA had established performance baselines at critical decision 2. We also collected performance information for ongoing projects for fiscal year 2012. The total estimated cost of these 71 projects is approximately $10.1 billion. The names and locations of these 71 projects are provided in apps. II and III. 
We excluded the following types of projects from our review: (1) major projects, or those projects that each cost more than $750 million; (2) EM projects funded entirely by the American Recovery and Reinvestment Act of 2009 because of a separate GAO review looking at these projects; (3) information technology acquisitions; and (4) operational activities. We identified these projects using DOE’s Project Assessment and Reporting System (PARS). To assess the reliability of PARS data, we interviewed officials about the system and reviewed relevant documents. On the basis of this information, we determined that the system has adequate and sound controls for entering and maintaining data. We also conducted electronic testing on the specific data fields of interest, including cost, schedule, and scope targets. We determined that the cost and schedule data were complete and sufficiently reliable for our purposes; however, we found the scope data to be incomplete. Through interviews with officials, we ascertained that the scope data were not missing because of a system or data entry problem; instead, because EM and NNSA had not consistently identified and documented scope targets for the 71 projects we reviewed, these data could not be entered into PARS. Therefore, we obtained data on project scope, cost, and schedule directly from EM and NNSA officials. For the 71 nonmajor projects, we reviewed selected documents providing information about the projects’ targets for scope, cost, and completion date. We relied on DOE Order 413.3 for requirements on (1) specifying the scope, cost, and schedule targets for a project’s performance baseline and (2) documenting the performance baseline. To assess compliance with these requirements, we reviewed the relevant documentation (including critical decision memoranda and project execution plans) and compared the performance targets established for scope, cost, and schedule with the actual performance of completed projects and the expected performance of ongoing projects.
For completed projects, we compared the performance targets for scope, cost, and schedule—as documented in critical decision 2 (CD-2) approval memoranda and project execution plans—with the completed scope, actual costs, and approval dates as documented in critical decision 4 (CD-4) approval memoranda. For ongoing projects, we compared the performance targets for scope, cost, and schedule with DOE project performance reports; we also had officials from EM, NNSA, and DOE’s Office of Acquisition and Project Management review performance information as of August 29, 2012. In cases where key project documents—including the CD-2 and CD-4 approval memoranda and project execution plans—did not identify all three performance targets for scope, cost, and completion, we requested and reviewed alternative project documents. These included, among other things: independent project review reports; briefing slides prepared for DOE advisory boards; contractor work packages; DOE documents listing the functional and operational requirements of projects; memoranda used to request approval of changes to project baselines; final acceptance reports documenting that contractors delivered project requirements; and DOE quarterly and monthly status reports on ongoing projects. When reviewing alternative project documents, we requested documents dated as close to CD-2 and CD-4 as possible. If documents were not dated within 1 year of CD-2 approval, we did not consider them sufficient and reliable for purposes of determining scope targets.
In keeping with our prior work, and in recognition of Office of Management and Budget guidance and DOE’s project performance goals, we characterized nonmajor projects that met (or are expected to meet) their cost and schedule targets, or exceeded them by less than 10 percent, as completed within budget and on time, whereas we considered projects that exceeded (or will exceed) their targets by 10 percent or more to be over cost or late. In addition, we considered whether a project had successfully met its scope target. Projects that reduced their scope target to meet their cost targets were considered not to have met their scope targets. In a few cases, EM and NNSA increased the scope of work associated with a project after establishing performance targets at CD-2; in these cases, we noted that these projects had been modified and did not calculate whether they had met or exceeded their original cost and schedule targets.

To evaluate factors affecting EM’s and NNSA’s management of nonmajor projects, we selected a nongeneralizable sample of 20 out of the 71 projects—including 10 EM projects and 10 NNSA projects—for more detailed review. The names of these 20 projects are provided in apps. II and III. Results from nonprobability samples, including our sample of 20 projects, cannot be used to make inferences about EM’s and NNSA’s overall project performance or generalized to projects we did not include in our sample. We were interested in gathering information on the selected projects to identify material factors that may not exist across all projects but could help us understand EM’s and NNSA’s organizational strengths and potential challenges. We selected these 20 projects to ensure that our sample included completed and ongoing projects, with a wide range of project costs. Together, the 20 projects represented about $4.1 billion, or approximately 41 percent, of the total value of the 71 projects.
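The 10 percent threshold used above to separate projects completed within budget and on time from those over cost or late is simple arithmetic; a minimal sketch follows, in which the function names and the final hypothetical example are ours, not GAO’s or DOE’s.

```python
# A rough sketch (not official GAO or DOE code) of the 10 percent rule this
# appendix describes: exceeding a target by less than 10 percent still counts
# as meeting it; 10 percent or more counts as over cost or late.

def percent_over(target, actual):
    """Percentage by which the actual value exceeds the target (negative if under)."""
    return (actual - target) / target * 100.0

def characterize_cost(cost_target, actual_cost, threshold=10.0):
    """Apply the 10 percent rule to cost; the schedule test is analogous."""
    return "within budget" if percent_over(cost_target, actual_cost) < threshold else "over cost"

# The Ion Beam Laboratory project discussed earlier: $31 million actual
# against a $40 million target, i.e., roughly 22 percent under its target.
print(percent_over(40.0, 31.0))         # about -22.5
print(characterize_cost(40.0, 31.0))    # within budget
print(characterize_cost(100.0, 115.0))  # over cost (15 percent over target)
```

The same comparison applied to schedule dates, expressed as elapsed time against the planned duration, yields the on-time versus late characterization.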
For these 20 projects, we developed a structured interview template to identify the key factors that affected the management of these projects. We used three primary sources in developing this structured interview template—GAO’s cost guide, DOE’s Order 413.3, and DOE’s guidance document on conducting project reviews. The structured interview template focused on certain aspects of project management, such as the preparation of project designs, risk estimates, and cost and schedule targets, as well as the adherence to DOE project management requirements. We pretested the structured interview template during a site visit to the Y-12 National Security Complex and the Oak Ridge Reservation near Oak Ridge, Tennessee. At each site, we selected six projects and interviewed relevant EM and NNSA federal project directors and other knowledgeable staff using the structured interview template. Based on our pretesting, we revised the structured interview template and conducted 20 interviews with the relevant EM and NNSA federal project directors and other knowledgeable staff to gather their perspectives on their projects’ performance and reasons for it.

To evaluate the extent to which EM’s workforce plans identify mission-critical occupations and skills and any current and future shortfalls in these areas, we examined EM’s strategic workforce plans for its headquarters and site office staff, DOE’s corrective action plan for contract and project management, and the Office of Personnel Management’s Human Capital Assessment and Accountability Framework. Specifically, we obtained the eight EM workforce plans, prepared by EM headquarters, the Consolidated Business Center, the Richland Operations Office and Office of River Protection (which manage operations at the EM site in Hanford, Washington), the Portsmouth/Paducah Site Office, the Savannah River Operations Office, the Idaho Operations Office, the Carlsbad Field Office, and the Oak Ridge Office.
We reviewed these plans in their entirety, and also searched for relevant terms, to determine the extent to which the plans identified mission-critical occupations and skills and any current and future shortfalls in these areas. In addition to our document review, we interviewed DOE and EM officials with knowledge of EM’s practices in workforce planning, including officials in EM’s Office of Acquisition and Project Management and Office of Human Capital and Corporate Services. We conducted these interviews to determine how EM develops its workforce plans and to obtain EM officials’ points of view regarding the state of the EM workforce. We conducted this performance audit from June 2011 to December 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We obtained and reviewed performance information on 30 EM nonmajor projects that were either completed or ongoing from fiscal year 2008 through fiscal year 2012. Table 6 summarizes this information for 17 EM projects for which we could determine performance. Table 7 summarizes this information for 6 projects for which EM did not establish performance targets. Table 8 summarizes this information for 4 EM projects with incomplete documentation of their performance targets, which meant that we could not determine performance. Table 9 summarizes this information for 3 projects for which EM modified the scope after establishing performance targets for these projects, rendering the original performance targets unusable for purposes of assessing performance. 
We obtained and reviewed performance information on 41 NNSA nonmajor projects that were either completed or ongoing from fiscal year 2008 through fiscal year 2012. Table 10 summarizes this information for 27 NNSA projects for which we could determine performance. Table 11 summarizes this information for 2 projects for which NNSA did not establish performance targets. Table 12 summarizes this information for 7 NNSA projects with incomplete documentation of their performance targets or final cost, which meant that we could not determine performance. Table 13 summarizes this information for 5 projects for which NNSA modified the scope after establishing performance targets for these projects, rendering the original performance targets unusable for purposes of assessing performance.

In addition to the individual named above, Dan Feehan, Assistant Director; Sandra Davis; Robert Grace; and Jason Holliday made key contributions to this report. Also contributing to this report were John Bauckman; Jennifer Echard; Cindy Gilbert; Steven Lozano; Minette Richardson; Cheryl Peterson; and Carol Herrnstadt Shulman.
As of February 2011, EM and NNSA remained on GAO's high-risk list for contracting and project management. These two offices manage numerous construction and cleanup projects that each cost less than $750 million and are called nonmajor projects. DOE requires its program offices to establish performance targets for the expected scope, cost, and completion date of each project before starting construction or cleanup. GAO has encouraged federal agencies to use strategic workforce planning to help them meet present and future mission requirements. Two key elements of workforce planning are to identify mission-critical occupations and skills and any current and future shortfalls in these areas. GAO was asked to examine the (1) extent to which EM and NNSA nonmajor projects have met their scope, cost, and completion date targets, (2) factors affecting EM's and NNSA's management of nonmajor projects, and (3) extent to which EM's workforce plans identify mission-critical occupations and skills and any current and future shortfalls in these areas. GAO reviewed DOE documents and project data, examined EM workforce plans, toured selected DOE facilities, and interviewed DOE officials. Of the 71 nonmajor projects that the Department of Energy's (DOE) Office of Environmental Management (EM) and National Nuclear Security Administration (NNSA) completed or had under way from fiscal years 2008 to 2012, 21 met or are expected to meet their performance targets for scope, cost, and completion date. These projects included a $22 million EM project to expand an existing waste disposal facility at the Oak Ridge Reservation in Tennessee and a $199 million NNSA project to equip a radiological laboratory and office building at the Los Alamos National Laboratory in New Mexico. Another 23 projects did not meet or were not expected to meet one or more of their three performance targets for scope, cost, and completion date. 
Among these, 13 projects met or are expected to meet two targets, including a $548 million NNSA project to shut down a nuclear reactor in Russia for nonproliferation purposes; 8 projects met or are expected to meet one target; 1 project did not meet any of its targets; and 1 project was cancelled. Of the remaining 27 projects, many had insufficiently documented performance targets for scope, cost, or completion date, which prevented GAO from determining whether they met their performance targets. EM and NNSA often did not follow DOE requirements for documenting these performance targets, making it more difficult for GAO and DOE to independently assess project performance. Several factors affected EM's and NNSA's management of their nonmajor projects that were completed or ongoing from fiscal years 2008 to 2012. These factors included the suitability of a project's acquisition strategy, contractor performance, and adherence to project management requirements. For example, EM officials managing an ongoing project to remediate soil and water at the Idaho National Laboratory used an acquisition strategy that tied incentives for the contractor to different performance milestones across the multiple subprojects within the contract, which will help the project meet its performance goals, according to EM officials. In contrast, NNSA encountered problems meeting its performance goals for a project to build an office building and radiological laboratory at the Los Alamos National Laboratory partly due to its acquisition strategy. According to NNSA project officials at the Los Alamos site office, the project team should have hired one contractor to design the project and solicited bids from other contractors to build the project rather than using the same contractor for both activities. 
The former strategy might have resulted in a more mature project design and more time to evaluate various contractors' qualifications to construct the project, according to the NNSA project officials. EM's workforce plans do not consistently identify mission-critical occupations and skills and current and future shortfalls in these areas for its federal workforce. In addition, many EM workforce plans indicate that EM may soon face shortfalls in a number of important areas, including project and contract management. EM officials said that they recognize these issues and have taken a number of steps to address them, including conducting a skills assessment to identify key occupational series to target for succession planning. However, the inconsistent terms used to describe mission-critical occupations and skills in EM's workforce plans make it difficult for GAO and DOE to understand EM's most critical needs regarding its workforce. GAO recommends that EM and NNSA clearly define, document, and track the scope, cost, and completion date targets for each of their nonmajor projects and that EM clearly identify critical occupations and skills in its workforce plans. EM and NNSA agreed with GAO's recommendations.
To identify, pursue, and develop new technologies to improve and enhance military capabilities, DOD relies on its science and technology (S&T) community, which is composed of DOD research laboratories, test facilities, industry, and academia. The S&T community receives about $12 billion in funding each year to support activities ranging from basic research through advanced technology development that are conducted by the government or externally by universities and commercial industry. Once the S&T community has completed its technology development, additional product development activities, such as technology demonstration and testing, are often needed before incorporating the technologies into military weapon systems. Under the management of the acquisition community, product development further advances technology received from S&T developers and integrates it into systems that are ultimately delivered to support the warfighter (see figure 1). However, as we have reported in the past, for a variety of reasons DOD historically has experienced problems in transitioning technologies out of the S&T environment and into military systems. For example, technologies may not leave the laboratory because their potential has not been adequately demonstrated or recognized, acquisition programs may be unwilling to fund final stages of development, or private industry chooses to develop the technologies itself. DOD has a variety of technology transition programs managed by the Office of the Secretary of Defense (OSD) and the military departments that provide mechanisms and funding to facilitate technology transitions. The programs vary in size, approach, and funding, but most of them are intended to target fairly mature technologies that are suitable for the final stages of development and demonstration.
Some, such as the Small Business Innovation Research (SBIR) program, have a stated purpose of using small businesses to meet federal research and development needs. However, while the program provides funding for technology feasibility and development projects, small businesses are expected to obtain funding from the private sector or government sources outside the SBIR program to commercialize or transition technologies for sale to the military or elsewhere. Congress established RIP as another mechanism, in part to consider innovation research projects from small businesses, as they are often viewed as a key source for developing innovative technologies in areas of military need. Specifically, Congress directed DOD to establish RIP to accelerate the transition of technologies developed by small businesses participating in SBIR phase II projects, the defense laboratories, and other small or large businesses. The projects funded by RIP are to primarily support major defense acquisition programs, but also other defense acquisition programs that meet a critical national security need. In addition, the projects selected are to accelerate the fielding of technologies with the purpose of reducing acquisition or lifecycle costs, addressing technical risks, and improving test and evaluation outcomes. Furthermore, DOD was directed to develop a competitive, merit-based program that, at a minimum, provides for the use of a broad agency announcement or any other competitive or merit-based process for the solicitation of proposals; merit-based selection of the most promising cost-effective proposals for funding through contracts, cooperative agreements, and other transactions; a limit of $3 million on funding for each RIP project; and a limit of 2 years of funding for any project under the program.
Within DOD, the Under Secretary of Defense for Acquisition, Technology, and Logistics issues overall program guidelines, and representatives appointed by the military service acquisition executives, the Assistant Secretary of Defense for Research and Engineering (ASD/R&E), and the Director of the Office of Small Business Programs are responsible for establishing the RIP processes that support DOD's goal and meet the guidelines. The ASD/R&E is also responsible for coordinating RIP activities among the military departments and other defense components. In addition, ASD/R&E is responsible for preparing and submitting a report, at the end of the fiscal year, to the congressional defense committees that includes a list and description of each project funded, the amount of funding provided, and the anticipated timeline for transition. The program is implemented and managed by each of the military departments and by ASD/R&E, which represents the other defense components that participate in the program. To date, DOD has not included RIP in its annual budget requests because there is no formal requirement within the department for the program, but Congress has appropriated funding each year. Congress has authorized RIP until September 30, 2015, when it will expire unless further legislative action is taken. DOD has established a competitive, merit-based solicitation process to select and award RIP contracts that address military needs. Each year, the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics begins by issuing broad guidance to the military departments and defense components for implementing a multi-phase process to identify military needs, solicit proposals from interested contractors, review and select projects, and award contracts. The process is somewhat lengthy, taking about 18 months to implement and award contracts, but interest from contractors has been high.
Between fiscal years 2011 and 2015, the military departments and defense components received more than 11,000 white papers from interested contractors and will have awarded contracts for about 435 projects when the fiscal year 2014 solicitation is completed, with the vast majority going to small businesses. This high level of interest in the program has presented some administrative challenges to reducing the time from identification of need to contract award. DOD has established a multi-step process to solicit, evaluate, and select RIP projects to fund (see figure 2). Before the yearly RIP acquisition cycle begins, the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics issues implementation guidelines to the military departments and defense components that outline program goals and specific guidance on the program's implementation and reporting requirements. For example, the guidelines include the funding available to the military departments and defense components; DOD-wide research areas, if applicable; guidance on how the solicitation process will be structured, such as whether a single solicitation will be used and the length of time it will be open; and the process for obtaining a waiver for projects expected to exceed the 24-month or $3 million limits. The RIP process begins with the military departments and defense components identifying the technology topics that will be addressed in the program. In addition to the three military departments, participation by defense components has increased, from 4 in fiscal year 2011 to 16 in fiscal year 2014. The topics are intended to reflect technology capability needs and requirements for weapon systems and acquisition programs, and are one of the criteria used in the selection of projects. The military departments and defense components have used different approaches to identify the technology research topics, but over time have moved to a more common approach.
For example, while the Air Force and Navy have largely relied on their acquisition communities to identify technology needs, the Army relied primarily on its science and technology community to identify needs for the first 3 years of the program. For the Air Force and Navy, this entailed going directly to systems commands, Program Executive Offices, program managers, and others directly involved in the development and production of weapon systems to solicit needs. In relying on its science and technology community, the Army focused on S&T research areas that included more than the specific technology needs of acquisition programs. In fiscal year 2014, the Army expanded its approach and solicited topics from Program Executive Offices as well as from the science and technology community. The defense components used a variety of methods for identifying technology requirements. For example, the Missile Defense Agency develops research topics by reviewing technology road maps and prior SBIR research topics, and by obtaining input from Program Executive Officers and program managers. The combatant command participants, such as the United States Special Operations Command (SOCOM), base their topics on operational capabilities linked to internal requirements or urgent needs. The military departments and defense components have used broad agency announcements (BAA) to solicit technical solutions that address technology topics and meet other submission requirements. The Federal Acquisition Regulation (FAR) provides that the selection of basic and applied research is a competitive procedure if an award results from (1) a broad agency announcement that is general in nature, identifying areas of research interest, including criteria for selecting proposals, and soliciting the participation of all offerors capable of satisfying the government's needs; and (2) a peer or scientific review.
The FAR also requires that BAAs specify the period of time for parties to respond and contain instructions for preparing and submitting proposals. Also, we have found in a prior review that the use of competitive contracting procedures encourages firms to offer their best proposals when competing for work. The procedures used by the military departments and defense components for selecting RIP projects generally follow FAR competition guidelines because the BAAs identified general areas of research interest and criteria for selecting proposals, solicited the participation of all offerors capable of satisfying the government's needs, and specified that subject matter experts would review project proposals. For fiscal years 2011 to 2013, the military departments and the ASD/R&E published four separate BAAs, although they used similar selection and evaluation criteria. In fiscal year 2014, a single BAA was used to solicit proposals because officials wanted to make it easier for businesses to participate in the program and reduce some of the internal overhead involved in maintaining four separate BAAs within DOD.
DOD has established a merit-based review process to select projects through a two-step approach: submission and review of brief white papers from contractors, followed by comprehensive technical proposals from the subset of contractors whose white papers are deemed most promising based on four criteria:

- Contribution to Requirement: the degree to which the technical approach is relevant to the proposed requirement;
- Technical Approach and Qualifications: the degree to which the technical approach is innovative, feasible, achievable, and complete, and is supported by a technical team that has the expertise and experience to accomplish the proposed tasks;
- Schedule: the degree to which the proposed schedule is achievable within 24 months from contract award; and
- Cost: the degree to which the proposed cost or price is realistic for the proposed technical approach and does not exceed $3 million.

According to RIP officials, this approach is viewed positively by small businesses because they do not have to invest in developing full proposals at the outset. Evaluation teams of subject matter experts, nominated by the military departments and defense activities, rate the white papers "go" or "no go" based on the criteria above. Greater consideration is given to the criteria pertaining to contribution to requirement and technical approach; papers that do not meet either of these two criteria are not considered for further review. The contractors whose white papers are approved by the source selection authority for further consideration are then invited to submit a full proposal. According to a DOD official, more white papers are approved than can be funded because some technologies will not remain competitive when further details and support are provided in a full proposal. Once businesses are notified that their white papers have been selected, they have 30 days to submit full proposals that amplify the information in the white paper.
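The gating rule described above (a white paper that fails either of the first two criteria gets no further review) can be sketched in a few lines. This is a hypothetical illustration only: the criterion names mirror the report, but the boolean rating structure, the function name, and the simplifying rule that a "go" requires meeting all four criteria are assumptions, not DOD's actual procedure.

```python
# Hypothetical sketch of the RIP "go"/"no go" white-paper screening.
# Assumptions: boolean ratings per criterion; "go" requires all four met.

GATING_CRITERIA = ("contribution_to_requirement", "technical_approach")
OTHER_CRITERIA = ("schedule", "cost")

def screen_white_paper(ratings):
    """Rate a white paper 'go' or 'no go'.

    `ratings` maps each criterion name to True (met) or False (not met).
    Papers failing either gating criterion receive no further review.
    """
    if not all(ratings[c] for c in GATING_CRITERIA):
        return "no go"
    return "go" if all(ratings[c] for c in OTHER_CRITERIA) else "no go"

# A relevant, sound approach with an unrealistic cost estimate is screened out
print(screen_white_paper({"contribution_to_requirement": True,
                          "technical_approach": True,
                          "schedule": True,
                          "cost": False}))  # no go
```

Treating the first two criteria as a hard gate reflects the report's statement that they carry greater weight than schedule and cost at the white-paper stage.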
The military departments and defense components assign subject matter evaluators to review the proposals using the same criteria as for the white paper evaluations. DOD uses a 5-level descriptive rating scale, ranging from outstanding to unacceptable, when evaluating the proposals against the criteria. As with the white paper evaluations, RIP gives greater importance to obtaining superior technical capabilities that will transition than to making awards at a lower cost to the government. Selection preference is also given to small business proposals. Selection of large businesses is allowed, but only if their offers for the same requirement are superior to those of the small businesses. Once all proposals are evaluated and ranked, the source selection authority makes the final decisions on contract awards. The number of proposals selected for contract award depends on the level of RIP funding appropriated and the proposed project costs. The businesses are notified of the results, and the military departments and defense components then negotiate and complete all contracting and award procedures with the businesses. Contracts establish the cost, schedule, key performance parameters, and deliverables required for projects. From fiscal years 2011 through 2013, the military departments and defense components awarded $760 million for 365 projects, with 89 percent of the awards going to small businesses. Program officials said that RIP attracted some new small businesses that had not worked with DOD previously. However, most of the projects selected (60-74 percent) had also received DOD funding through the SBIR program for earlier technology development activities. Table 1 presents an overview of the numbers of proposals, contract awards and dollar values, and small business participation rates for the fiscal year 2011 through 2014 project selection cycles.
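The small-business preference described above amounts to a simple tie-breaking rule over ranked proposals: a large business displaces a small one only if its offer is strictly superior. The sketch below is a hypothetical illustration; the report names only the endpoints of the 5-level scale (outstanding and unacceptable), so the intermediate labels, the tuple layout, and the function name are assumptions.

```python
# Hypothetical sketch of the RIP small-business selection preference.
# Intermediate rating labels are assumed; the report gives only endpoints.

RATING_ORDER = ["unacceptable", "marginal", "acceptable", "good", "outstanding"]

def select_for_requirement(proposals):
    """Pick a winner from (offeror, is_small_business, rating) tuples.

    A large business wins only if its best offer is rated strictly
    higher than the best small-business offer for the requirement.
    """
    score = RATING_ORDER.index
    small = [p for p in proposals if p[1]]
    large = [p for p in proposals if not p[1]]
    best_small = max(small, key=lambda p: score(p[2]), default=None)
    best_large = max(large, key=lambda p: score(p[2]), default=None)
    if best_small is None:
        return best_large
    if best_large is not None and score(best_large[2]) > score(best_small[2]):
        return best_large
    return best_small

# A large business with a merely equal rating does not displace a small one
print(select_for_requirement([("SmallCo", True, "good"),
                              ("BigCo", False, "good")])[0])  # SmallCo
```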
Projects funded by RIP cover a broad range of technologies, including software tools and applications, measurement and testing devices, the reconfiguration of existing technologies, and the development, demonstration, or prototyping of new technologies. Examples of projects include software to improve the rate and accuracy of the transmission of data, an effort to improve the manufacturing of a component used in thermal battery insulation technology to increase the reliability and life span of missile power sources, and a compact water hand pump and filter for purifying water on the battlefield. According to RIP officials, although the program is intended to target innovative technologies, the department's process enables the defense components to identify requirements based on evolving operational needs and determine the kinds of projects to fund. Since its inception, RIP has received a high level of interest from businesses (more than 11,000 white papers submitted in 4 years), which has presented some program administrative challenges. In the first year, several of the military departments and defense components had to "scramble" to complete reviews and award the contracts before program funds expired, in part because they did not have sufficient infrastructure in place to administer the program. According to a RIP official, SOCOM received 800 of the 3,600 papers submitted that year and was overwhelmed by the high volume of white papers. Since the first year, the components have made adjustments to try to administer the process more efficiently, such as the move toward issuing a single consolidated BAA to solicit proposals in fiscal year 2014.
However, the time needed to prepare the RIP implementation guidelines; identify technology topics; prepare and execute the BAAs; and review and evaluate white papers as well as complete the other steps in the solicitation and project selection process has been lengthy in each acquisition cycle, taking about 18 months to complete (see figure 3). In 2014, RIP program officials set a qualitative goal to reduce cycle time for the fiscal year 2014 solicitation process, but said it is challenging due to the desire to employ the two-step solicitation process. For the fiscal 2015 cycle, they plan to release the BAA earlier than in previous cycles and have moved the milestone for completing this acquisition cycle forward about 30 days. The military departments and defense components used management practices and tools to manage RIP projects similar to those they use for other science and technology projects. Project managers and contracting officials review progress reports submitted by contractors and maintain regular communications with contractors to monitor whether projects are meeting cost, schedule, and performance requirements specified in the contracts. ASD/R&E established an annual DOD-wide reporting mechanism in 2013 to assess the technical performance of the projects, and most of them are meeting their technical performance metrics. The FAR requires that contract quality assurance be performed at such times and places as may be necessary to determine that the goods or services conform to the contract requirements. The RIP project contracts describe the deliverables, reporting schedules, and financial and technical reports that the project contractor must submit. 
For the 40 projects we reviewed, RIP project officials and contracting officer's representatives (the technical subject matter experts) conducted a variety of activities to manage and monitor the status of RIP projects, including continued involvement throughout contract implementation to support contractors and help ensure that the contracted services would be delivered according to the schedule, cost, quality, and quantity specified in the contract. RIP project officials said that they manage and oversee projects in a similar fashion as other DOD technology development projects. In managing projects, officials said they review progress and financial reports submitted by the contractor, conduct system review meetings tailored to the size and complexity of the project, and engage in regular communications with contractors through e-mails, phone discussions, and occasional visits to a contractor's facility. For example, on one project, the government team met weekly with a contractor for the first few months of the contract to review the contractor's deliverables. In addition to the written reports, project officials also conducted quarterly program management reviews, which involved an in-person meeting. A project official explained that working with small businesses presented unique challenges for oversight because it requires educating the contractor on DOD requirements such as preliminary design reviews, critical design reviews, and acceptance of testing plans. Despite requiring these extra education efforts, officials said that small companies were very flexible and responsive to oversight activities. In addition to the project officials who help manage the project, the contracting officer also appointed a contracting officer's representative to assess the contractor's performance against contract performance standards and to record and report this information.
ASD/R&E established an annual reporting mechanism in 2013 to assess the status and progress of RIP projects. Federal government internal control standards provide that effective and efficient control activities be established to enforce management directives; these include performance measures and indicators to compare against program goals and objectives. We found that RIP officials within DOD are taking these steps. ASD/R&E collects information from project officials on the contractor's ability to meet key technical performance parameters, using the following three-level scale to assess the performance indicators for each project:

- Green: 80 percent or more of the key performance parameters, goals, or thresholds will be met;
- Yellow: less than 80 percent of the key performance parameters, goals, or thresholds are being met; and
- Red: none of the key performance parameters, goals, or thresholds will be met.

In the first in-process review, conducted in September 2013, ASD/R&E assessed the performance of the 175 ongoing projects that were funded in the fiscal year 2011 acquisition cycle. The review found that 86 percent of the projects were likely to meet 80 percent or more of their key performance parameters or goals. ASD/R&E conducted another in-process review in October 2014 and found that 85 percent of the fiscal year 2011 projects and 78 percent of the fiscal year 2012 projects were likely to meet 80 percent or more of their key performance parameters or goals (see figure 4). In addition to the ASD/R&E in-process review, some military departments and defense components conduct their own performance reviews. For example, the Air Force obtains semi-annual status reports from project managers. These status reports include information associated with technical risks that could negatively impact a project or its technologies, as well as accomplishments, planned actions, and information on projects' transition strategies.
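The three-level scale used in the in-process reviews described above is a simple threshold classification. A minimal sketch, assuming a count of key performance parameters met out of a total (the function name and integer inputs are illustrative, not ASD/R&E's actual tooling):

```python
# Minimal sketch of the Green/Yellow/Red performance rating described above.
# Assumption: performance is summarized as a count of KPPs met out of a total.

def rate_project(kpps_met, kpps_total):
    """Classify a project by the share of key performance parameters met."""
    if kpps_met == 0:
        return "Red"     # none of the KPPs, goals, or thresholds met
    if kpps_met / kpps_total >= 0.8:
        return "Green"   # 80 percent or more met
    return "Yellow"      # less than 80 percent met

print(rate_project(9, 10))  # Green
print(rate_project(5, 10))  # Yellow
print(rate_project(0, 10))  # Red
```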
The Navy requires its system commands' chief technology officer or designee to semi-annually update the project execution plan for their projects. Project execution plans include a description of what constitutes a project's transition, the criteria for test events, and funding. This information enables the Navy to maintain awareness of a project's execution and to provide assistance with transition and eventual deployment to end users if needed. Similarly, the Missile Defense Agency and SOCOM also perform periodic reviews of their RIP projects. Some completed projects successfully transitioned to acquisition programs and other users, but opportunities may exist to improve overall RIP transition outcomes. DOD reported that half of all fiscal year 2011 projects (88 of 175) had funding commitments from military users, which indicates a likelihood that they will transition. We also found that half of the fiscal year 2011 completed projects we reviewed (22 of 44) had technology that actually transitioned to an acquisition program, another military user, or a prime contractor, or was commercialized. The majority of these projects had previously participated in the SBIR program. These transition rates are lower than what we found in our prior review of other DOD technology transition programs, which reported transition rates ranging from about 55 to 85 percent. However, it is difficult to assess the overall success of RIP due to the limited number of completed projects and the lack of established metrics to track whether projects have successfully transitioned to users. We also found several factors that can contribute to the transition success of RIP projects, such as user commitment and mature technology at the beginning of projects, which were not consistently emphasized by all the defense components in the program. As part of its annual in-process review, DOD collects data from RIP project managers on the transition status of their projects.
Based on the October 2014 review, DOD reported that half of fiscal year 2011 projects and 41 percent of fiscal year 2012 projects had out-year funding committed by a partner or user, which DOD uses as an indicator of the likelihood that transition will occur after project completion (see figure 5). Funding is a marker that illustrates the transition partner's level of commitment to transition the technology. A transition partner can be a program of record, a prime contractor, or a user that is willing to dedicate funding to mature a technology past the development stage. The transition status of fiscal year 2011 and 2012 projects varied across the military departments and defense components. At the time of DOD's review, many of the fiscal year 2012 projects were ongoing and may be up to a year away from completion. As a result, transition indicators for fiscal year 2012 projects may not reflect final outcomes. As depicted in the figure, the Air Force and the Navy reported that most of their fiscal year 2011 projects have out-year funding committed, but the Army had significantly fewer projects with a transition funding commitment. According to an Army RIP official, this may be due in part to the Army's selection of projects that addressed a broader set of science and technology needs rather than needs defined by the acquisition community, where partner funding commitments would be more likely. For projects that did not have a commitment from a transition partner, the military departments and defense components reported that in several cases the expected follow-on acquisition, procurement, or support funds were redirected to higher priorities, or requirements for the technologies were cancelled while the projects were underway. To gain additional insights into the range of transition outcomes for RIP projects, we assessed all 52 fiscal year 2011 projects scheduled for completion through July 2014 to determine their transition results.
We found that 44 of these projects had been completed, meaning the contract period of performance ended. Although projects are typically required to be completed within 2 years, several of them had encountered delays and obtained no-cost schedule extensions of 6 months or more from RIP officials to complete the remaining work. Officials told us extensions were granted for various reasons: completing additional testing; better aligning projects with acquisition program schedules; and further demonstrating project technology. Of the 44 projects that were completed, half had technology that successfully transitioned to an acquisition program, a military user, a prime contractor, or became a commercially available product. Table 2 depicts these outcomes, including the range of scenarios for projects that did not transition. The following are a few specific examples of RIP projects that successfully transitioned, demonstrating the array of technologies and the users supported by these technology transitions: Wireless Vibration Recorder: This Navy project provides a compact and lightweight measurement device that can acquire data quickly and easily on aircraft, reducing flight test costs and system development time. The device includes specialized software and sensors to acquire vibration and acceleration data for aircraft internal components to determine why and when components fail. The Navy currently utilizes a test instrumented aircraft to measure vibrations, which is time consuming and costly. RIP officials reported this technology is expected to save the Naval Air Systems Command $3 million to $5 million in the next 4 years in flight test costs. This technology is also available commercially. Enhanced Ground Moving Target Indicator (GMTI)-based Intelligence, Surveillance, and Reconnaissance: This Air Force project provides software applications to be used by joint surveillance target attack radar system operators. 
The applications are intended to interact with GMTI data to facilitate pattern recognition, automatically identify targets, and perform analyses over any set period of time. For example, instead of a dot on the water, a target could be automatically identified as a carrier or a person in a rowboat so the analyst can determine whether to send a camera to the area to investigate further. RIP officials reported the Air Force Research Laboratory has installed the software suite on a data repository that can be used by a wide variety of intelligence analysts. Multi-Missile Common Launch Tube: This United States Special Operations Command project increases the number of munitions that can be carried and launched from a single common launch tube. This technology doubles the number of targets that can be attacked and allows the use of smaller warheads with precision delivery, which can minimize collateral damage. RIP officials reported that ground and flight tests were successful, and the technology has been transitioned into the Command's common launch tube program. Automated Intelligent Training with a Tactical Decision-Making Serious Game: This Army-funded project enhances a software tool for training Army officers in the classroom or field. The enhanced tool improves upon a capability to practice key leader cognitive skills, such as the ability to rapidly assess a dynamic situation, make sound decisions, and effectively direct subordinate units through scenario-based exercises. According to RIP officials, the software was delivered to end users at West Point, and future Army leaders are using this software to improve their tactical command skills. Mine Roller Wheel Assembly Improvement: This project improves the Marine Corps mine roller wheel assembly over the legacy system, making it more effective at neutralizing threats. A mine roller is used to detonate and clear certain classes of buried, pressure-activated explosive threats.
This upgrade is expected to be less expensive than the original roller wheel assembly it is modifying while increasing its effectiveness at a greater range of speeds. In February 2015, the Marine Corps established an acquisition program known as the Wheelbank Suspension Upgrade Program to purchase the upgraded mine roller wheel assembly and is now working toward a production decision expected in the third quarter of fiscal year 2015. In contrast, we found several different reasons projects did not transition. DOD officials told us it may take a year or more to transition a project because project completion may not easily line up with user timelines or the DOD budget cycle. A little more than half of the projects that did not transition had successfully demonstrated the technology as planned but had no user commitment. According to RIP project managers, some of the projects provided value to DOD and may yet transition in the future. One of the Navy projects, for example, which provides a technology to filter out interference in certain military radios, has an interested transition partner but has not yet transitioned due to funding constraints. Officials reported they were unable to secure funding in the fiscal year 2015 and 2016 budgets but have submitted another request for fiscal year 2017. DOD officials also said that several projects failed to transition due to changing user priorities and requirements, such as the phase-out of ongoing missions in Iraq and Afghanistan. Further, according to DOD officials, other projects demonstrated a proof of concept or provided data to develop requirements but were not planned to transition into a program of record. For example, one Army project delivered a prototype field kitchen that demonstrated potential capabilities, and the results will be used to inform future requirements for a program of record planned in 2019.
Other RIP projects that did not transition needed further technology development or additional testing—such as software that needed security accreditation—before they could be integrated into a DOD program. For example, a Navy-funded project for a torpedo array nose assembly, designed to improve the array’s performance in shallow water, needs additional testing to complete the array qualification effort. If successful, this array nose assembly is expected to provide a second source at a lower cost than the current array. On the other hand, officials told us some of the projects did not meet user requirements or did not demonstrate that their technology was better than existing technology. For example, a SOCOM-funded project to develop an upgraded antenna that could operate in the presence of jamming did not meet a performance standard. Also, an Air Force project to test capabilities of a spectral flare showed the flare did not meet all of the requirements and did not offer significant improvement in performance compared to the current flare. Officials said that although projects may appear unsuccessful, in some cases they provided valuable information that could improve future decisions. In our past work on DOD technology transition activities, we found that several factors are important to successful transition, including the selection of projects that have sufficiently mature technologies and early endorsement from intended military users. In addition, once projects are selected, they require effective management processes to support technology transition. RIP officials and project managers that we interviewed also noted that these factors can contribute to successful transition. 
In general, however, at the DOD level there has not been an effort to understand the extent to which these factors may be contributing to differences in transition outcomes and to communicate lessons learned from military departments or defense components with a higher percentage of transition success. Without such an understanding, DOD may be missing opportunities to leverage practices from certain program components that could help improve overall transition success rates (GAO-13-286). Evaluators are to review the support for the technical approach presented in proposal submissions, including information on technology readiness levels (TRLs). We previously reported that early project endorsement from intended users and other key stakeholders in the acquisition community is important for project transition. RIP officials also agreed that having a transition partner, such as a specific program of record, identified prior to the beginning of the project contributes to transition success. Other RIP officials said user participation in ongoing project reviews further ensures the final product is aligned with user requirements. In the required reporting to Congress, all of the projects we reviewed identified an acquisition program or potential user when the project started; however, we found the level of user interest and commitment varied and in some cases changed during project execution. The military departments and defense components varied in the approach and level of support they used to manage RIP technology transition. According to an OSD-level RIP official, the Navy and the Air Force have more robust processes to support transition than the other components. For example, the Navy and the Air Force provide additional internal guidance for RIP participants that describes the roles and responsibilities of participants. These departments also require project reviews every 6 months until project completion and stress the importance of lessons learned for RIP project managers.
For example, the Air Force Life Cycle Management Center’s guidance states that analysis of RIP outcomes at the conclusion of each project cycle can identify lessons learned from processes, successes, challenges, and recommendations that can help build the program knowledge base and establish best practices in program implementation and stakeholder relations. The Navy’s guidance describes the importance of quality control, consistent management, and continuous improvement to help avoid demonstrating a technology that has nowhere to transition. The Navy also uses project execution plans that describe transition plans, codify transition partner agreements, and describe criteria to evaluate seminal transition events. In addition, the Office of Naval Research’s risk management team can provide support for small businesses to stay on track in fulfilling RIP contracts, including making sure companies can ramp up production if their projects are transitioned. We have previously reported that this office has a well-established technology transition focus, which may contribute to project success. Because of this, the Navy may be better aware of the benefits and obstacles associated with a substantial portion of its science and technology (S&T) portfolio. This knowledge can better inform investment decisions made by Navy leadership. While the program has established metrics to determine whether projects have a funding commitment from users and are therefore likely to transition, it does not track the degree to which completed projects have actually transitioned. RIP officials view transition success broadly, as a technology that is inserted into an acquisition program of record, incorporated into a weapon system manufacturing process, adopted for use by a depot or logistics center, or available for purchase on the General Services Administration federal supply schedule or in the commercial market. 
However, RIP does not formally track projects beyond completion and whether they are inserted into acquisition programs or used for other purposes, which limits DOD’s ability to know the final transition outcomes and whether any benefits were achieved. As we have found in the past, tracking and measuring technology transition and the impact of those transitions, such as cost savings or deployment of a technology into a weapon system, provides key feedback that can inform the management of programs such as RIP. We previously recommended that the Secretary of Defense require all technology transition programs in the department to track and measure project outcomes, including long-term benefits for acquisition programs or users. DOD generally agreed with our recommendations, but we found no evidence that DOD has taken any action yet; this recommendation would also apply to RIP, as it is a technology transition program. In addition, DOD has not established an overall transition goal for RIP, so it is not clear what is expected in terms of success. Federal government internal control standards require that effective and efficient control activities be established, including performance measures and indicators to compare against program goals and objectives. Although DOD annually provides a report to Congress on the number and description of projects that are funded through RIP, the department is not required to provide information on transition results. Without this information, Congress lacks insight about the program’s performance, which is important for conducting program oversight. In our discussions with DOD RIP officials, they estimated that about 50 percent or more of the RIP projects will transition when completed, although they said it is too early to accurately assess the overall effectiveness of the program. 
Some officials indicated that not enough projects have been completed and that it could take two or more years after a project is completed before it is successfully transitioned and used. In addition, projects awarded in RIP’s first acquisition cycle may not accurately reflect program performance for the following years, in part because there were implementation challenges associated with starting a new program. Therefore, more data capturing final project outcomes and long-term program experience are needed to accurately assess performance. In our prior review of DOD technology transition programs that provide structured mechanisms and funding to facilitate technology transition, we found that programs reported rates of technology transition ranging from about 55 to 85 percent. DOD officials’ estimate that about 50 percent of RIP projects will transition is at the lower end of this range, as is experience to date with the transition of completed projects. In addition, DOD does not report program results to Congress, as this is not currently required. Consequently, without an overall transition performance target and a better measure of outcomes, it is unclear whether RIP has been successful at transitioning innovative technologies and is worth continuing. We continue to believe that the recommendation we made in 2013—for DOD to track and measure the outcomes of its numerous technology transition programs in order to improve the visibility and management of these efforts—has merit and is applicable to RIP. In addition, although the military departments and defense components have implemented a structured process for soliciting projects, they do not always select projects that have a high likelihood of successfully transitioning, such as those with mature technologies or commitments from transition partners. DOD components are capitalizing on these success factors to varying degrees. 
The Air Force and Navy, in particular, incorporate them in RIP projects and, as a result, appear to be realizing significantly higher transition rates for their projects, even in the program’s early years. Without a more consistent focus on these factors during project selection, opportunities to achieve higher levels of transition success for RIP may remain limited. If it decides to reauthorize RIP, Congress should consider requiring DOD to submit annual reports on the program’s transition results to improve the accountability and transparency of the program. If Congress reauthorizes RIP, then to improve visibility and management of DOD’s ability to transition technologies through the program, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to take the following two actions: (1) establish an overall technology transition goal for RIP; and (2) identify and apply factors that contribute to the likelihood of technology transition success more consistently across the program. We provided a draft of this report to DOD for comment. In its written comments, DOD disagreed with our first recommendation and concurred with the second recommendation. DOD’s comments are reproduced in appendix II. DOD disagreed with our first recommendation, to establish an overall technology transition goal for RIP. The department stated that establishing a transition goal would impede the program’s objective of encouraging innovative technologies in defense programs. In addition, the department raised the concern that, since only a limited number of projects are complete, it is too soon to accurately assess the overall success of RIP, and that establishing a goal is therefore not in the best interest of the program. 
Further, DOD said that in line with the Secretary’s release of the Defense Innovation Initiative in November 2014—a department initiative to pursue innovative ways to sustain and advance military superiority—it needs to maintain flexibility in RIP to address risky technical requirements that may not be mature enough to transition to acquisition programs, but may present opportunities for prototyping, experimentation, or innovative test and evaluation. However, the department said it would continue to measure and assess program transition results annually. We continue to believe that it is important for DOD to establish specific transition performance targets for RIP and assess the extent to which the program is successfully transitioning technologies to support acquisition programs. While we agree that flexibility may be necessary at times in RIP to address risky technologies that may not be mature enough for acquisition programs, the purpose of the program is to target innovative technologies that have the potential to support acquisition programs in the near term. We believe that many other technology development programs and activities within the department’s science and technology enterprise have broader objectives than RIP and align more closely with the goal of the Defense Innovation Initiative. Furthermore, as we pointed out in this report, DOD has historically experienced problems in transitioning technologies out of its science and technology enterprise and into acquisition programs, and RIP was established as one mechanism aimed at facilitating transition. Establishing a goal will not impede, but instead help focus, efforts on meeting this objective. We recognize it is early in the program and that transition goals may need to be adjusted as the program matures. The department agreed on the need to identify and apply factors that contribute to the likelihood of technology transition success more consistently across the program. 
Its response identified several actions already taken and indicated that, if Congress reauthorizes RIP, the department will continue to identify additional best practices that contribute to transition success. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Under Secretary of Defense for Acquisition, Technology, and Logistics, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. The objectives of this review were to assess the extent to which (1) the Department of Defense (DOD) has established a competitive and merit-based process to solicit and award Rapid Innovation Program (RIP) contracts, (2) DOD has established practices to manage and oversee the execution of RIP projects, and (3) RIP is meeting its objective of rapidly inserting innovative technologies in defense acquisition programs. To assess how DOD solicited and awarded RIP projects, we examined DOD and military department policies and guidance on the program, such as DOD’s annual implementation guidelines, any internal guidance developed by the military departments and defense agencies participating in the program, and other policy documents. For purposes of this report, we defined competitive awards as those using the competitive procedures listed in Federal Acquisition Regulation (FAR) Subpart 6.1 (Full and Open Competition). Also, we used FAR Part 35 (Research and Development Contracting) to identify the policies and procedures for using broad agency announcements. 
We analyzed the broad agency announcements issued between 2011 and 2014 by the military departments and defense components. For a sample of 40 projects awarded using fiscal year 2011 and 2012 funding, we examined source selection documents such as project white papers, proposals, and project scoring sheets used in the source selection process. We interviewed officials responsible for developing the guidance, RIP leads for each of the services and components, and those involved in the source selection process. In addition, we reviewed the Federal Acquisition Regulation and DOD Source Selection Procedures. The results from the selected 40 projects cannot be generalized to all RIP projects but provide valuable insight. To assess whether DOD has established practices to manage and oversee the execution of RIP projects, we reviewed a nongeneralizable random sample of 40 RIP projects across DOD that were awarded with fiscal year 2011 and 2012 funding. The sample included 20 projects from each fiscal year, with 5 projects from each of the military departments and the defense agency component. For these projects, we reviewed project contracts, contractor reports, and agency project assessments and technical performance data. We also interviewed officials responsible for high-level project oversight and management, such as program leads for the services and defense agency components, in addition to contracting officers and contracting officer’s representatives responsible for day-to-day management activities. Further, we reviewed data from DOD’s in-process review on the technical performance of ongoing RIP projects. We did not independently assess the accuracy of technical performance data, but reviewed and discussed the data with DOD subject matter experts and determined it was sufficiently reliable for the purposes of this report. 
To determine whether the RIP is meeting its goal of rapidly inserting innovative technologies into acquisition programs, we reviewed available program monitoring information and assessed the transition status of all projects that were scheduled for completion by the end of July 2014, which included 52 projects that were awarded with fiscal year 2011 funding. For these projects, we interviewed RIP project officials and collected and analyzed documents outlining processes and procedures used to manage the projects and promote transition opportunities. In these interviews, we discussed whether projects successfully transitioned and factors that may have helped or hindered project execution and/or successful transition to defense acquisition programs or other military users. We also reviewed prior GAO studies on DOD technology transition and best practices for transition to identify what practices may facilitate technology transition. Further, we reviewed data from DOD in-process reviews on the status of RIP projects, which provided information on the likelihood of fiscal year 2011 and 2012 projects transitioning. We assessed the accuracy of the transition performance data from DOD’s in-process review by comparing its results to the 52 projects that we examined. We determined the data from the in-process review was sufficiently reliable for the purposes of this report. We conducted this performance audit from April 2014 to May 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, John Oppenheim (Assistant Director), Marie P. Ahearn, LeAnna Parkey, Kenneth E. Patton, Mark F. 
Ramage, Jose Ramos, Robert Swierczek, and Oziel A. Trevino made key contributions to this report.
DOD relies on technology innovation to maintain superior weapon systems. However, a long-standing challenge has been ensuring that high value technologies are mature and available for military users. The Ike Skelton National Defense Authorization Act for Fiscal Year 2011 required DOD to establish RIP to facilitate the transition of innovative technologies into acquisition programs, and over $1.3 billion has been appropriated to the program since its inception. Senate Report No. 113-44 included a provision that GAO review the execution of RIP. This report examines the extent to which (1) DOD has established a competitive, merit-based process to award RIP contracts; (2) DOD has established practices to manage the execution of RIP projects; and (3) RIP is meeting its objective of rapidly inserting innovative technologies in defense acquisition programs. GAO reviewed RIP documentation, interviewed DOD and military department officials, and reviewed a nongeneralizable sample of 40 projects awarded with fiscal year 2011 and 2012 funding to assess DOD management practices as well as 52 fiscal year 2011 funded projects scheduled for completion through July 2014 to assess transition outcomes. The Department of Defense (DOD) has established a competitive, merit-based process to solicit proposals from interested contractors, review and select projects based on military needs and standardized evaluation criteria, and award contracts to execute Rapid Innovation Program (RIP) projects. To date, the process has been lengthy, taking about 18 months to implement, but interest from contractors has been high. Between fiscal years 2011 and 2015, the military services and defense agency components received more than 11,000 white papers on proposed technologies from contractors and will have awarded contracts for about 435 projects—primarily to small businesses—when the fiscal year 2014 solicitation is completed. 
The military services and defense components have practices and tools in place to manage and monitor the execution of projects, which are similar to those they use for other science and technology projects. For example, project officials review contractor reports, conduct system reviews, and engage in regular communications with contractors to determine the progress of projects. Also, DOD's 2014 annual review found that 85 percent of the fiscal year 2011 funded projects and 78 percent of the fiscal year 2012 funded projects were likely to meet 80 percent or more of their technical performance goals. Some completed projects have successfully transitioned technology to acquisition programs and other military users. DOD officials estimate that 50 percent of all fiscal year 2011 funded projects (88 of 175) have out-year funding commitments from military users, indicating the likelihood they will transition technologies. GAO assessed 44 projects completed in July 2014 and found that 50 percent successfully transitioned to acquisition programs or other users. However, it is too soon to accurately assess the overall success of RIP due to the limited number of completed projects as well as the lack of an overall program transition goal and effective metrics to track the degree to which projects have actually transitioned. GAO found that several factors can contribute to the transition success of RIP projects, such as having military user commitment and mature technology when projects are started. However, DOD has not made an effort to understand how these factors may be contributing to differences in transition success or to apply lessons learned from defense components with a higher rate of transition. GAO recommends that DOD establish a program transition goal, and identify and apply factors that lead to transition success. DOD disagreed on the need for a goal, stating it would impede RIP flexibility, but agreed to identify transition success factors. 
GAO believes having a goal would improve DOD's ability to transition technologies.
DOD’s Unified Command Plan sets forth basic guidance to all combatant commanders and establishes the missions, responsibilities, and areas of geographic responsibility among all the combatant commands. There are currently nine combatant commands—six geographic and three functional. The six geographic combatant commands have responsibilities for accomplishing military operations in defined areas of operation and have a distinct regional military focus. The three functional combatant commands operate worldwide across geographic boundaries and provide unique capabilities to the geographic combatant commands and the military services. In addition, each combatant command is supported by multiple service component commands that help provide and coordinate service-specific forces, such as units, detachments, organizations, and installations, to help fulfill the combatant commands’ current and future operational requirements. Figure 1 is a map of the headquarters locations of the functional combatant commands, to include their subordinate unified commands and their respective service component commands. Unless otherwise directed by the President or the Secretary of Defense, the combatant commanders organize the structure of their commands as they deem necessary to carry out assigned missions. The commands’ structure may include a principal staff officer; personal staff to the commander; a special staff group for technical, administrative, or tactical advice; and other staff groups that are responsible for managing personnel, ensuring the availability of intelligence, directing operations, coordinating logistics, preparing long-range or future plans, and integrating communications systems. The commands may also have liaisons or representatives from other DOD agencies and U.S. government organizations integrated into their staffs to help enhance the commands’ effectiveness in accomplishing their missions. 
While the commands generally conform to these organizational principles, as described in Joint Publication 1, Doctrine for the Armed Forces of the United States, there may be variations in a command’s structure based on its unique mission areas and responsibilities. Each command’s headquarters includes permanent authorized positions for military, civilian, and other personnel responsible for managing the day-to-day operations of the command. Title 10 of the U.S. Code assigns the Secretaries of the military departments responsibility for a variety of tasks specific to their respective forces, to include organizing, equipping, and training tasks. In addition to these service-specific tasks, Title 10 of the U.S. Code also assigns the Secretaries of the military departments responsibility for assisting the combatant commands, to include responsibility for assigning all forces under their jurisdiction to the combatant commands to perform missions assigned to those commands and responsibility for carrying out functions to fulfill current and future operational requirements of the combatant commands. In addition, DOD Directive 5100.03, Support of the Headquarters of Combatant and Subordinate Unified Commands, states that the military departments—as combatant command support agents—are responsible for programming, budgeting, and funding the administrative and logistical support of the headquarters of the combatant commands and subordinate unified commands. On an annual basis, each of the three military departments assesses needs and develops a request for funding as part of its operation and maintenance budget justification to meet this requirement to support the combatant commands and subordinate unified commands. The directive assigns each military department responsibility for specific combatant commands and subordinate unified commands. 
As the combatant command support agent for the three functional combatant commands, the Air Force is responsible for allocating funding to the combatant commands’ mission areas, including the costs for civilian salaries, awards, and travel. As such, the operating costs of these commands are generally subsumed within the Air Force’s budget and funded through operation and maintenance appropriations. Table 1 provides a listing of the functional combatant commands, their subordinate unified commands, and the military departments that support them. Some of the functional combatant commands also receive funding through appropriations other than operation and maintenance. Specifically, Title 10 of the U.S. Code gives the Commander, Special Operations Command, the authority to prepare and submit to the Secretary of Defense program recommendations and budget proposals for special operations forces and for other forces assigned to SOCOM through a separate major force program category. In addition, funding for TRANSCOM’s costs to support headquarters operations comes primarily from the Air Force’s Transportation Working Capital Fund, which is part of the Air Force Working Capital Fund. Unlike the other combatant commands, TRANSCOM is operated in a fee-for-service manner and charges its customers for transportation services. TRANSCOM and its service component commands bill the customers for services rendered, customers transfer funds into the Transportation Working Capital Fund to pay the bill, and TRANSCOM and its service component commands receive payment from the fund. Officials noted that one component of these transportation costs is overhead, to include pay for civilian personnel, material and supplies, equipment, travel, and facility operations. Our analysis shows that the functional combatant commands’ number of authorized positions has substantially increased since fiscal year 2004, primarily due to added missions and responsibilities. 
Taken together, the authorized number of military and civilian positions for the three functional combatant commands almost doubled from 5,731 in fiscal year 2004 to 10,515 in fiscal year 2013. All three functional combatant commands’ number of authorized positions grew. SOCOM’s authorized military and civilian positions more than doubled from 1,885 in fiscal year 2004 to 4,093 in fiscal year 2013. According to SOCOM officials, an increase in authorized positions at the Special Operations Research, Development, and Acquisition Center and the Joint Special Operations Command contributed to the growth at the command. TRANSCOM’s number of authorized military and civilian positions more than doubled from 863 in fiscal year 2004 to 1,956 in fiscal year 2013. According to TRANSCOM officials, the realignment of the Defense Courier Service from Air Mobility Command to TRANSCOM and the Joint Enabling Capabilities Command from the Joint Staff to TRANSCOM were the primary contributors to the increase in authorized positions at the command. STRATCOM’s authorized military and civilian positions increased by 50 percent from 2,983 in fiscal year 2004 to 4,466 in fiscal year 2013. During this period, the creation of new organizations at STRATCOM to fill additional mission requirements, including Joint Functional Component Commands (Integrated Missile Defense; Intelligence, Surveillance, and Reconnaissance; Global Strike; and Space), two Centers (U.S. Strategic Command Center for Combating Weapons of Mass Destruction and the Joint Warfare Analysis Center), and a subordinate unified command (United States Cyber Command), were the primary contributors to the increase in authorized positions at the command. Figure 2 shows the number of authorized military and civilian positions at the functional combatant commands. 
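The growth multiples reported above can be recomputed directly from the authorized-position counts in the text. The following is an illustrative sketch (the command names and counts are those reported; the script itself is not part of GAO's methodology):

```python
# Authorized military and civilian positions reported in the text,
# as (fiscal year 2004, fiscal year 2013) pairs.
positions = {
    "All three commands": (5731, 10515),
    "SOCOM": (1885, 4093),
    "TRANSCOM": (863, 1956),
    "STRATCOM": (2983, 4466),
}

def percent_change(start, end):
    """Percent growth from the starting count to the ending count."""
    return (end - start) / start * 100

for command, (fy2004, fy2013) in positions.items():
    ratio = fy2013 / fy2004
    print(f"{command}: {ratio:.2f}x ({percent_change(fy2004, fy2013):.0f}% growth)")
```

Running this confirms the characterizations in the text: the combined total grew by roughly 1.8x ("almost doubled"), SOCOM and TRANSCOM each more than doubled, and STRATCOM grew by about 50 percent.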
As with our findings in our May 2013 report on the geographic combatant commands, we found that, over time, the functional combatant commands have become more reliant on civilian personnel to meet their mission needs. Specifically, we found that the number of authorized civilian positions at the functional combatant commands almost tripled, from about 1,900 in fiscal year 2004 to about 5,200 in fiscal year 2013. According to DOD officials, the increase in authorized civilian positions is the result of attempts to rebalance workload and become a cost-efficient workforce, namely by converting positions filled by military personnel or in-sourcing services performed by contractors to civilian positions, and adding civilians in specific functional areas, such as intelligence and cyber, to support warfighter needs. Authorized civilian positions at the functional combatant commands increased from about one-third of authorized positions in 2001 to about half in 2013. However, the functional combatant commands also increased the number of military personnel to support the headquarters. This is reflected in our analysis, which shows that from fiscal years 2004 through 2013 the number of authorized military positions also increased, by about 40 percent, from 3,850 to more than 5,300. Figure 3 shows changes in the functional combatant commands’ number of authorized military and civilian positions from fiscal years 2004 through 2013. Our analysis of data provided by the service component commands of the functional combatant commands showed that the service component command positions also increased, although not as much as in the functional combatant commands that they support. Specifically, total authorized military and civilian positions for the service component commands increased from about 6,675 in fiscal year 2002 to about 7,815 in fiscal year 2013. 
Changes in the authorized military and civilian positions at STRATCOM’s and SOCOM’s service component commands, as well as the establishment of new commands, drove the overall increase in total authorized positions, while the service component commands supporting TRANSCOM reduced their total authorized positions over the period. Among the military services, the Air Force’s service component commands saw the greatest increase in authorized positions, accounting for more than two-thirds of the total increase in authorized military and civilian positions. This increase is primarily attributable to the establishment of Air Force Global Strike Command—a service component command to STRATCOM—which was activated in fiscal year 2009 to develop and provide combat-ready forces for nuclear deterrence and global strike operations, and is responsible for the nation’s intercontinental ballistic missile wings. The creation of Air Force Global Strike Command added over 330 military positions in fiscal year 2010. Figure 4 shows the increase in authorized positions at the service component commands that we reviewed. We found that the service component commands’ mix of authorized military and civilian positions has changed slightly since fiscal year 2002. In fiscal year 2002 the mix was 55 percent civilian and 45 percent military, and in fiscal year 2013 the mix was about 60 percent civilian and 40 percent military. The availability of data on contractor full-time equivalents varied across the combatant commands, and thus trends in full-time equivalents were not identifiable. DOD officials stated the department generally tracks and reports expenditures for contract services and that the combatant commands were not required to maintain historical data on the number of contractor personnel. As a result, we found that the combatant commands had taken varied steps to collect data on contractor full-time equivalents. 
We also found that the data on the number of personnel performing contract services at the service component commands varied or were unavailable, and thus trends could not be identified. We found that some service component commands do not maintain data on the number of personnel performing contract services, and others used different methods to track these personnel, for instance counting the number of contractors on hand or the number of identification badges issued. In recent years, Congress has enacted legislation to help improve DOD’s ability to manage its acquisition of services; to make more strategic decisions about the right workforce mix of military, civilian, and contractor personnel; and to better align resource needs through the budget process to achieve that mix. For example, Section 2330a of Title 10 of the U.S. Code requires DOD to annually compile and review an inventory of activities performed by contractors pursuant to contracts for services. Moreover, our work over the past decade on DOD’s contracting activities has noted the need for DOD to obtain better data on its contracted services and personnel to enable it to make more-informed management decisions, ensure department-wide goals and objectives are achieved, and have the resources to achieve desired outcomes. In response to our past work, DOD outlined its approach to document contractor full-time equivalents and collect personnel data from contractors in DOD’s inventory of contract services in a Secretary of Defense memorandum, Combatant Command (COCOM) Civilian and Contractor Manpower Management. However, DOD does not expect to fully collect contractor personnel data until fiscal year 2016. In addition, Congress has limited the amounts DOD may obligate for contract services to the amounts requested for these services in the fiscal year 2010 President’s Budget Request. Furthermore, the National Defense Authorization Act for Fiscal Year 2014 extends these limits into fiscal year 2014. 
Total costs to support headquarters operations at the three functional combatant commands we reviewed increased substantially from fiscal years 2001 to 2013. Our analysis of data provided by the commands shows that the costs to support headquarters operations—including costs for civilian pay, contract services, travel, and equipment—increased more than fourfold in constant fiscal year 2013 dollars, from about $296 million in fiscal year 2001 to more than $1.236 billion in fiscal year 2013. A primary driver for the growth in costs has been the increase in SOCOM’s costs to support headquarters operations, which grew more than sixfold, from about $75 million in fiscal year 2001 to almost $467 million in fiscal year 2013. Costs increased across SOCOM headquarters and subordinate organizations, to include the Special Operations Research, Development and Acquisition Center, the theater special operations commands, and the Joint Special Operations Command. Also, STRATCOM’s costs to support headquarters operations almost quadrupled, from about $164 million in fiscal year 2001 to almost $624 million in fiscal year 2013. This increase in STRATCOM’s costs was largely driven by the costs to establish and operate several new subordinate organizations—Joint Functional Component Commands and U.S. Cyber Command—which reflect new missions and responsibilities assigned to STRATCOM over time. TRANSCOM’s costs to support headquarters operations more than doubled, from about $56 million in fiscal year 2001 to $145 million in fiscal year 2013. Growth in civilian pay and purchased services and equipment drove the increases in TRANSCOM’s costs to support headquarters operations. In addition, the transfer of the Joint Enabling Capabilities Command from the Joint Staff in fiscal year 2012 added tens of millions of dollars to TRANSCOM’s headquarters support costs. 
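The growth multiples cited in this section can be checked arithmetically. The short sketch below recomputes each ratio from the rounded dollar figures reported above (in millions of constant fiscal year 2013 dollars); the dictionary labels are ours, used only for illustration.

```python
# Endpoint costs (FY2001, FY2013) in millions of constant FY2013 dollars,
# as reported in the text.
costs = {
    "All three commands": (296, 1236),  # "more than fourfold"
    "SOCOM": (75, 467),                 # "more than sixfold"
    "STRATCOM": (164, 624),             # "almost quadrupled"
    "TRANSCOM": (56, 145),              # "more than doubled"
}

for name, (fy2001, fy2013) in costs.items():
    # Growth multiple over the period.
    print(f"{name}: {fy2013 / fy2001:.1f}x")
```

Dividing the endpoint figures confirms each characterization: roughly 4.2 times overall, 6.2 for SOCOM, 3.8 for STRATCOM, and 2.6 for TRANSCOM.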
Figure 5 shows the overall change in the costs to support headquarters operations at the three functional combatant commands that we reviewed for fiscal years 2001 and 2013. Total costs in constant fiscal year 2013 dollars to support headquarters operations increased slightly at the service component commands we reviewed, from about $614 million in fiscal year 2008 to about $657 million in fiscal year 2013. The Air Force’s service component commands saw the greatest increase in costs to support headquarters operations, primarily due to the establishment of Air Force Global Strike Command, which first reported costs in fiscal year 2010. Air Force costs were also driven by growth in costs for civilian pay associated with Air Force Space Command. In addition, SOCOM’s service component commands experienced cost increases from fiscal years 2008 through 2013, primarily due to increases in costs for civilian pay. Conversely, TRANSCOM’s service component commands reduced their costs to support headquarters operations over the time period. Figure 6 shows the changes in the costs to support headquarters operations at the service component commands that we reviewed for fiscal years 2008 and 2013. In 2013, the Secretary of Defense set a target for reducing management headquarters budgets by 20 percent, but we found that DOD did not have an accurate accounting of the budgets and personnel associated with management headquarters to use as a starting point for reductions. Our work also found that management headquarters include about a quarter of the personnel at the commands—of the 10,500 authorized positions at the functional combatant commands, about 2,500 are considered to be management headquarters. As a result, about three-quarters of the authorized positions at the commands in our review are not included in the potential reductions. 
Moreover, without a clear and consistently applied starting point for reductions, it will be difficult for DOD to reliably track savings to management headquarters in the future. DOD relied on self-reported and potentially inconsistent data when implementing the Secretary of Defense’s planned headquarters reductions. In July 2013, the Deputy Secretary of Defense announced in a memorandum that the Secretary of Defense had directed reductions to DOD headquarters, to include the functional combatant commands, in an effort to streamline DOD’s management and eliminate lower-priority activities. The memorandum directed a 20 percent reduction to DOD components’ total management headquarters budgets for fiscal years 2014 through 2019, including costs for civilian personnel, contract services, facilities, information technology, and other costs that support headquarters functions. As outlined in budget documents, the targeted savings goal of 20 percent of headquarters operating budgets is to be realized by fiscal year 2019, with incremental savings each year beginning in fiscal year 2015. DOD budget documents project the reductions will yield a total savings of about $5.3 billion over the period, with most savings coming in fiscal year 2019. DOD began efforts to initiate reductions to headquarters during the development of the fiscal year 2015 President’s Budget. According to DOD officials, the department used the final year of the 5-year defense plan within the fiscal year 2014 President’s Budget, fiscal year 2018, as the baseline for the targeted savings goal of 20 percent of headquarters budgets through fiscal year 2019, with costs adjusted for inflation. Officials told us that the parameters for these savings were established so the department could achieve already-planned savings from prior efficiencies. 
DOD officials noted that the Secretary of Defense provided general guidance as to what should be considered management headquarters for the commands to use when identifying their total headquarters budgets and directed commands to include only operation and maintenance funding. Because the department does not have complete and reliable information on the resources being devoted to management headquarters, officials noted that each individual component was asked to identify and self-report its management headquarters operating budget from which reductions would be based. DOD officials further noted that after the individual components determined what they considered their management headquarters budgets to be, officials from DOD’s Office of Cost Assessment and Program Evaluation reviewed the documentation to ensure it was reflective of the intent of the reductions. DOD focused its reductions on management headquarters, or major DOD headquarters activities, because, according to officials, doing so would ensure that the department saved as much money as possible for the warfighting elements. However, by relying on DOD components to self-report their total management headquarters budgets, DOD cannot ensure these self-reported budgets were captured consistently or reflect total headquarters costs among the commands. DOD officials reported that each of the combatant commands developed different approaches for identifying the population of resources on which to base reductions. Ultimately, officials determined that reductions would be based only on operation and maintenance funds. We also found that the underlying data are often tracked in an inconsistent manner. During the course of our review, we found that the functional combatant commands and service component commands had different ways of capturing total costs to support headquarters operations. All of the commands included costs for categories like civilian compensation, and travel and transportation expenses. 
However, some commands included categories such as software for command-specific systems as well as command-specific equipment. Based on discussions with DOD officials, it is unclear whether the commands were consistent in their approaches for identifying and self-reporting the information they provided. Our analysis shows that not all costs and authorized positions are included in DOD’s planned headquarters reductions and that all of the functional combatant commands exclude most of their authorized staffs from management headquarters totals. While the trends we described earlier in this report include the entire commands, we gathered information directly from the commands to determine how much of their organizations they considered management headquarters. We found that less than a quarter of their personnel are designated as management headquarters. This designation is critical because the Deputy Secretary of Defense’s July 2013 memorandum directed a 20 percent reduction to DOD components’ total management headquarters budgets for fiscal years 2014 through 2019, including costs for civilian personnel, contract services, facilities, information technology, and other costs that support headquarters functions. In addition, the memorandum noted that organizations should strive for a goal of 20 percent reductions in authorized government civilian staff at the headquarters as well as a goal of 20 percent reductions in military personnel billets on management headquarters staffs. On the basis of our analysis of data on authorized positions at the functional combatant commands and their service component commands, we found that the commands designate less than a quarter of the total authorized positions as part of their management headquarters functions. Specifically, about 2,500 of about 10,500 total authorized positions are accounted for in the functional commands’ management headquarters position totals. 
As a result, a 20 percent reduction to management headquarters could result in a relatively small cut to the total number of positions at the three functional combatant commands. Specifically, as figure 7 shows, a 20 percent reduction to management headquarters positions means that, combined, the functional combatant commands would have to eliminate about 500 positions, or less than 5 percent of the overall authorized positions at the commands. Compared to the functional combatant commands, the service component commands have a larger percentage of authorized positions included in their management headquarters totals. Based on our analysis of data on authorized positions at the service component commands, about 6,000 of about 7,800 total authorized positions are accounted for in the management headquarters position totals (77 percent). As such, a 20 percent reduction to management headquarters positions means that, combined, the service component commands would have to eliminate about 1,200 positions, or about 15 percent of the overall authorized positions at the commands. This reduction is larger than the 5 percent reduction that would be required at the functional combatant commands if the reductions are based on management headquarters totals, as DOD calculated them. We further analyzed data provided by the individual functional combatant commands to determine what they included in their management headquarters totals. The proportion of personnel considered to be in management headquarters positions at the three functional combatant commands ranged from 20 to 28 percent of their overall personnel based on their 2013 authorized positions. DOD officials explained that the positions that are not accounted for in management headquarters positions perform more operationally focused tasks and functions. 
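The arithmetic behind these position figures can be illustrated with a short sketch, using the rounded totals reported above (the dictionary labels are ours, for illustration only):

```python
# Rounded FY2013 authorized-position counts from the report.
# "mgmt_hq" is the number of positions each group designates as
# management headquarters; "total" is all authorized positions.
groups = {
    "functional combatant commands": {"mgmt_hq": 2_500, "total": 10_500},
    "service component commands":    {"mgmt_hq": 6_000, "total": 7_800},
}

for name, g in groups.items():
    cut = 0.20 * g["mgmt_hq"]       # 20 percent reduction target
    share = 100 * cut / g["total"]  # cut as a share of all authorized positions
    print(f"{name}: cut ~{cut:,.0f} positions ({share:.1f}% of all authorized)")
```

A 20 percent cut to the functional combatant commands' management headquarters works out to about 500 positions, or under 5 percent of all authorized positions, while the same percentage cut at the service component commands works out to about 1,200 positions, or roughly 15 percent, matching the figures in the text.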
For example, STRATCOM officials stated that positions within the command’s Joint Functional Component Commands for Global Strike; Intelligence, Surveillance, and Reconnaissance; Integrated Missile Defense; and Space are not included in the command’s management headquarters totals because the positions within these components and the missions they perform are operational in nature. Similarly, TRANSCOM officials told us that because the Joint Enabling Capabilities Command is a deployable operational command component, all authorized positions within this component are excluded from the command’s management headquarters totals. Moreover, officials at each of the functional combatant commands told us that some authorized positions within their headquarters—such as those within the intelligence and operations directorates—are not considered management headquarters positions because these positions perform operational tasks and functions. Examples of the positions within the individual commands that are included in management headquarters and those that are not are as follows:
About 20 percent of SOCOM’s reported total authorized positions, or about 800 of 4,100 authorized positions, are included in the command’s management headquarters position totals. According to officials, positions not included in management headquarters functions include 267 authorized positions in the Special Operations Research, Development, and Acquisition Center; the entirety of the Joint Special Operations Command headquarters; and certain positions within command directorates such as intelligence and operations. For example, only 1 of the 473 authorized military and civilian positions in the intelligence directorate is included in the command’s management headquarters position totals.
About 28 percent of STRATCOM’s reported total authorized positions, or about 1,200 of 4,500 total positions, are included in the command’s management headquarters position totals. According to officials, positions not counted as part of the command’s management headquarters totals include 1,544 authorized positions in the command’s Joint Functional Component Commands; 918 authorized positions at U.S. Cyber Command; and 64 authorized positions in the command’s J9 Mission Assessment and Analysis Directorate.
About 24 percent of TRANSCOM’s reported total authorized positions, or about 500 of 2,000 positions, are included in the command’s management headquarters position totals. According to officials, positions that are excluded from this total include about 260 authorized positions that are part of the Defense Courier Service, among others.
Figure 8 shows the number of authorized positions within the functional combatant commands in fiscal year 2013 included in management headquarters totals and the number not included. While the functional combatant commands have components and associated positions that are more operational in nature, we found that each of these operational components has personnel that perform management headquarters functions, such as conducting planning, budgeting, and developing policies. For example, STRATCOM officials noted that the Joint Functional Component Commands have resource-management personnel that manage the component commands’ funding, which is a headquarters function, as defined in DOD Instruction 5100.73, Major DOD Headquarters Activities. However, STRATCOM excludes these personnel from its management headquarters totals because, according to STRATCOM officials, the headquarters directorates at these component commands are relatively small and personnel rely heavily on STRATCOM for support in these functional areas. 
Moreover, a majority of the positions in SOCOM’s research, development, and acquisition center, which manages and supports the development, acquisition, and fielding of critical items for special operations forces, are not included in the command’s management headquarters totals even though these personnel perform headquarters-specific functions as defined in DOD Instruction 5100.73, Major DOD Headquarters Activities. DOD’s reduction initiatives targeted management headquarters, and the department reported a savings estimate from these reductions in its fiscal year 2015 budget submission, but because the department did not have a reliable way to determine the resources being devoted to such headquarters as a starting point, actual savings will be difficult to track. Specifically, DOD officials acknowledged the limitations of management headquarters data, stating that they did not have an operationally clear definition and good data about what constituted management headquarters, and agreed that this made it difficult to establish a starting point for reductions. However, the department does not have any plans to reevaluate the baseline on which the reductions are based, in part because it does not have an alternative source for complete and reliable data. Moreover, DOD reported in its fiscal year 2015 budget submission that reductions to management headquarters staffs will result in a savings of $5.3 billion through fiscal year 2019 in comparison to DOD’s overall expected $2.7 trillion budget over those fiscal years. The department based this total on incremental savings from reductions being realized each year from fiscal years 2015 through 2019. However, this savings represents only about two-tenths of 1 percent of the department’s expected budget over that period. 
If DOD’s headquarters reductions do not have a clearly defined and consistently applied starting point on which to target savings—and reductions are only focused on what the commands have self-reported as management headquarters activities— then the department may not be able to track its savings to management headquarters or assure that reductions are achieved as intended. Accounting for management headquarters is a long-standing challenge for DOD that has created problems before in tracking savings. In October 1997, in the wake of the mid-1990s military drawdown, we found that total personnel and costs of defense headquarters were significantly higher than were being reported. At the time, we found that about three-fourths of subordinate organizations excluded from the management headquarters accounting were actually performing management or management support functions and that such accounting masked the true size of DOD’s headquarters organizations. In March 2012, we concluded that DOD’s data on its headquarters personnel lacked completeness and reliability necessary for use in making efficiency assessments and decisions. We recommended that the Secretary of Defense revise DOD Instruction 5100.73, Major DOD Headquarters Activities, to include all headquarters organizations; specify how contractors performing headquarters functions will be identified and included in headquarters reporting; clarify how components are to compile the information needed for headquarters-reporting requirements; and establish time frames for implementing actions to improve tracking and reporting of headquarters resources. DOD generally concurred with the findings and recommendations in that report and is taking steps to address the recommendations. For example, DOD has begun the process of updating DOD Instruction 5100.73, Major DOD Headquarters Activities, to include all major DOD headquarters activity organizations. 
However, the department has not yet taken actions to fully address our recommendations. In April 2003, we testified on the importance of periodically reexamining whether current programs and activities remain relevant, appropriate, and effective in supporting an agency’s ability to deliver on its mission, and noted that restructuring efforts must be focused on clear goals. Moreover, key questions that agencies should consider when evaluating organizational consolidation emphasize that the key to any consolidation initiative is the identification of and agreement on specific goals, with the goals of the consolidation being evaluated against a realistic assessment of how the consolidation can achieve them. We further noted that any consolidation initiatives must be grounded in accurate and reliable data. Unless these issues are addressed, the department may be unable to convince external stakeholders, such as Congress, that its actions will address overhead in a meaningful way. Section 904 of the National Defense Authorization Act for Fiscal Year 2014 requires that DOD develop and submit a plan for streamlining management headquarters, to include the combatant commands, by June 2014. The plan is to include a description of the planned changes or reductions in staffing and services and the estimated cumulative savings to be achieved from fiscal years 2015 through 2024. According to officials, DOD has not yet decided how it plans to track its management headquarters reductions and report to Congress to satisfy this statutory requirement. Without establishing a starting point for reductions that includes all headquarters personnel and resources within these commands, it is unclear how the department will provide reliable information to Congress. 
Moreover, since the universe of resources that DOD has identified for headquarters reductions is relatively small compared to the overall size of the functional combatant commands, unless the department reevaluates its decision to base reductions on management headquarters, it may not ultimately realize significant savings. As it faces a potentially extended period of fiscal constraints, DOD has concluded that reducing the resources it devotes to headquarters is a reasonable area to achieve cost savings. However, by focusing the reductions on management headquarters budgets and personnel—which tend to be inconsistently defined and often represent a small portion of the overall headquarters—the department is, in effect, shielding much of the resources it directs to headquarters organizations like those of the functional combatant commands. Unless the department reevaluates its decision to focus reductions on management headquarters and sets a clearly defined and consistently applied starting point from which to base budgetary and staff reductions at headquarters organizations and track them, DOD is likely to face difficulties ensuring that its actions result in significant overhead savings. As the department continues to identify efficiencies in its operations and potential reductions in overhead, accurately identifying the universe of resources that DOD dedicates to headquarters and using that as a starting point for reductions would help DOD ensure it achieves savings. Finally, managing headquarters resources is a long-standing challenge at DOD. We have previously recommended that DOD improve the accounting of management headquarters functions and develop a process to periodically revalidate the size and structure of the combatant commands. We believe that these recommendations apply to the functional combatant commands in this review and are still valid, and thus we are making no new recommendations on these issues. 
In order to improve the management of DOD’s headquarters-reduction efforts, we recommend that the Secretary of Defense take the following three actions: 1. Reevaluate the decision to focus reductions on management headquarters to ensure the department’s efforts ultimately result in meaningful savings. 2. Set a clearly defined and consistently applied starting point as a baseline for the reductions. 3. Track reductions against the baselines in order to provide reliable accounting of savings and reporting to Congress. We provided a draft of this report to DOD for review and comment. In written comments on a draft of this report, DOD partially concurred with the first recommendation and concurred with the second and third recommendations. DOD’s comments are summarized below and reprinted in their entirety in appendix VIII. DOD partially concurred with the first recommendation that the Secretary of Defense reevaluate the decision to focus reductions on management headquarters to ensure the department’s efforts ultimately result in meaningful savings. DOD stated that this department-wide recommendation would garner greater savings. However, DOD officials raised concerns that the recommendation seemed to be outside the scope of the review, which focused on the functional combatant commands, and with our distinction between management headquarters and below-the-line organizations and the functions that personnel in these positions perform. We agree that the recommendation has implications beyond the functional combatant commands. While our review was focused on the functional combatant commands, the issue we identified is not limited to these commands. The findings related to the three functional combatant commands illustrate a fundamental challenge facing the department in its efforts to reduce headquarters overhead. As discussed in this report, we have previously reported on problems with the way that DOD accounts for management headquarters across the department. 
As part of this review, we found that DOD did not have an accurate accounting of the budgets and personnel associated with management headquarters to use as a starting point for reductions. Given the longstanding issues with accounting for management headquarters, we believe the recommendation that the Secretary reevaluate the decision to focus the department’s reduction efforts on management headquarters is appropriate. The department also stated in its letter that while the Secretary of Defense’s reductions were focused on management headquarters, the military services were allowed to reduce below-the-line organizations— those not designated as management headquarters—which includes elements of the combatant commands. According to DOD, these reductions will range from 3 percent up to 15 percent depending on the military service. We recognize that the military services plan to implement reductions that may result in savings at some non-management headquarters organizations across the department. However, the department’s response did not delineate how these reductions were determined, how they were applied by the military services, or how much the actions would ultimately save the department. Further, in discussions, DOD officials raised concerns about our distinction between management headquarters and below-the-line organizations and the functions that personnel in these positions perform. As noted in the report, we understand the distinctions as defined in guidance between management headquarters and below-the-line organizations, or those not considered to be part of an organization’s management headquarters. However, our analysis focused on the extent to which the functional combatant commands reported that personnel were performing management functions, and we noted that these data were self-reported and potentially inconsistent. 
The intent of the recommendation was to focus on positions not included in the commands’ assessment of management headquarters positions, but in response to DOD’s concern, we modified the recommendation to clarify that the goal of ensuring cost savings was related to the assessment of personnel performing management functions versus the definition of people included in management headquarters totals. In its comments, the department also questioned the use of Joint Table of Distribution data from DOD’s Electronic Joint Manpower and Personnel System versus the Future Years Defense Program data in DOD’s Defense Resources Data Warehouse for this study. We attempted to use the data warehouse as part of our prior work examining the resources devoted to the geographic combatant commands in May 2013; however, after querying the database, Joint Staff officials explained that the data it yielded were unreliable. Therefore, the officials suggested that we obtain the authorized position and costs to support headquarters operations data directly from the combatant commands and service component commands. DOD noted in its comments that the data trends may be skewed since the Joint Table of Distribution presents a point-in-time look at on-hand personnel. For this review, we focused on authorized positions as documented on the combatant commands’ Joint Tables of Distribution rather than focusing on personnel on hand because data on authorized positions provided the most accurate and repeatable data to identify trends. Moreover, Chairman of the Joint Chiefs of Staff Instruction 1001.01A, Joint Manpower and Personnel Program, notes that Joint Tables of Distribution are documented in the Joint Staff’s Electronic Joint Manpower and Personnel System, which is DOD’s system of record to document the unified combatant commands’ organizational structure and track the manpower and personnel required to meet the combatant commands’ assigned missions. 
The instruction also states that manpower authorizations on the Joint Table of Distribution should be compared with the Future Years Defense Program and any disconnects must be resolved. Finally, in January 2012, the Vice Director of the Joint Staff issued a memo identifying its Electronic Joint Manpower and Personnel System as the authoritative data source for DOD and for congressional inquiries of joint personnel, stating that the system must accurately reflect the manpower and personnel allocated to joint organizations, such as the combatant commands, to provide senior leaders with the necessary data to support decision making in a fiscally constrained environment. Therefore, we maintain that the use of authorized military and civilian positions from the combatant commands’ Joint Tables of Distribution, as documented in DOD’s Electronic Joint Manpower and Personnel System, was appropriate for our review of the resources devoted to the functional combatant commands. DOD concurred with the second and third recommendations that the Secretary of Defense set a clearly defined and consistently applied starting point as a baseline for the reductions and track reductions against the baselines in order to provide reliable accounting of savings and reporting to Congress. In its response, DOD recommended the use of the Future Years Defense Program data to set the baseline going forward and stated that it was enhancing data elements within DOD’s Resource Data Warehouse to better identify management headquarters resources to facilitate tracking and reporting across the department. We agree that enhancements to the data elements will increase DOD’s capability to track and report management headquarters resources across the department and, thus, the Future Years Defense Program data could be used to set baselines and track future reductions. These enhancements, if implemented, would address the intent of the recommendations. 
DOD also provided what it considered significant points of fact. However, we considered these to be technical comments, which we incorporated as appropriate. We are sending a copy of this report to the appropriate congressional committees, the Secretary of Defense, the Chairman of the Joint Chiefs of Staff, and the Secretaries of the military departments. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3489 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IX. We conducted this work in response to a mandate from the House Armed Services Committee to review the personnel and resources of the functional combatant commands. This report (1) identifies any trends in resources devoted to the functional combatant commands and their service component commands for fiscal years 2001 through 2013 to meet their assigned missions and responsibilities, and (2) evaluates the extent to which the Department of Defense’s (DOD) directed reductions to headquarters, like the functional combatant commands and supporting service component commands, could result in cost savings for the department. To conduct this work and address our objectives, we identified sources of information within DOD that would provide data on the resources devoted to the functional combatant commands—U.S. Special Operations Command (SOCOM), U.S. Strategic Command (STRATCOM), and U.S. Transportation Command (TRANSCOM)—to include their subordinate unified commands and their corresponding service component commands. 
To identify trends in the resources devoted to DOD’s functional combatant commands, including their subordinate unified commands and their service component commands, we obtained and analyzed data on available authorized military and civilian positions, and operation and maintenance obligations, from each of the commands and their corresponding service component commands from fiscal years 2001 through 2013. We focused our review on authorized positions, as these reflect the approved, funded manpower requirements at each of the functional combatant commands. To provide insight into the number of personnel assigned to each command, we obtained data on actual assigned personnel for fiscal year 2013. We also obtained and analyzed available data on contractors assigned to the commands, but based on the availability of data, we were not able to identify trends in contractors assigned to the individual commands. Our review also focused on operation and maintenance obligations—because these obligations reflect the primary costs to support headquarters operations of the combatant commands, their subordinate unified commands and other activities, and corresponding service component commands—including the costs for civilian personnel, contract services, travel, and equipment, among others. We included funds provided to TRANSCOM through the Transportation Working Capital Fund and to SOCOM through the command’s special operations–specific appropriations, which provide funding for special operations forces’ unique capabilities and items, since these funds most appropriately reflect the primary mission and headquarters support for the commands. Our review excluded obligations of operation and maintenance funding for DOD’s overseas contingency operations not part of DOD’s base budget. Unless otherwise noted, we reported all costs in this report in constant fiscal year 2013 dollars. 
Since historical data were unavailable in some cases, we limited our review of the combatant commands’ authorized positions to fiscal years 2004 through 2013, and of authorized military and civilian positions at the service component commands to fiscal years 2002 through 2013. Using available data, we provided an analysis of trends in operation and maintenance obligations at the combatant commands for fiscal years 2001 through 2013, but since historical data were unavailable in some cases for the service component commands, we limited our analysis of their trends to fiscal years 2008 through 2013. We obtained data on actual assigned personnel for fiscal year 2013. To assess the reliability of the data, we interviewed DOD officials and analyzed relevant manpower and financial-management documentation to ensure that the authorized positions and data on operation and maintenance obligations that the commands provided were tied to mission and headquarters support. We also incorporated data-reliability questions into our data-collection instruments and compared the multiple data sets received from DOD components against each other to ensure that there was consistency in the data that the commands provided. We determined the data were sufficiently reliable for our purposes. To determine the extent to which DOD’s directed reductions to headquarters, like the functional combatant commands and supporting service component commands, could result in cost savings for the department, we obtained and reviewed guidance and documentation on DOD’s and the services’ planned headquarters reductions, such as the department-issued memorandum outlining the reductions and various DOD budget-related data and documents. We then examined whether this information addressed key questions we had developed for an agency to consider when evaluating proposals to consolidate management functions.
We developed these key questions by reviewing our reports on specific consolidation initiatives that have been undertaken, complementing this with information gathered through a review of the relevant literature on public-sector consolidations produced by academic institutions, professional associations, think tanks, news outlets, and various other organizations. In addition, as illustrative examples for this prior work, we reviewed selected consolidation initiatives at the federal agency level and interviewed a number of individuals selected for their expertise in public management and government reform. We obtained data on the total authorized positions at the functional combatant commands for fiscal year 2013 as well as the number of positions deemed by the commands to be performing headquarters functions and included in DOD’s planned reductions. To assess the reliability of the data, we interviewed DOD officials, analyzed relevant manpower documentation, incorporated data-reliability questions into our data-collection instruments, and compared the multiple data sets received from DOD components against each other to ensure that there was consistency in the data that the commands provided. We determined the data were sufficiently reliable for our purposes. We also interviewed officials at the functional combatant commands, some of their respective subordinate unified commands, and the service component commands to discuss specific headquarters positions and organizations that will be affected by DOD’s planned reductions in the commands and their service components. We interviewed officials or, where appropriate, obtained documentation from the organizations listed below:

Manpower and Personnel Directorate
Strategic Plans and Policy Directorate
Department of the Air Force
Office of the Secretary of the Air Force, Manpower and Personnel
Assistant Secretary of the Army for Financial Management and Comptroller, Army Budget Office
U.S.
Army Force Management Support Agency
Headquarters, U.S. Marine Corps
Unified Combatant Commands and Subordinate Unified Commands
Marine Corps Forces U.S. Strategic Command
U.S. Army Space and Missile Defense Command / Army Forces
Joint Functional Component Command for Integrated Missile Defense
Air Force Global Strike Command
Air Force Space Command
U.S. Special Operations Command
U.S. Army Special Operations Command
Air Force Special Operations Command
Naval Special Warfare Command
U.S. Marine Corps Forces Special Operations Command
Special Operations Command Central
Surface Deployment and Distribution Command

We conducted this performance audit from May 2013 to June 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

This appendix contains information in noninteractive format presented in figure 1.

U.S. SPECIAL OPERATIONS COMMAND (SOCOM)
Mission: Provides special operations forces to defend the United States and its interests, and synchronizes planning of and operations against global terrorist networks.
Headquarters: MacDill Air Force Base, Florida
Responsibility: Daily planning for and execution of SOCOM’s mission is performed by the command headquarters, three subordinate unified commands—Joint Special Operations Command, Special Operations Command–North, and Special Operations Command–Joint Capabilities—and two direct reporting units—Special Operations Joint Task Force and a Regional Special Operations Forces Coordination Center. SOCOM is also supported by four service component commands, which are the U.S.
Army Special Operations Command; the Naval Special Warfare Command; the Air Force Special Operations Command; and the Marine Corps Special Operations Command.

Interactivity instructions: Click on the combatant command name to see more information. See appendix VII for the noninteractive, printer-friendly version.

5% Theater Special Operations Command (23)
15% Joint Special Operations Command (71)
80% SOCOM Headquarters (373)
34% Air Force Special Operations Command (710)
12% Air Force Special Operations Command (29.4)
7% Marine Corps Special Operations Command (16.2)
14% Naval Special Warfare Command (33.2)
67% Army Special Operations Command (159.7)
14% Marine Corps Special Operations Command (304)

U.S. STRATEGIC COMMAND (STRATCOM)
Mission: STRATCOM conducts global operations in coordination with other combatant commands, military services, and appropriate U.S. government agencies to deter and detect strategic attacks against the United States and its allies.
Headquarters: Offutt Air Force Base, Nebraska
Responsibility: Daily planning and execution for STRATCOM’s mission is performed by one subunified command—U.S. Cyber Command—and seven component commands—Joint Functional Component Command-Global Strike; Joint Functional Component Command-Space; Joint Functional Component Command-Integrated Missile Defense; Joint Functional Component Command-Intelligence, Surveillance, and Reconnaissance; the STRATCOM Center for Combating Weapons of Mass Destruction; the Standing Joint Force Headquarters for Elimination; and the Joint Warfare Analysis Center. STRATCOM is also supported by four service component commands, including Air Force Global Strike Command, Air Force Space Command, Army Space and Missile Defense Command / U.S. Army Forces Strategic Command, and Marine Corps Forces U.S. Strategic Command.
29% Cyber Command (180.9)
44% STRATCOM Headquarters (275.0)
24% Component Commands (167.5)
28% Air Force Global Strike Command (991)
22% Army Forces Strategic Command (28.5)
1% Marine Corps Forces, Strategic Command (0.45)
31% Air Force Global Strike Command (40.6)
38% Air Force Space Command (1,363)
47% Air Force Space Command (60.1)

U.S. TRANSPORTATION COMMAND (TRANSCOM)
Mission: TRANSCOM provides air, land, and sea transportation for the Department of Defense (DOD) in time of peace and war, with a primary focus on wartime readiness.
Headquarters: Scott Air Force Base, Illinois
Responsibility: TRANSCOM provides air, land, and sea transportation for DOD and is the manager of the DOD Transportation System, which relies on military and commercial resources to support DOD’s transportation needs. The command, among other responsibilities, provides commercial air, land, and sea transportation; terminal management; and aerial refueling to support the global deployment, employment, sustainment, and redeployment of U.S. forces; it is also responsible for the movement of DOD medical patients. TRANSCOM has one subordinate command—the Joint Enabling Capabilities Command. TRANSCOM works with three component commands to accomplish its joint mission: Air Mobility Command, Military Sealift Command, and Surface Deployment and Distribution Command.

27% Joint Enabling Capabilities Command (38.9)
73% TRANSCOM Headquarters (106.3)
54% Air Mobility Command (1,145)
36% Surface Deployment and Distribution Command (104.7)
39% Air Mobility Command (112.0)
25% Military Sealift Command (72.1)

This appendix contains information in noninteractive format presented in the organizational charts in appendixes III, IV, and V.

In addition to the contact named above, key contributors to this report include Richard K.
Geiger (Assistant Director), Tracy Barnes, Robert B. Brown, Tobin J. McMurdie, Carol D. Petersen, Michael Silver, Amie Steele, Erik Wilkins-McKee, and Kristy Williams.
DOD operates three functional combatant commands, which provide special operations, strategic forces, and transportation. GAO was mandated to review the personnel and resources of these commands in light of DOD's announced plans to reduce headquarters. This report (1) identifies the trends in resources devoted to the functional combatant commands and their service component commands and (2) evaluates the extent to which DOD's reductions to headquarters could result in cost savings. GAO analyzed data for fiscal years 2001 through 2013 on authorized positions and costs to support headquarters operations for the functional combatant commands and their service component commands. GAO also obtained documentation such as guidance and budget documents and interviewed officials regarding the commands' approach for implementing reductions to headquarters. GAO analysis of the resources devoted to the Department of Defense's (DOD) functional combatant commands shows substantial increases in authorized positions and costs to support headquarters operations. Specifically, the number of authorized positions across the commands grew from 5,731 in fiscal year 2004 to 10,515 in fiscal year 2013. According to DOD officials, recent and emerging missions have increased demands at all three functional combatant commands, driving the growth in authorized personnel. In addition, costs to support headquarters operations also increased substantially at the functional combatant commands. Data, in constant fiscal year 2013 dollars, show that the combined costs to support headquarters operations for the commands increased from about $296 million in fiscal year 2001 to more than $1.236 billion in fiscal year 2013. Authorized positions and costs to support headquarters operations at the service component commands supporting the functional combatant commands also increased.
Specifically, authorized positions grew from about 6,675 in fiscal year 2002 to about 7,815 in fiscal year 2013, and costs to support headquarters operations increased from about $614 million in fiscal year 2008 to about $657 million in fiscal year 2013. DOD's directed reductions to headquarters do not include all resources at the commands, which may affect DOD's ability to achieve significant savings in headquarters operations. In 2013, DOD directed reductions to management headquarters resources in an effort to streamline the department's management. However, GAO found that the department did not have a clear or accurate accounting of the resources being devoted to management headquarters to use as a starting point to track reductions. Officials noted that DOD relied on data self-reported by the commands, and GAO found that these data were potentially inconsistent and did not include the totality of headquarters resources. Specifically, GAO found that less than a quarter of the positions at the functional combatant commands are considered to be management headquarters even though many positions appear to be performing management headquarters functions such as planning, budgeting, and developing policies. As such, more than three quarters of the headquarters positions at the functional combatant commands are potentially excluded from DOD's directed reductions. However, the department does not have any plans to reevaluate the baseline on which the reductions are based, in part because it does not have an alternative source for complete and reliable data. GAO has also concluded that restructuring efforts must be focused on clear goals and consolidation initiatives grounded in accurate and reliable data. Section 904 of the National Defense Authorization Act for Fiscal Year 2014 requires that DOD develop and submit a plan for streamlining management headquarters by June 2014. 
Unless DOD reevaluates its decision to focus reductions on management headquarters and establishes a clearly defined and consistently applied starting point on which to base reductions, the department will be unable to track and reliably report its headquarters reductions and ultimately may not realize significant savings. GAO recommends that DOD (1) reevaluate the decision to focus reductions on management headquarters to ensure meaningful savings, (2) set a clearly defined and consistently applied starting point as a baseline for the reductions, and (3) track reductions against the baselines in order to provide a reliable accounting of savings and reporting to Congress. DOD partially concurred with the first recommendation, questioning, in part, the recommendation's scope, and concurred with the second and third recommendations. GAO continues to believe the first recommendation is valid, as discussed in the report.
The Social Security Act of 1935 authorized SSA to establish a record-keeping system to help manage the Social Security program, which resulted in the creation of the Social Security number. SSA assigns a unique SSN to every person and uses it to track workers’ earnings and eligibility for Social Security benefits and to create a work and retirement benefit record for the individual. SSA issues SSNs to most U.S. citizens, and they are also available to non-U.S. citizens with permission to work from the Department of Homeland Security. Also, in certain cases, SSA issues SSNs to non-U.S. citizens without permission to work who require a SSN to receive federal benefits, or to lawfully admitted non-U.S. citizens who need one to receive state or local public assistance benefits. Due to the number’s unique nature and broad applicability, the SSN has become the identifier of choice for government agencies and private businesses and is used for numerous non-Social Security purposes, such as opening bank accounts and filing taxes. Today, even young children need a SSN to obtain medical coverage, be claimed on their parents’ income tax return, or establish eligibility for other government or financial benefits. In fiscal year 2004, SSA issued approximately 5.5 million original SSNs and 12.4 million replacement cards; roughly 4.2 million of the originals and 2.3 million of the replacements were for U.S.-born children under age 18, as shown in table 1. SSA’s headquarters, in conjunction with its 1,333 field offices, issued these SSNs and cards to U.S. citizen and noncitizen applicants. The primary guidance SSA uses to carry out its enumeration processes is the Program Operations Manual System (POMS).
Under this guidance, SSA field offices are responsible for interviewing applicants for both original and replacement Social Security cards, reviewing evidentiary documents, verifying immigration and work status of applicants, and keying information into SSA’s automated enumeration system. Due to the burden on field offices from the increased demand for SSNs, SSA implemented its Enumeration at Birth (EAB) program nationally in 1989 to provide parents with a convenient opportunity to request SSNs for their newly born children without visiting a field office. To facilitate this program, SSA contracts with state and jurisdictional vital statistics agencies to obtain specific information required to issue SSNs to newborns. As of March 2004, SSA had EAB contracts with 50 states and 3 independent registration jurisdictions, for a total cost of approximately $8.2 million. Hospital personnel, sometimes referred to as birth registrars or clerks, generally have overall responsibility for gathering and forwarding the birth certificate information and SSN requests to state or local vital statistics agencies. In instances when a baby is born outside a hospital, the responsibility for filing the birth information typically rests with the midwife or other person in attendance, one of the parents, or the person in charge of the place where the birth occurred. State vital statistics agencies are usually found within a state or jurisdiction’s department of health. According to the National Association for Public Health Statistics and Information Systems (NAPHSIS), vital statistics for the United States are obtained from the official records of live births, deaths, fetal deaths, marriages, divorces, and adoptions. The official recording of these events is the responsibility of individual states and independent registration areas, such as the District of Columbia and New York City.
Governments at all levels—federal, state, and local—use vital records for statistical purposes through cooperative arrangements with the respective state agency. At the time of our audit work, most states restricted access to birth records, but according to NAPHSIS, 11 states allowed “open” access to birth records, meaning that no identification or proof of relationship is required to obtain a certified copy of a birth certificate. Identity theft is one of the fastest-growing crimes in the United States. In 2004, the Federal Trade Commission reported that in recent years some 10 million people—or 4.6 percent of the adult population—discovered that they were victims of some form of identity theft. These numbers translate into estimated losses exceeding $50 billion, but the extent to which these statistics include children is not well documented. However, various news accounts and identity crime experts report that the theft of a child’s identity may go undetected for several years until an event—such as an application for a driver’s license—reveals that the child’s identity was stolen. Furthermore, children may also face future financial consequences if their SSN and name are used years before they first establish credit. SSA has two primary processes for issuing SSNs and replacement cards to children born in the United States. Today, most SSNs are issued to children through SSA’s EAB program, while the parents of a much smaller number of children receive SSNs through SSA’s field office application process. To ensure that SSNs are issued properly, SSA monitors data on EAB processing times and error rates and performs various integrity checks for those SSNs issued by field office staff. SSA also has specific policies for issuing replacement cards to children. In 2004, SSA issued roughly 90 percent of SSNs to children through its EAB program.
As shown in figure 1, the majority of SSA’s enumeration workload involves U.S.-born children who generally receive their SSNs via states’ and jurisdictions’ birth registration process facilitated by hospitals. Under this process, SSA accepts birth registration data from vital statistics agencies as evidence of a child’s age, identity, and citizenship. In fiscal year 2004, SSA issued approximately 3.9 million SSNs through EAB. Hospital participation in EAB is voluntary. While SSA and state vital statistics agencies have strongly encouraged participation in the EAB program, the extent of nationwide hospital participation is unknown. However, SSA officials believe that most hospitals participate in EAB. Under EAB, parents in the hospitals we visited could request a SSN for their newborn child via the hospital’s birth registration worksheet. This worksheet was typically a standardized state-issued form used to capture the demographic and medical information required for the child’s birth certificate. For those parents choosing to participate in the program, SSA guidance states that hospital representatives should provide parents with a receipt as proof of their SSN request. All nine of the medical facilities we visited provided parents with the opportunity to request SSNs through EAB. However, each had established its own policies, procedures, and internal controls for collecting, maintaining, and transmitting birth information, including the SSN request, in accordance with federal and state statutes. For example, one state required that its hospitals submit all birth information directly to the state vital statistics agency within 72 hours of the birth, while in another state, the law required that birth information be submitted to the local registrar within 10 days of the birth.
While submission timeframes varied among the states we visited, generally all hospitals transmitted birth information both electronically and manually to their respective vital statistics agencies. Birth registrars or clerks were typically responsible for entering the birth certificate information into an electronic birth registration computer system, which, in some cases, was directly linked to the state vital statistics agency. Though system software varied among states, it tended to capture the same information and was generally equipped with the same security features. For example, in almost every hospital we visited, each birth registrar or clerk had a personal identification password and sign-on to access the system. At the locations we visited that submitted data electronically, hospital personnel were required to make any corrections to the electronic certificate prior to submitting it to their vital statistics office because the computer system locked the record once the information was downloaded. This security feature prevented staff from making changes once the data were transmitted. However, if an error was discovered, hospital personnel could correct it with assistance from their respective vital statistics agency. For example, one state’s vital records office issued a single-use pass code to the responsible birth registrar for re-entry into the electronic registration system. In all the states in which we conducted interviews, we also found that once the vital statistics agency received the birth information, state officials reviewed the data submitted to ensure that the demographic and medical information was complete. In addition, officials from one location told us that they had also implemented other quality assurance measures, such as cross-referencing birth and death records and validating births that occur outside the presence of a physician or midwife.
Subsequently, each office extracted and transmitted to SSA the information necessary to issue SSNs, including the name of the child, the date of birth, the parents’ names, and the mailing address. SSA’s data showed that average EAB processing times varied among vital statistics agencies, ranging anywhere from 3 weeks to 12 weeks. An SSA official told us that the variation in processing time was due mainly to differences in vital statistics agencies’ technological capabilities to transmit information and to personnel issues. For example, not all vital statistics agencies were electronically connected to each of their EAB reporting hospitals; in those cases, staff had to key the birth information into an electronic format, which increased the processing time. Figure 2 shows SSA’s EAB process. Once SSA receives the data, its automated system ensures that the EAB data are complete and scans for keying errors. If no errors are detected, SSA issues a SSN card to the parents’ mailing address. Throughout the EAB process, SSA also serves as a technical adviser, assisting vital statistics agencies with systems clarification and advice. Through its EAB contracts, SSA establishes timeliness and accuracy requirements for EAB data and maintains these data. According to its contracts, SSA will not pay vital statistics agencies for any EAB data received more than 4 months after the month a birth occurs. However, SSA will accept such data for up to 11 months after the month a birth occurs. SSA also will not process any EAB submissions with an error rate greater than 5 percent. While the majority of U.S. children receive SSNs through the EAB program (3.9 million), the parents of a much smaller number apply for SSNs at SSA field offices. In fiscal year 2004, SSA issued approximately 294,000 SSNs to children through this process.
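The contract terms above (no payment for data received more than 4 months after the month of birth, acceptance for up to 11 months, and no processing of submissions with error rates above 5 percent) amount to a small set of acceptance rules. The Python sketch below simply restates those rules for illustration; the function name and return labels are hypothetical, not SSA system terms.

```python
from datetime import date

def eab_submission_status(birth_month: date, received: date, error_rate: float) -> str:
    """Restate the EAB contract rules described above.

    Rules from the report: submissions with an error rate over 5 percent
    are not processed; data received more than 11 months after the month
    of birth are not accepted; data received more than 4 months after the
    month of birth are accepted but not paid for.
    """
    # Whole calendar months elapsed after the month the birth occurred
    months_after = (received.year - birth_month.year) * 12 + (received.month - birth_month.month)
    if error_rate > 0.05 or months_after > 11:
        return "not processed"
    if months_after > 4:
        return "accepted, unpaid"
    return "accepted, paid"

# Data received 2 months after a January birth, with a 1 percent error rate
status = eab_submission_status(date(2004, 1, 15), date(2004, 3, 1), 0.01)
```

Because the contract cutoffs are stated in months after "the month a birth occurs," the sketch compares calendar months rather than day counts.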
SSA and hospital officials told us that some parents elect not to participate in the EAB program for reasons related to religion or privacy. Consequently, parents of children not enumerated through EAB can apply for their child’s SSN by mail or in person at a local SSA field office. To apply for the SSN, parents complete a SSN application and submit at least two documents as evidence of their child’s age, identity, and citizenship as well as evidence of their own identity. Documents that SSA accepts as evidence of age, identity, and citizenship for a child include, but are not limited to, a birth certificate; a record from a doctor, clinic, or hospital; or a religious record. SSA requires that all evidentiary documents be either originals or copies certified by the issuing agency. Figure 3 highlights SSA’s field office process for assigning SSNs to children. In recent years, SSA has taken additional steps to strengthen its field office enumeration process to prevent identity theft. For example, in June 2002, SSA began requiring field office staff to verify the authenticity of certified birth certificates for any person age 1 or older applying for an original SSN. To obtain this third-party verification, SSA field offices generally mail or fax a photocopy of the original document to the pertinent vital statistics agency or, in some limited cases, query an electronic system linked directly to the vital statistics agency’s database. SSA must also send a fee to most state vital statistics agencies to cover the cost of this service. In September 2003, SSA also changed its evidence requirements to enhance its ability to verify the accuracy of SSN applications for children. For example, SSA lowered the age at which an in-person interview is mandatory from 18 to 12. SSA field staff told us that they rarely have U.S.-born children over age 12 applying for an original SSN, and generally such applications receive closer scrutiny.
In such instances, the interviews are used as an additional safeguard against identity theft to (1) establish that the child applicant actually exists and (2) corroborate the parent-child relationship. SSA also performs in-house reviews of its enumeration processes, including those for children. For example, field office managers use SSA’s Comprehensive Integrity Review Process (CIRP) to monitor specific systems activity for potential fraud or misuse by employees. In addition, SSA also requires field staff to review a SSN applicant’s supporting evidence and input it into SSA’s Modernized Enumeration System (MES), the computer system used to assign SSNs. Once entered, the applicant’s information undergoes numerous automated edits and is flagged if found suspicious. For example, MES suspends the processing of SSN applications for parents applying for new SSNs for numerous children in a 6-month period. In addition to processing original SSNs, SSA field offices also issue replacement SSN cards if a card is lost or stolen. To obtain a replacement card for a child, parents must complete a SSN application and indicate the SSN previously issued to the child. SSA requires that the applicant provide only proof of identity for both the parent and the child. A wide range of documents will satisfy the proof of identity requirement. For example, parents could use a daycare center record to prove their child’s identity and a church membership record as proof of their own, provided that the record includes key information required by SSA such as the person’s name, date of birth, and physical description. See table 2 for a comparison of SSA requirements for original SSNs and those for replacement SSN cards. 
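The automated MES edit described above, which suspends processing when one parent applies for SSNs for numerous children within a 6-month period, can be pictured as a sliding-window check over applications. The sketch below is a hypothetical illustration only: the report does not disclose MES's actual threshold, so the `max_children` value, the 183-day window, and all names here are assumptions.

```python
from collections import defaultdict
from datetime import date

def flag_parents(applications, max_children=3, window_days=183):
    """Flag parent IDs with `max_children` or more SSN applications
    falling inside any rolling ~6-month (183-day) window.

    `applications` is a list of (parent_id, application_date) pairs.
    The threshold and window length are illustrative assumptions.
    """
    by_parent = defaultdict(list)
    for parent_id, app_date in applications:
        by_parent[parent_id].append(app_date)

    flagged = set()
    for parent_id, dates in by_parent.items():
        dates.sort()
        # For each application, count it and later applications in the window
        for i, start in enumerate(dates):
            in_window = [d for d in dates[i:] if (d - start).days <= window_days]
            if len(in_window) >= max_children:
                flagged.add(parent_id)
                break
    return flagged
```

A production edit would run inside the enumeration system itself; the point of the sketch is only that the rule is a per-parent count over a bounded time window.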
Despite SSA’s efforts in recent years to improve its enumeration processes, several weaknesses persist, including a lack of EAB program oversight and outreach, inefficient birth verification procedures, and other vulnerabilities that could adversely affect the integrity and efficiency of SSA’s processes. Our Standards for Internal Control provide a framework to help agencies address such weaknesses. However, SSA has not fully incorporated some of these controls into its enumeration processes. For example, SSA has not taken action to assess the integrity of the EAB process or to keep informed of pertinent findings of other audit agencies that have reviewed states’ birth registration and certification processes. Furthermore, we found that previously reported enumeration weaknesses, namely SSA’s lack of a policy to require birth verifications for children under age 1 and the absence of a policy to further limit replacement cards, continue to expose SSA to risks of fraudulent enumeration. SSA does not conduct periodic internal control or comprehensive integrity reviews to assess the integrity of procedures used to collect and protect EAB data at vital statistics agencies and hospitals, nor does it take advantage of available audit findings of other state agencies. SSA’s EAB contracts permit the agency to conduct on-site reviews of vital statistics agencies’ procedures for protecting confidential information. However, SSA’s Project Officer for the EAB program told us that the agency has not conducted such comprehensive reviews because of a lack of resources. The same official also told us that SSA has never used its EAB contracts to require that vital statistics agencies conduct similar reviews of participating hospitals.
Our Internal Control Management and Evaluation Tool states that federal agencies should have adequate mechanisms in place to identify key programmatic risks arising from external factors, including a consideration of risks posed by major suppliers and contractors. Identified risks should then be analyzed for their potential effect and an approach devised to mitigate them. However, because SSA has not conducted reviews of entities involved in the EAB process, it lacks important information needed to develop a clearer picture of areas in the EAB process that may be vulnerable to error or fraud. A September 2001 SSA Office of the Inspector General (OIG) report on the EAB program suggested that more systematic oversight and management was needed. The OIG review identified serious internal control weaknesses in birth registration processes at selected hospitals that could compromise the integrity of EAB data. For example, the OIG found that hospital birth registration units often lacked adequate segregation of duties for clerks involved in the registration process. Our internal control standards state that key duties and responsibilities should be divided or segregated among different people to reduce the risk of error, waste, or fraud. Because clerks were generally involved in all phases of the birth registration process at the hospitals included in the SSA OIG review, the OIG said that the clerks could potentially generate birth certificates for nonexistent children. The OIG also found that the hospitals did not have controls in place to provide for periodic, independent reconciliations of birth statistics with the total number of birth registrations they reported, which could allow fictitious births to go undetected. During our visits to eight hospitals and one birthing center, we found similar internal control weaknesses. For example, some hospital birth units lacked the resources to segregate various job duties among their staff.
While we found that birth registration staff in seven hospitals reconciled birth statistics with the total number of births, the staff performing these reconciliations were not independent of the birth registration process, which could expose the process to fictitious birth registrations. In light of the vulnerabilities that the OIG identified, SSA’s Project Officer for EAB told us that SSA and NAPHSIS have met to discuss an audit plan to review hospitals in each state.

Some state audit agencies have conducted reviews of vital statistics agencies that have produced findings of potentially critical importance to SSA. However, due to a lack of coordination between SSA and state audit agencies, SSA was unaware of these audits and their findings, according to the EAB Project Officer. Our internal control standards require that federal managers promptly evaluate findings from audits and other reviews that could have an impact on their operations. Over the last several years, state audit agencies have identified deficiencies in vital statistics agencies’ operations that could have negative implications for SSA’s EAB program and field office enumeration processes. For example:

- An August 2004 Maryland Office of Legislative Affairs audit report found that the controls at the state’s vital records agency were inadequate to ensure that birth certificates were issued only to individuals authorized by law. Specifically, the report stated that Maryland applicants were not required to provide sufficient identification when requesting birth certificates, birth records for deceased individuals were not always marked as such, and adequate oversight of the local health departments regarding the issuance of birth and death certificates generally did not occur. Additionally, the audit found that controls over birth certificate forms and related blank stock certificates were inadequate to safeguard against fraudulent certificates being obtained for illegal purposes, and that access to the vital records automated system was not adequately restricted.

- A November 2001 Florida Auditor General report disclosed that controls over the vital statistics program at selected county health departments were inadequate and not in compliance with vital statistics laws and rules. For example, the Auditor General found that the state registrar and several county health departments had not established effective controls to require documentation showing that certified copies of vital records were issued only to authorized recipients. The review also showed that controls over vital records security paper at selected county health departments were not adequate.

- A May 2000 State of New York Office of the State Comptroller audit report found several weaknesses in the New York City Office of Vital Records’ controls over the reporting, registering, and processing of vital records that could increase the risk that the reported number of birth and death records in New York City may not be accurate or complete. The audit also identified weaknesses in safeguarding vital records at two vital records sites. As of December 2002, the State Comptroller reported that New York City had made progress in implementing its recommendations.

In discussing SSA’s oversight role relative to EAB and the vital statistics agencies, SSA’s EAB Project Officer told us that the EAB office had tried for the last 2 fiscal years to obtain funding for vital statistics agencies to review hospitals. Recently, the EAB office submitted a proposal for the fiscal year 2005 budget request, which included a measure to have vital statistics agencies review hospitals’ birth registration processes and about 30,000 EAB cases nationwide. However, according to SSA’s EAB Project Officer, the agency decided not to fund this initiative in fiscal year 2005 due to reduced budgetary resources and competing higher priorities.
The EAB Project Officer also acknowledged that there is no formal or informal coordination between the EAB component and the state comptroller and inspector general offices that may conduct reviews of vital statistics agencies. Because SSA does not have direct contact with such entities, the official said that SSA does not hear about special studies in the states and that vital statistics agencies do not normally share these studies with SSA.

SSA has provided only limited education and outreach to hospitals to ensure that they consistently provide information to parents regarding the timeframes for processing EAB requests. Thus, some parents who opt for EAB, but who may need an SSN sooner than is possible under the process, may apply a second time at an SSA field office, increasing the likelihood that their child will be issued two distinct SSNs. Our internal control standards require that management ensure that adequate means are in place to communicate with external stakeholders that may have a significant impact on an agency achieving its goals. However, SSA’s EAB Project Officer acknowledged that, beyond initial contacts with hospitals when EAB was first implemented in 1989, SSA has done little follow-up education and outreach to ensure that hospitals have the up-to-date information necessary to effectively implement the program. SSA maintains information on EAB processing times for individual states, which is available via its Web site. The agency also developed Form 2853 for hospital staff to distribute to parents; the form lists the processing times for their state and serves as a receipt documenting the EAB request. However, during our fieldwork, we found that SSA has conducted no formal training or information-sharing initiatives on these resources.
Thus, hospital birth registration personnel were often unaware of the available processing time information, and staff at one hospital provided parents with information based on their own estimates or the prior experiences of other parents. We also found that only a few of the hospitals we visited were aware of Form 2853 and were providing it to parents. Parents were therefore not consistently receiving information from the hospitals on the time it takes to receive an SSN under the EAB process. Because parents do not always receive consistent or comprehensive information on EAB processing times, they may opt for the service even though they may need an SSN for their child sooner than is possible under the current process. In such instances, parents may later decide to apply for an SSN in an SSA field office, which could increase the likelihood that their child could be issued two different SSNs. In a September 2001 report, SSA’s OIG identified 67,206 instances nationwide in which parents submitted a second SSN application when they did not receive an SSN card for their children 1 year old or younger within 30 days of the child’s date of birth. In reviewing these applications, the OIG identified 178 instances in which SSA issued multiple SSNs to the same child. The report noted that SSA’s system edits do not always recognize SSNs previously assigned to children, especially if there are minor variances in the names provided on the two applications. The OIG noted that the assignment of more than one SSN to an individual creates the opportunity for SSN misuse.
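The name-variance weakness the OIG described can be illustrated with a brief sketch: an exact-match system edit misses a second application when the child's name differs slightly, while a normalized, tolerance-based comparison of name and date of birth would flag it. The record layout, function names, and similarity threshold below are illustrative assumptions, not SSA's actual system edits.

```python
from difflib import SequenceMatcher

def exact_edit(records, name, dob):
    """Naive edit: flags a duplicate only on an exact name + DOB match."""
    return any(r["name"] == name and r["dob"] == dob for r in records)

def fuzzy_edit(records, name, dob, threshold=0.85):
    """Illustrative stronger edit: normalizes case and spacing, and
    tolerates minor name variances when the date of birth matches."""
    norm = " ".join(name.lower().split())
    for r in records:
        if r["dob"] != dob:
            continue
        existing = " ".join(r["name"].lower().split())
        if SequenceMatcher(None, norm, existing).ratio() >= threshold:
            return True
    return False

# A second application with a minor name variance ("Jon" vs. "John"):
issued = [{"name": "John A Smith", "dob": "2004-03-01", "ssn": "XXX-XX-1234"}]
print(exact_edit(issued, "Jon A Smith", "2004-03-01"))  # False: duplicate missed
print(fuzzy_edit(issued, "Jon A Smith", "2004-03-01"))  # True: duplicate flagged
```

A real edit would weigh additional fields (place of birth, parents' names) before flagging a duplicate; the sketch shows only that tolerating minor name variances catches cases an exact comparison misses.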
SSA’s EAB Project Officer acknowledged that SSA does not do as much education and outreach in the field as it did initially because of budget constraints, but said that the agency is currently working with the National Association for Public Health Statistics and Information Systems (NAPHSIS) to strengthen its efforts to ensure that hospitals have the essential information to help parents make more informed decisions about whether EAB will meet their needs. For example, SSA is seeking NAPHSIS’ assistance in distributing information on the EAB program and EAB processing times.

SSA currently lacks a nationwide capability to quickly and efficiently perform required birth verifications for children whose parents apply for an SSN through an SSA field office, although a prior SSA pilot project proved successful in providing enhanced verification capabilities. As noted earlier, SSA currently performs birth certificate verifications through a predominantly manual process that is labor-intensive and time-consuming. For example, SSA staff may submit birth verification requests directly to vital statistics offices by mail, by fax, or, in some locations, electronically. In several other locations, SSA staff complete the verifications in person at local vital statistics offices. Our internal control standards state that control activities should be effective and efficient in accomplishing the agency’s control objectives. However, according to some field office staff we spoke with, the current birth certificate verification process significantly prolongs the time required to process SSN applications, resulting in a backlog of verification requests in some locations. Staff in several field offices cited examples of birth certificate verifications taking up to 6 months.
Staff in one field office reported that they frequently encounter long waiting periods when they request birth verifications from some states. To expedite the verification process, staff in this office told us that they allow parents to obtain a certified copy of their child’s birth certificate from the issuing vital statistics agency and submit the document in a sealed envelope. Once SSA staff unseal the envelope and review the birth certificate, they continue processing the SSN application. In their view, this satisfies SSA’s requirement for independent verification. However, SSA policy officials told us that this practice does not meet SSA’s requirement. We agree and believe that allowing parents to obtain and submit certified copies of birth certificates without independent verification exposes SSA to the potential for fraud.

A recent SSA pilot with NAPHSIS, known as the Electronic Verification of Vital Events (EVVE) project, demonstrated that SSA field offices could perform vital records verifications more quickly and efficiently than under the current manual process. EVVE was piloted in eight states (California, Colorado, Hawaii, Iowa, Minnesota, Mississippi, Missouri, and Oklahoma) and in local SSA offices in 26 states and territories. Figure 4 identifies all of the states and territories that participated in the EVVE pilot. Although EVVE was primarily established to expedite the processing of SSA retirement and disability claims filed by older SSA customers, it was also used to conduct required birth certificate verifications for children. EVVE processed two types of electronic queries: birth verifications and birth certifications. Birth verification queries were generally performed when SSA customers possessed a certified copy of their birth certificate.
Conversely, birth certification queries were generally performed when customers did not present their birth certificate, technical problems arose with EVVE, or incompatible records prevented a successful online verification. Ordinarily, SSA policy requires that all individuals age 1 or older, or their parents or guardians, present a copy of a birth certificate or other acceptable documents to apply for an SSN, but SSA made an exception to this requirement for customers in the pilot states. Thus, even if a person could not produce his or her birth certificate, SSA would still process the application and query the system for an EVVE certification based on information obtained orally from the applicant. After receiving the query from SSA, the vital statistics agency would run it against its automated search system and return a “match” or “no match” response through EVVE’s messaging hub to the requesting SSA office. SSA, and not the applicant, assumed the costs for both types of transactions. The EVVE process is depicted in figure 5.

Although SSA and NAPHSIS agreed that EVVE was a technical success because it demonstrated the capability to electronically verify vital documents, they did not move forward in implementing the system because of a breakdown in negotiations over the price of the service. During the pilot, SSA paid $5 for each birth verification query and $5 to $15 for each birth certification query. At the conclusion of the pilot, SSA attempted to negotiate a reduced price per query, but according to SSA officials in the Division of Electronic Service Delivery, Office of Automation Support, SSA and the states could not agree on what the price should be. To resolve this matter, SSA contracted with KPMG LLP (KPMG) to conduct a fair price assessment for the EVVE service. KPMG determined that SSA should pay a national price of $6.42 per query.
In addition, KPMG stated that SSA would have to pay $.48 per query to cover the administrative costs of maintaining the EVVE messaging hub. However, at this price, state officials believed that they would lose revenue because SSA’s policy did not require applicants to present a birth certificate at the time of application; clients would therefore likely purchase fewer birth certificates from vital records offices. Unless SSA agreed to offset this potential revenue loss by paying more for each query, the states were reluctant to move forward. KPMG’s analysis concurred that for online birth queries, the majority of which were certifications during the EVVE pilot, the seven states included in its study (four of which were EVVE pilot states) would indeed realize revenue shortfalls. According to an SSA official, negotiations stalled.

After the termination of the EVVE pilot in December 2003, SSA launched new negotiations with individual states to obtain access to their vital records. As of November 2004, SSA had negotiated individual agreements with six states (Florida, Kentucky, Montana, Rhode Island, Tennessee, and Texas) that give SSA offices in those states the capability to query vital statistics data, and four of these states also allow browsing capabilities. While this access to some state vital records will help expedite the birth certificate verification process in those states, it will not give SSA offices the capability to verify birth certificates nationwide, which EVVE or some other nationwide verification system could potentially offer. As a result, SSA field offices in states that do not allow access to vital records may still face long waiting periods for birth certificate verifications.
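The match/no-match flow described above can be sketched in simplified form: a field office query carrying the applicant's birth data is routed through a messaging hub to the appropriate state's automated index, which answers "match" or "no match." The record fields and routing logic are illustrative assumptions; the report does not describe the pilot's actual message formats.

```python
# Illustrative sketch of an EVVE-style verification flow. A field office
# query is routed by a messaging hub to the right state's vital records
# index; the state answers "match" or "no match". All data are invented.
STATE_BIRTH_INDEXES = {
    "OK": {("JANE DOE", "2003-07-15")},   # (name, date of birth) on file
    "MN": {("ALEX LEE", "2004-01-02")},
}

def evve_query(state, name, dob):
    """Route a query through the hub and return the state's response."""
    index = STATE_BIRTH_INDEXES.get(state)
    if index is None:
        return "state not participating"
    return "match" if (name.upper(), dob) in index else "no match"

print(evve_query("OK", "Jane Doe", "2003-07-15"))   # match
print(evve_query("OK", "Jane Doe", "2003-07-16"))   # no match
print(evve_query("TX", "Jane Doe", "2003-07-15"))   # state not participating
```

The sketch also makes the pilot's coverage limitation concrete: a query against a nonparticipating state cannot be verified electronically and falls back to the manual process.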
In the absence of a nationwide electronic verification process, most SSA field offices use paper-driven verification processes that involve the handling, storage, and disposal of birth certificates, for which SSA has few controls. Once SSA requests and pays for a birth certificate verification, the servicing vital statistics office mails a certified copy of the birth certificate to the SSA field office. During our fieldwork, we found considerable uncertainty among staff and various inconsistencies in the methods field offices used to collect, maintain, and dispose of these certified birth certificates. For example, after using the birth certificate to clear the SSN application, some offices shredded the document. In other offices, however, staff told us that they gave or mailed the document to the child’s parent, while still other staff told us that they kept copies of the birth certificates in a personal desk file at their workstation. Our internal control standards require that agencies have appropriate policies and procedures to ensure appropriate documentation of transactions, maintain control over vulnerable assets, limit access to certain records, and assign accountability for their custody. In response to our discussions with SSA policy officials regarding these weaknesses in controls over birth certificates, SSA revised its policy on handling documents purchased for evidence in November 2004 to direct staff to shred birth certificates after processing the SSN application. However, this policy does not address how staff are to track the number of birth certificates received and record who handles them prior to their disposal, which could still leave the documents vulnerable to theft, inappropriate disclosure, or misuse.

SSA has not addressed two areas of its enumeration process that continue to expose the agency to fraud and abuse: the assignment of original SSNs to children under age 1 and the replacement of Social Security cards.
In October 2003, we reported that SSA staff responsible for issuing SSNs relied on visual inspections of birth certificates to determine the identity of children under age 1 and did not independently verify this information with state or local vital statistics agencies. During our fieldwork, we documented a case in which an individual had submitted a counterfeit birth certificate in applying for an SSN for a nonexistent child. We also demonstrated the ease with which individuals could obtain SSNs by exploiting SSA’s current processes. Working in an undercover capacity and posing as parents of newborns, our investigators used counterfeit documents to obtain original SSNs for two fictitious children. We subsequently recommended that SSA establish processes to independently verify the birth records of all children. In response, SSA said that it would conduct a study to determine the extent of the problem and the potential for fraud in the enumeration of children through field offices. Since that time, however, in response to a July 2004 OIG report, SSA stated that due to privacy restrictions for birth data arising from the Health Insurance Portability and Accountability Act (HIPAA) and staff resource constraints, it would not conduct the study. In place of the study, SSA stated that it would review and evaluate the OIG’s work before making a final decision on this issue.

In the same October 2003 report, we also noted that SSA’s policy for replacing Social Security cards, which allows individuals to obtain up to 52 per year, as well as its limited documentation requirements, increased the potential for misuse of SSNs. Of the 18 million cards issued by SSA in fiscal year 2004, nearly 70 percent were replacement cards. We recommended that SSA reassess its policies in this area and develop options for deterring fraud and abuse. As we began this review, an SSA policy official told us that the agency was still assessing options.
We recognize that the replacement card issue applies to all SSN holders and is not unique to children’s SSNs. However, there is a critical relationship between SSA’s policies for enumerating children under age 1 and issuing replacement cards that could be targeted by those seeking to fraudulently obtain valid SSNs. We are particularly concerned that individuals could first obtain original valid SSNs for fictitious children by exploiting weaknesses in SSA’s current verification process. These individuals could then take advantage of SSA’s replacement card policies to obtain excessive numbers of cards for resale to persons seeking to establish new identities, apply for state and local benefit programs, or work illegally in the United States. Such SSNs and replacement cards would be a valuable commodity to perpetrators of fraud because they are considered valid numbers in SSA’s records and would receive an affirmative match response if an employer queried SSA’s system to verify the name and SSN.

Field tests we conducted during this engagement underscore the urgent need to address this issue. Using the same original SSNs we had obtained the previous year for two fictitious children under age 1, our investigators obtained numerous replacement cards over a relatively short period. Posing as parents of these children, two investigators obtained eight replacement cards in less than 6 weeks before SSA field office staff placed an alert on the two SSN records indicating a suspicion of fraud, which prevented our investigators from receiving additional replacement cards. Specifically, our investigators obtained seven SSN cards by applying in person at various SSA field offices and one card by mail, using either a counterfeit baptismal record or an immunization record as evidence of the children’s identity and counterfeit driver’s licenses as proof of their own identity.
For the mail-in application, they submitted a baptismal record and an expired driver’s license and still received the card. This effort revealed inconsistencies in staff acceptance of baptismal records, which are a valid proof of a child’s identity under SSA’s current guidelines: some staff accepted these certificates, while others rejected them. We found, however, that staff consistently accepted immunization records, which contain less identifying information about the child than baptismal records do. This effort demonstrates that once a person obtains an SSN fraudulently, the problem can be perpetuated and exacerbated through requests for numerous replacement cards. It also shows that visual inspections alone are often insufficient to detect fraudulent documents. Recognizing the weaknesses in SSA’s enumeration processes, the Congress, just prior to the issuance of this report, passed the Intelligence Reform and Terrorism Prevention Act of 2004. This act gave SSA 1 year to implement regulations for independently verifying the birth documents of all SSN applicants except EAB applicants, and it limited the issuance of SSN replacement cards to 3 annually and 10 over an individual’s lifetime.

SSNs are essential for functioning in our society, and as a consequence, they are highly vulnerable to misuse. Safeguarding the integrity of children’s SSNs is particularly important, since the theft of a child’s SSN may go unnoticed for years. While SSA has taken steps to strengthen its enumeration processes, several vulnerabilities remain. Because SSA does not conduct comprehensive integrity reviews of vital statistics agencies or require that these agencies conduct similar reviews of hospitals, SSA lacks the information to assess its potential exposure to error or fraud, identify the aspects of the birth registration process that are most problematic, and develop safeguards to ensure the integrity of the data it relies on to enumerate millions of children.
This situation is compounded by the fact that SSA has never coordinated with state audit agencies that have reviewed vital statistics offices in recent years and identified serious management and internal control weaknesses. In addition, SSA’s limited approach to education and outreach to hospitals transmitting SSN requests could be a factor in some parents submitting multiple applications and receiving more than one SSN for their child, ultimately increasing the program’s vulnerability to fraud. While enhanced oversight and management are key to SSA’s efforts, having an efficient mechanism to assist field staff in verifying the birth information of child SSN applicants is equally important. However, SSA’s field office verification process is generally labor-intensive and slow, and it requires manual handling of paper birth certificates with few controls over these sensitive documents. As the EVVE pilot demonstrated, viable technological options currently exist to enhance the exchange of vital statistics data between SSA and the states. We recognize that potential barriers, such as system implementation, may be difficult to overcome without congressional intervention. However, we continue to believe that better coordination and data sharing between SSA and vital statistics agencies nationwide could further strengthen SSA’s processes and address many of the factors that allow identity theft to occur. Finally, as our audit work and previous engagements show, SSA’s processes for verifying the birth records of children under age 1 and its policies for issuing replacement cards expose the agency to fraud. The recently passed Intelligence Reform and Terrorism Prevention Act of 2004 includes specific requirements to address these weaknesses. It is imperative that SSA act promptly in developing clear regulations and timeframes for the implementation of these additional program integrity provisions.
In light of the key role certified birth certificates play in SSA’s enumeration processes and the potential for identity thieves to use fraudulent birth documents to obtain SSNs, the Congress should consider authorizing the development of a cost-effective nationwide system to electronically verify these documents.

To strengthen the integrity of SSNs issued to children, we recommend that the Commissioner of Social Security:

- Explore options for improving internal control mechanisms to ensure the reliability of enumeration data from vital statistics agencies and hospitals. This could include conducting periodic integrity reviews of vital statistics agencies and requiring these agencies to perform periodic audits of hospitals and birthing centers.

- Establish a mechanism to better coordinate with external audit agencies that periodically conduct reviews of states’ birth registration and certification processes, and monitor the findings and recommendations of such reviews to mitigate risks to SSA’s enumeration processes.

- Provide additional education and outreach to hospitals to ensure they give parents consistent information for deciding which process best meets their needs and to minimize second-time requests that might cause the issuance of multiple SSNs.

- Establish procedures for handling, securing, and tracking birth certificates obtained for verification purposes to fully protect against potential fraud and abuse.

We obtained written comments on a draft of this report from the Commissioner of SSA. SSA’s comments are reproduced in appendix II. SSA also provided additional technical comments, which have been incorporated in the report as appropriate. SSA agreed with three of the four recommendations we made to the Commissioner to strengthen SSA’s processes and internal controls over the issuance of SSNs for children.
SSA disagreed with our recommendation that it explore options for improving internal control mechanisms to ensure the reliability of enumeration data from vital statistics agencies and hospitals. SSA noted that it is not within its purview to ensure the reliability of state agency or hospital enumeration data. However, we continue to believe that such actions are needed because the agency uses these data to assign SSNs for over 90 percent of the U.S. citizens it enumerates. We believe that, as the agency charged with issuing SSNs, SSA bears a responsibility for ensuring that vital statistics agencies have adequate procedures and internal controls to ensure that the data hospitals provide are reliable and protected against fraud. Accordingly, we believe that SSA could include a provision in its EAB contracts allowing SSA to conduct periodic reviews of the reliability of EAB information at vital statistics agencies. Similarly, we believe that SSA could encourage vital statistics agencies to conduct such reviews at hospitals.

SSA agreed with our recommendation on education and outreach and fully supported EAB outreach efforts to hospitals, noting that it is currently supplying SSA field offices with materials for distribution to hospitals in their service areas. However, as our report shows, the field office officials we spoke to were often unaware of this effort. Therefore, SSA should take sufficient steps to ensure that field office staff are supplied with appropriate EAB materials and that hospitals actually receive these materials. SSA also agreed with our recommendation to better coordinate with audit agencies and stated that the agency will encourage state auditing agencies to share report results so that SSA can monitor the findings and recommendations to mitigate risks to the enumeration process.
Finally, SSA agreed with our recommendation that it establish procedures for handling, securing, and tracking birth certificates obtained for verification purposes, and stated that recently implemented procedures adequately addressed our concern. While we commend SSA for implementing procedures for disposing of birth certificates after they are verified, we found no procedures in the materials provided with SSA’s response that address how birth certificates are to be tracked and secured from the time of receipt through disposal. We encourage SSA to establish or better define procedures in these areas in its existing POMS regulations to protect birth certificates against theft, inappropriate disclosure, and misuse.

As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Commissioner of SSA, the Secretary of Homeland Security, the vital statistics offices of the states and three jurisdictions, and other interested parties. Copies will also be made available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have questions concerning this report, please call me at (202) 512-7215. Key contributors to this report are listed in appendix III.

The Chairman of the Senate Committee on Finance asked us to document the Social Security Administration’s (SSA) current processes and internal controls for issuing Social Security numbers (SSN) and replacement cards to U.S.-born children under the age of 18 and to identify any weaknesses that may affect SSA’s ability to ensure the integrity of the SSN and the efficiency of enumeration processes. To address the Chairman’s questions, we examined SSA’s enumeration policies, procedures, and internal controls and obtained information on key initiatives planned and undertaken to strengthen SSA’s processes.
To document SSA’s current processes and internal controls, we examined SSA’s Program Operations Manual System (POMS) for applicable requirements and held structured interviews with SSA headquarters officials to discuss the processes used to enumerate U.S.-born children. We also reviewed SSA’s current processes for enumerating children to identify areas where existing and recent changes to POMS were not being followed, or where POMS guidance was not sufficient to ensure the integrity of the identified enumeration processes. We identified two processes: Enumeration at Birth (EAB), a program set up to let parents request their child’s SSN through hospitals and vital statistics agencies, and a second process in which parents apply through SSA field offices. For the EAB process, we reviewed EAB contracts for individual states and information on the timeliness and accuracy of enumerating children. We also documented the policies and procedures of selected hospitals and vital statistics agencies that facilitate SSA’s process for enumerating newborns. We contacted the National Association for Public Health Statistics and Information Systems (NAPHSIS), which represents vital statistics agencies, to gain a better understanding of its role in EAB. We selected vital statistics agencies, in part, based on best practices identified by SSA’s EAB Project Officer and the Executive Director of NAPHSIS and, in some cases, based on the agencies’ lengthy timeframes for submitting EAB information to SSA. In addition, we worked with the vital statistics agencies and SSA to identify hospitals associated with the selected vital statistics agencies that had a high volume of births and that participated in the EAB program. In one jurisdiction, we also visited an organized birthing center to better understand any differences between the birth registration processes of a high-volume, physician-based hospital and a midwife-based birthing facility.
From both the vital statistics agencies and the hospitals, we collected and reviewed information related to the required birth registration process and examined policies and procedures related to the electronic birth certificate software to better understand the variances between individual states and jurisdictions. To identify weaknesses that may affect SSA’s ability to efficiently enumerate children and ensure the integrity of the SSN, we collected and examined information on SSA’s enumeration initiatives, the results of prior internal reviews, and studies performed by SSA’s Office of the Inspector General (OIG) and Office of Quality Assurance and Performance Assessment. Through interviews, we documented these officials’ perspectives on SSA’s enumeration initiatives and identified areas where vulnerabilities and gaps exist in SSA’s implementation of these policies. In addition, we examined external reviews conducted by individual state comptrollers and inspectors general to determine existing weaknesses and concerns regarding state vital statistics agencies’ birth registration and certification processes. To identify these reports, we worked with the National Association of State Auditors, Comptrollers and Treasurers (NASACT) to contact its members, which consist of the offices of the state auditor, state comptroller, or state treasurer in the 50 states, the District of Columbia, and the U.S. territories. In addition, we contacted the state auditing and inspector general divisions in the states and jurisdictions that we visited. After receiving specific audit and investigative reports from responsive states, we examined each report for the adequacy of its methodology and the validity of its conclusions. We also contacted NAPHSIS to identify weaknesses in, and recommended areas of improvement for, an electronic verification pilot project it had with SSA called the Electronic Verification of Vital Events (EVVE).
We examined the EVVE evaluation study, which analyzed EVVE’s error rates, query types, and no-match records. Also, our Office of Special Investigations tested SSA’s enumeration practices to illustrate how SSA’s policy on replacing Social Security cards can be abused. By posing as parents of the two fictitious children that we had previously had enumerated by SSA, as reported in October 2003, our investigators used these SSNs and counterfeit documents to obtain SSN replacement cards through visits to local SSA field offices. We also solicited replacement cards through the mail. We conducted our review at SSA headquarters in Baltimore, Maryland, and at four regional offices—Atlanta, New York, Philadelphia, and San Francisco—and 10 field offices. We selected the SSA regional and field offices based on their geographic location, the volume of enumeration activity, and participation in certain enumeration projects. We also spoke with officials in vital statistics agencies and medical facilities in the states of California, Georgia, Kentucky, and Maryland and the city of New York. For our work on replacement cards, our Office of Special Investigations conducted its work in seven SSA field offices located in the District of Columbia, Maryland, and Virginia. We performed our work between January and December 2004 in accordance with generally accepted government auditing standards. The following team members made key contributions to this report: Richard Burkard, Jean Cook, Mary Crenshaw, Paul Desaulniers, Jason Holsclaw, Corinna Nicoloau, Andrew O’Connell, and Roger Thomas.
In fiscal year 2004, the Social Security Administration (SSA) issued about 4.2 million original Social Security numbers (SSN) and 2 million SSN replacement cards to U.S.-born children. Despite the SSN's narrowly intended purpose, young children today need an SSN to be claimed on their parents' income tax returns or to apply for certain government benefits. Because children's SSNs, like all SSNs, are vulnerable to theft and misuse, the Chairman of the Senate Committee on Finance requested that GAO (1) document SSA's current processes and internal controls for issuing SSNs to U.S.-born children under the age of 18 and (2) identify any weaknesses that may affect SSA's ability to ensure the integrity of the SSN and the efficiency of enumeration processes. SSA has two processes for issuing SSNs to U.S.-born children--one that allows parents to request SSNs through a hospital during birth registration and one that permits them to apply through SSA field offices--both of which include various internal control mechanisms. Today, SSA issues the majority of SSNs to children through its Enumeration at Birth (EAB) program. Participating hospitals forward the SSN request and other birth registration data to vital statistics agencies, which then send the information to SSA. SSA's automated system checks that the data are complete, and SSA mails the SSN card to the parent. Parents may also request SSNs through SSA field offices by mail or in person. This process requires parents to present proof of the child's age, identity, and citizenship as well as proof of their own identity. As fraud prevention measures, SSA also interviews children 12 and older and verifies documents with a third party for those over age 1. If a child's SSN card is lost or stolen, parents may apply for a replacement card. SSA requires proof of identification for both the parent and the child to obtain such cards. 
Despite SSA's efforts to improve its enumeration processes, weaknesses persist in EAB program oversight and outreach and in manual birth verification procedures, along with other vulnerabilities that could adversely affect the integrity of SSA's processes. Federal internal control standards state that agencies should assess and mitigate risk to their programs. However, SSA does not conduct comprehensive integrity reviews or coordinate with external auditing agencies to ensure that vital statistics agencies and hospitals are properly collecting and protecting enumeration data for children. In addition, SSA lacks a nationwide capability to efficiently verify birth certificates for children whose parents apply through SSA field offices, although a prior SSA pilot proved successful in providing such verifications. Further, SSA lacks a policy for securing and tracking birth certificates once manual verifications are complete, making these documents vulnerable to misuse. Finally, SSA's policies for verifying birth certificates of children under age 1 and for issuing replacement SSN cards, which allow for up to 52 cards annually, remain weak and could expose the program to fraud. The Intelligence Reform and Terrorism Prevention Act of 2004 will assist SSA in protecting the integrity of the SSN by requiring the agency to verify birth documents for all SSN applicants, except for EAB purposes, and to limit the issuance of SSN replacement cards.
As of March 2002, State had 16,867 American employees worldwide—more than one-third of whom are overseas. Of those serving overseas, about 60 percent are stationed at hardship posts. Of the 158 hardship posts, nearly half are found in Africa and in Eastern Europe and Eurasia, which includes the Newly Independent States (see fig. 1). State defines hardship posts as those locations where the U.S. government provides differential pay incentives—an additional 5 to 25 percent of base salary depending on the severity or difficulty of the conditions—to encourage employees to bid on assignments to these posts and to compensate them for the hardships they encounter. Among the conditions State uses to determine hardship pay are poor medical facilities, substandard schools for children, severe climate, high crime, political instability, and physical isolation. Recently, State has begun recognizing the lack of spousal employment opportunities as another factor in determining hardship. Where conditions are so adverse as to require additional pay as a recruitment and retention incentive, State can provide additional differential pay of up to 15 percent of base salary. Moreover, State pays an additional 15 percent to 25 percent of salary for danger pay to compensate employees for the security risks they face in certain countries. Under State’s open assignment system, employees submit a list (bids) of assignments they want and then the department tries to match bidders’ experience and preferences with the needs of posts and bureaus. (For an overview of the bidding and assignment process, see app. II.) The Department of State has reported a shortage of professional staff in its Foreign Service overseas workforce. Many positions at hardship posts, including some of strategic importance to the United States, remain vacant for extended periods of time or are filled with staff whose experience or skills fall short of the requirements for the position. 
Our discussions with former and current ambassadors, senior post officials, and the regional bureaus indicate that this is a widespread problem that weakens diplomatic programs and management controls and impedes posts’ ability to carry out U.S. foreign policy objectives effectively. In the three countries we visited—China, Saudi Arabia, and Ukraine—we found that (1) mid-level officers were working in positions well above their grade, (2) first-tour officers were in positions that require experienced officers, and (3) staff did not meet the minimum language proficiency required to perform their jobs effectively. However, the magnitude of this problem on an aggregate level is unclear because State lacks certain human resources data that are necessary to fully assess staffing limitations and capabilities worldwide. State has more positions than it has staff to fill them. As shown in table 1, the State Department reported a staff deficit of 1,340 employees worldwide as of March 2002. The biggest shortages are among overseas Foreign Service employees, with a staff deficit of 543, and civil service employees, with a staff deficit of 811. According to State, 60 percent of its Foreign Service overseas workforce are in hardship posts, which have a vacancy rate of 12.6 percent, compared with a vacancy rate of 8.4 percent in nonhardship posts. Data from posts in the seven countries we reviewed showed staffing shortfalls in varying degrees. (Key staffing issues in these selected countries are outlined in app. III.) These shortfalls, according to ambassadors and senior post officials, compromise diplomatic readiness. We found many employees working in positions well above their grade levels as well as staff who did not meet the minimum language proficiency requirements of the positions to which they were assigned. Moreover, post staff complained of the lack of training to upgrade their language proficiency and other skills. 
Senior post officials, including chiefs of mission and former ambassadors, stated that staffing shortfalls (1) weaken diplomatic programs and management controls and (2) impede posts’ ability to effectively carry out U.S. foreign policy objectives. For a number of the hardship posts we examined, the dual problem of unfilled positions and a lack of fully qualified, experienced, and trained staff to fill them has been a long-standing concern, dating back to the 1990s when hiring below attrition levels resulted in what some State officials characterize as the “hollowing out” of the Foreign Service workforce. The State Inspector General has issued numerous reports citing serious problems filling hardship posts with adequately skilled staff. In a May 2001 semiannual report to the Congress, the Inspector General stated that inadequate training for first-tour staff in consular offices has led to lapses in nonimmigrant visa management at posts in a region where alien smuggling and visa fraud are prevalent. Furthermore, in Conakry (Guinea)—a 25 percent hardship post where visa fraud and administrative problems were attributed to inexperienced staff—the Inspector General found a high proportion of junior officers, mostly on their first tour, and officers in positions above their grade, making them ill-prepared to deal with work challenges. Similarly, in Bamako (Mali), another 25 percent hardship post that is chronically understaffed, the Inspector General again cited staff inexperience when consular employees failed to detect an alien smuggling ring. In these cases, the Inspector General called on the State Department to examine whether staff assigned to these posts have the level of experience necessary to operate effectively. 
Meanwhile, chronic staffing problems experienced in many African posts persist, and because consular positions worldwide are often filled by lower level staff, the Bureau of Consular Affairs considers African posts at risk. In Lagos (Nigeria), for example, 12 State positions were unfilled as of February 2002; and many of those filling positions were first-tour junior officers and civil service employees who had never served overseas. In the 10-officer consular section in Lagos, only the consul had more than one tour of consular experience. According to bureau and post officials, with virtually no mid-level Foreign Service officers at post, the few senior officers there were stretched thin in training and mentoring junior officers. While the State Department considers assignment of employees to positions that are at grade and within their functional specialty to be the most effective use of its human resources, many employees are working in positions well above their grade. State policy does allow “stretch” assignments—positions either above an officer’s grade (an “upstretch”) or below an officer’s grade level (a “downstretch”)—at certain points of the assignments cycle and under certain conditions. For instance, when there are no eligible bidders at grade, an upstretch assignment may be made for positions that are hard to fill, including those at high differential posts (15 percent or higher) and posts that are among the most difficult to staff. State officials pointed out that one-grade stretches are often offered as a reward and as career-enhancing opportunities for those who have demonstrated outstanding performance. Thus, human resources officials at State cautioned us that while global information on employees working in positions above their grade could be generated from the department’s personnel database, records would need to be examined on a case-by-case basis to determine the rationale for each individual assignment. 
In the countries we examined for our review, we focused on staffing data for those officers working two or three grades above their rank. We found instances where this occurred, often with junior officers serving in mid- level and, occasionally, senior-level positions. For example, in Kiev (Ukraine), about half of the Foreign Service officer positions were staffed by junior officers or others in the positions for the first time; several employees were working in positions at least two levels above their grades. In addition, with the consul general position vacant in Kiev for a year and the deputy consul general position vacant for 15 months, a junior officer was serving as acting consul general. A similar situation occurred at a U.S. consulate in Russia when an untenured junior officer was serving as the consul general in 2001. A junior officer told us that, prior to joining the Foreign Service in 1999, he was hired as a part-time intermittent temporary employee in Almaty (Kazakhstan) to serve for 7 months as consular chief at the embassy. Data from several of our post staffing reviews suggest that language requirements make it more challenging to staff some hardship posts— particularly those with languages that are hard to learn. Many of those assigned to these posts lacked the minimum language proficiency to perform their jobs effectively. State officials emphasized the importance of language proficiency to perform effectively, and as one former ambassador stated, “a Foreign Service officer who does not know the language would be inhibited at every turn.” Based on our review of language capabilities of Foreign Service employees at the seven countries we examined, we found that many staff lacked the minimum language proficiency requirements of the positions to which they were assigned. For example, post officials told us that at the U.S. mission in China, 62 percent of Foreign Service employees did not meet the language proficiency requirements of their positions. 
In Russia, 41 percent of U.S. mission employees did not meet the language proficiency level designated for their positions. In Pakistan, five public diplomacy positions in Islamabad, Lahore, and Karachi were held by employees without the language proficiency State would consider useful. In Saudi Arabia, the head of the public diplomacy section at a consulate had no Arabic language skills. According to post officials, language requirements are regularly lowered or waived to fill some positions quickly and reduce lengthy staffing gaps. To compensate for this, missions such as those in China and Russia offer staff the opportunity to pursue language training while they are at post. Although staff felt these opportunities were very helpful, they told us that such training was difficult to pursue because the languages were extremely hard to learn and heavy workloads prevented them from devoting time to training during normal working hours. State’s human resources data system does not provide complete and accurate information that can be readily used for management purposes. More specifically, State officials could not provide, on a global basis, information necessary to assess the extent of staffing shortfalls, including whether the experience and skills of employees match those needed for the positions they fill. We have reported that valid and reliable data are a key element of effective workforce planning and strategic human capital management. While State officials told us they are making significant efforts to improve the department’s mechanisms for workforce planning, we found the existing human resources data that State maintains and analyzes to be limited. For example, State does not maintain historical bidding data, data on directed assignments, or aggregate data on the dispersion of employee ratings and promotions and the extent to which hardship service was considered in these personnel actions. 
In addition, State does not regularly analyze assignment histories to determine how the burden of hardship service is shared among Foreign Service employees. Finally, State has not fully assessed the impact that financial incentives and disincentives may have on recruiting employees for hardship posts. In January 2002, we reported that State had difficulties generating a consistent global aggregate measure of its actual language shortfalls because of inadequate departmentwide data on the number of positions filled with qualified language staff. State officials acknowledged errors in data collection and processing and indicated that corrective action was imminent, but as of May 2002, the human resources bureau was still unable to generate accurate language information from its database. State’s assignment system is not effective in staffing hardship posts. While Foreign Service employees are expected to be available to serve worldwide, few bid on positions at some hardship posts, and very few—excluding junior officers, whose assignments are directed—are forced to take assignments they have not bid on. We found that State’s mechanisms for sharing hardship service and determining staffing priorities have not achieved their intended purposes—to place qualified personnel in appropriate positions while meeting the needs of the Foreign Service and the employees’ professional aspirations and career development goals. Furthermore, financial and nonfinancial recruiting and retention incentives have not enticed employees to bid on some hardship posts in sufficient numbers. According to State officials, the problem of staffing hardship posts is exacerbated by a shortage of officers in the mid-level ranks, as well as certain restrictions such as medical problems (an employee’s or a family member’s), difficulty obtaining jobs for spouses, inadequate schooling for children, or the time to become proficient in a difficult language. (App. 
III discusses many of the key staffing issues at selected posts.) State has launched an aggressive program to hire more staff, but absent a comprehensive approach to human capital management that addresses the needs of hardship posts, these efforts may still fall short of putting the right people where they are most needed and filling the most demanding positions with the most experienced talent. Foreign Service employees are obligated to serve overseas, and mid-level and senior officers are expected to serve a substantial amount of time overseas. However, there is no requirement for hardship service, and the primary approaches State uses to encourage and steer employees toward hardship service have fallen short of their intended objectives to fill critical staffing gaps and to share the burden of hardship assignments. One example illustrating this problem is the assignment of senior officers. These officers are needed at overseas posts, particularly at hardship posts, to apply their experience and give guidance to junior officers. However, as we discuss later, senior officers nearing retirement often prefer to complete their careers in Washington for financial reasons. State’s assignment system tends to accommodate these preferences even though this means that some service needs at hardship posts will not be met. Although procedures are in place to force employees into assignments if there is an urgent service need to fill a position, procedures for directed assignments have rarely been enforced in recent years. Because State does not routinely track the number of directed assignments made, statistics for the 2001 and 2002 assignments cycles were not available. However, previously recorded data showed that only 39 assignments were directed by the Director-General in 1998, 37 in 1999, and 12 from January to June 2000. 
At the same time, State has no criteria that clearly define what constitutes an urgent service need—leaving this determination for the functional and regional bureaus, rather than the human resources office that coordinates assignments, to make. In a February 2002 joint statement, the Director-General of the Foreign Service and the American Foreign Service Association underscored the need to strengthen worldwide availability of Foreign Service employees and called for more aggressive enforcement of existing procedures so that Foreign Service employees serve where their skills are needed most. While there were those who favored directed assignments to deploy staff where and when they are needed, many State officials we interviewed were concerned that such an approach would only create more problems at the post level because employees who are forced into positions they do not want are more apt to have poor morale and be less productive. Based on an expectation that Foreign Service employees be available for their share of hardship assignments, State has special bidding requirements for employees who have not served at a hardship post in the last 8 years. Under the program, Foreign Service employees who have not served 18 months at a hardship post in an 8-year period are considered “fair share” bidders. However, State does not require that these bidders actually be assigned to hardship posts. In fact, rules under this program permit some fair share bidders to bid only on domestic positions. If fair share bidders bid on any overseas assignment, three of the six bids that they submit at their grades and within their specialty must be on hardship posts. Bidders may include up to three bids on assignments one level above their grade at 15 percent hardship posts or higher. However, employees may still choose to bid on posts with lesser hardship (5 to 10 percent differential). In the 2001 assignments cycle, 464 employees were designated as fair share bidders. 
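The fair share designation and bidding rules described above lend themselves to a simple rule check. The sketch below is illustrative only (the data structure and function names are ours, not State's); it encodes just the thresholds this report cites: 18 months of hardship service in the preceding 8 years, and at least three hardship bids among the six at-grade, in-specialty bids whenever any overseas bid is submitted.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    post: str           # post name (illustrative)
    differential: int   # hardship differential in percent (0 = nonhardship)
    overseas: bool      # True for overseas assignments

def is_fair_share(months_at_hardship_posts: int) -> bool:
    # Employees without 18 months of hardship service in the preceding
    # 8-year window are designated "fair share" bidders.
    return months_at_hardship_posts < 18

def fair_share_bids_valid(bids: list) -> bool:
    # Check a fair share bidder's core bid list (assumed here to be the
    # six at-grade, in-specialty bids): an all-domestic list is permitted,
    # but any overseas bidding requires at least three hardship-post bids.
    if not any(b.overseas for b in bids):
        return True  # bidding only on domestic positions is allowed
    return sum(1 for b in bids if b.overseas and b.differential >= 5) >= 3
```

Note that, as the report observes, nothing in these rules forces a fair share bidder into a hardship assignment; the check only constrains the composition of the bid list.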
As shown in figure 2, the vast majority of the fair share bidders—322—were assigned to domestic positions or nonhardship posts. Only 79 bidders, or 17 percent of the total, received hardship assignments. Of this number, 49 bidders were assigned to the greater hardship posts—those with a pay differential of 15 percent or higher. The remaining 63 bidders have already retired or resigned from the Foreign Service or will retire or resign soon. Recognizing that it faced a staffing deficit, State in the past engaged in an exercise just prior to the assignments cycle to identify positions that are less essential and that it therefore would not fill. However, this exercise was not based on realistic expectations of the number of employees available for placement, and State continues to advertise positions for which it has no staff to fill. For example, in June 2000, only 53 mid-level generalist positions were on the list of positions State decided not to fill— a fraction of the 222 mid-level generalist positions that the department identified as the shortfall for the 2001 cycle. For the 2002 cycle, State officials decided not to designate positions it would not fill. Instead, because of increased hiring, in July 2001, regional bureaus identified about 120 mid-level positions to be offered to and filled by junior officers—also well below the staffing shortfall of 607 mid-level positions in this current cycle. Neither of these actions prioritized the positions that needed to be filled based on the actual number of employees available for new assignment, and neither was based on an assessment of State’s staffing priorities worldwide. 
Several former and current ambassadors with whom we met believe the assignment process should include a rigorous and systematic assessment upfront that identifies critical positions that need to be filled based on State’s worldwide strategic priorities and other positions that, although important, should not be filled until State has more staff available. In analyzing bidding data for the 2001 and 2002 summer assignments cycles, we found that positions at hardship posts received significantly fewer bids on average than positions at nonhardship posts. In addition, many mid-level positions at posts with significant U.S. interests had few or no bidders, and the higher the differential incentive paid for a hardship assignment, the fewer the number of bidders. Figure 3 shows the average bids on mid-level positions at overseas posts by differential rate for the 2002 summer assignments cycle. As the graph shows, nondifferential posts such as London, Toronto, Canberra (Australia), Madrid, and The Hague are highly sought, and received, on average, 25 to 40 bids per position. On the other hand, many positions at hardship posts received few, and sometimes no, bids. For example, posts such as Karachi (Pakistan), St. Petersburg (Russia), Shenyang (China), Lagos (Nigeria), Kiev (Ukraine), and Jeddah (Saudi Arabia) received, on average, two, one, or no bids per position. We found that, in the 2002 assignments cycle, 74 mid-level positions had no bidders, including 15 positions in China and 10 positions in Russia. Figure 3 suggests that hardship pay has not been sufficient to attract bidders to certain posts, even at posts where employees can earn an additional 25 percent above their base pay. In fact, according to a 1999 State Department Inspector General survey of Foreign Service employees, 80 percent of the respondents did not believe that the differential pay incentives were sufficient to staff hard-to-fill positions. The line in the graph (fig. 
3) shows the median of the average number of bids for each differential rate. As the line indicates, the median of the average at a nonhardship post is about 14 bids while the median of the average at a 25 percent differential rate post is about 3 bids. For a complete list of the countries that we identified as the most heavily bid and underbid for the 2001 and 2002 cycles combined, see table 10 in app. IV. According to State, the biggest shortages are for Foreign Service generalists in the mid-level ranks, particularly in the administrative, consular, and public diplomacy areas, as well as Foreign Service specialists who provide infrastructure support services. It is in these areas that positions tend to have fewer bidders—oftentimes two or fewer bidders who meet the grade and functional specialty requirements, the threshold at which State considers a position hard-to-fill. As shown in table 2, we analyzed the average number of bids submitted for the 2002 assignments cycle and found an average of fewer than three bidders for administrative and consular positions in 20 and 25 percent hardship posts; and an average of fewer than three bidders for public diplomacy positions in 15 and 25 percent hardship posts. Finally, Foreign Service specialist positions in 25 percent hardship posts also had, on average, fewer than three bidders. Based on these data, it appears that, on average, positions in other functional areas and in the lesser hardship posts (e.g., economic, political, and rotational positions in nondifferential posts) have a greater supply of interested bidders. To fill positions that are difficult to staff, primarily in hardship posts, State’s policies allow bidding and assignment rules to be relaxed when there are not enough bidders. In addition, various employment mechanisms are available to allow post management to fill staffing gaps with temporary or limited-term personnel when necessary. 
While these options help ease the staffing problems at hardship posts and offer short-term relief, they are less than ideal. Senior post officials acknowledged that employing staff with less experience and expertise than the positions require impedes the efficiency of post operations but that the alternative—absorbing the impact of extended staffing gaps—is worse. Bidding and assignment rules may be relaxed for (1) hard-to-fill positions—where there are two or fewer fully qualified bidders who are at grade and are in the designated specialty, and (2) posts that are identified as among the most difficult to staff—where 50 percent of the positions advertised have two or fewer bidders. Ninety-eight, or about 38 percent, of the posts overseas met the criteria to be designated most difficult to staff in the 2002 assignments cycle. To staff positions at these posts, State eases certain rules, which could compromise diplomatic readiness. For example, to attract employees to bid on these positions, the department may allow stretch assignments early in the assignments cycle, waive language requirements, or offer unusually short tours of duty (12 to 18 months). The vast majority of the most-difficult-to-staff posts are in the Bureau of African Affairs, with about 40 percent (39) of the posts, and the Bureau of European and Eurasian Affairs, with 27 percent (26 posts, mostly in the Newly Independent States). (A complete list of U.S. diplomatic posts worldwide is shown in app. V.) In addition, State offers assignment opportunities for State Department civil service employees to temporarily fill Foreign Service positions that remain underbid. State targeted 50 such positions to fill in 2001. In 2002, State established a limit to fill 50 Foreign Service positions with civil service employees, including those who were already in the program and went on to a subsequent tour. 
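The two staffing thresholds described above follow mechanically from per-position bid counts. The sketch below is our own illustration, not a State system; it encodes only the criteria stated in this report (two or fewer fully qualified bidders per position, and 50 percent of a post's advertised positions underbid).

```python
def is_hard_to_fill(qualified_bidders: int) -> bool:
    # A position is considered hard to fill when it draws two or fewer
    # fully qualified (at-grade, in-specialty) bidders.
    return qualified_bidders <= 2

def is_most_difficult_to_staff(bidders_per_position: list) -> bool:
    # A post is among the most difficult to staff when at least half of
    # its advertised positions have two or fewer bidders.
    if not bidders_per_position:
        return False
    underbid = sum(1 for n in bidders_per_position if n <= 2)
    return underbid / len(bidders_per_position) >= 0.5
```

For example, a post advertising four positions that drew 1, 2, 5, and 9 bidders would meet the most-difficult-to-staff criterion, since half of its positions had two or fewer bidders.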
Approximately 200 civil service employees are now assigned to hard-to-fill positions overseas that are ordinarily staffed by Foreign Service officers. In a report to State in March 2001, the Office of the Inspector General supported using civil service employees to fill overseas vacancies but stated that the program had not substantially reduced the systemwide staffing shortage. Moreover, despite widespread support for the program, use of civil service staff in Foreign Service positions raises workforce planning concerns, particularly for the bureaus that are sending, and thus temporarily losing, their civil service staff. State also employs retired Foreign Service officers for temporary duty, international fellows and presidential management interns, family members, and American residents who are hired locally as part-time intermittent temporary employees or on personal services contracts or agreements. According to post officials, although these staff augment the capabilities of mission operations, the methods by which they are hired, the tasks to which they are assigned, and the employee benefits to which they are entitled are not applied consistently—thereby raising some personnel and morale issues at the post level. State does not regularly analyze how the burden of hardship service is being shared among Foreign Service officers, although this has been a long-standing concern. To measure how the burden is shared, we analyzed the careers of 1,100 mid-level Foreign Service officers who were hired between 1986 and 1991, which represents about 10 to 15 years of service. We performed the analysis by using the Lorenz curve, which is a methodology traditionally used to measure income inequality. Figure 4 shows the relationship between the percentage of employees and the percentage of weighted hardship burden. (For a detailed discussion of our methodology, see app. I.) The graph is an indication of how the hardship burden is being shared. 
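The Lorenz-curve computation underlying figure 4 can be reproduced with a few lines of code. The sketch below uses hypothetical burden values and function names of our own; GAO's actual analysis weighted each officer's months of hardship service (see app. I for the methodology).

```python
def lorenz_points(burdens):
    # Lorenz curve for hardship sharing: sort officers from least to most
    # burden served, then pair each cumulative share of officers with the
    # cumulative share of total burden. Assumes a positive total burden.
    values = sorted(burdens)
    total = float(sum(values))
    points, cum = [], 0.0
    for i, v in enumerate(values, start=1):
        cum += v
        points.append((i / len(values), cum / total))
    return points

def share_of_bottom(burdens, fraction):
    # Share of total burden carried by the bottom `fraction` of officers.
    values = sorted(burdens)
    k = int(len(values) * fraction)
    return sum(values[:k]) / float(sum(values))
```

Under perfectly equal sharing, the bottom half of officers would carry exactly 50 percent of the burden and the curve would lie on the diagonal; the report's finding that the bottom half carried only 27 percent reflects how far the actual curve bows away from it.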
The broken diagonal line represents perfect sharing of burden while the curve reflects how the actual burden is shared. The data indicate that half of the officers experienced 27 percent of the hardship burden while the other half experienced 73 percent (point A). Viewed another way, the bottom 20 percent of employees served 5 percent of the hardship (point B) while the top 20 percent served about 37 percent of the hardship (point C). State officials noted several reasons why some employees cannot serve at certain hardship posts, such as medical conditions, inadequate schools for their children, and a lack of spousal employment opportunities. State offers some financial incentives for hardship service, which have yielded mixed results. These financial incentives include a post differential allowance (or hardship pay) ranging from 5 to 25 percent of base pay to compensate employees for service where environmental conditions differ substantially from those in the United States and to entice them to serve. While there are factors other than money that may keep an officer from bidding for a position at a particular hardship post or restrict an officer’s options to only selected posts, our analysis of bidding data (fig. 3) suggests that the differential rate has not been effective in enticing a significant number of employees to certain posts. To address this issue, in 2001, State began to provide an additional 15 percent incentive to those who sign up for a third year at selected 2-year posts that have been extremely difficult to staff. According to State officials and Foreign Service employees, the incentive provided by differential (hardship) pay for overseas service has been diminished by rules governing locality pay. Locality pay is a salary comparability benefit intended to help the federal government compete with the private sector for workers in the continental United States. 
In 1994, an executive order began the process of allocating annual governmentwide pay increases between base pay and locality pay. However, Foreign Service employees serving overseas do not get locality pay. Thus, the differences in the statutes governing differential pay for overseas service and locality pay have created a gap between the compensation of domestic and overseas employees—a gap that grows each year as locality pay rates continue to rise by 1 percent or more annually. State has not analyzed the effect that this difference has had since 1994 on the number of Foreign Service employees who bid on overseas assignments, including hardship posts. However, State Department officials, the American Foreign Service Association, and many officers with whom we met said that this gap penalizes overseas employees and that if it continues to grow, it will inevitably keep employees from choosing an overseas career in the Foreign Service. Figure 5 illustrates the effect that increases in locality pay have on the relative value of overseas differential rates. As figure 5 shows, the advantage of overseas pay with differential has eroded over time and locality pay has created a financial disincentive for all overseas employees. As of January 2002, the locality pay rate for Washington, D.C., was 11.48 percent. We estimate that by 2006 and 2010, the differential pay incentives for the 15 percent and 20 percent differential posts, respectively, will be less than the locality pay for Washington, D.C., assuming that the locality pay rate continues to increase at about 1 percent per year. In addition, Foreign Service employees we interviewed emphasized that it is also a financial disincentive to retire while serving overseas because post differential is not used to determine an officer’s retirement benefits whereas locality pay, which is offered to those employees who serve in Washington, D.C., is factored into the retirement benefit. 
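The erosion described above can be illustrated with a simple projection using the report's own figures: an 11.48 percent Washington, D.C., locality rate in January 2002, growing by about 1 percent per year. A minimal sketch (the function name and the exact growth rates are illustrative assumptions) finds the first year the locality rate overtakes a given post differential:

```python
def crossover_year(differential_pct, start_year=2002,
                   start_rate=11.48, annual_growth=1.0):
    """First year in which the projected D.C. locality rate exceeds a
    given hardship differential percentage."""
    year, rate = start_year, start_rate
    while rate <= differential_pct:
        year += 1
        rate += annual_growth
    return year

# At exactly 1 point of growth per year, the 15 percent differential is
# overtaken in 2006; the 20 percent crossing lands in 2010 or 2011
# depending on the precise growth rate assumed (the report says "about
# 1 percent per year" and estimates 2010).
print(crossover_year(15))               # 2006
print(crossover_year(20, annual_growth=1.07))
```

Under these assumptions, the compounding of even modest annual locality increases is enough to fully offset mid-range hardship differentials within a decade.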
According to State human resources officials, retiring with a high-three average salary calculated on service abroad can result in a substantial annual reduction in annuity, compared with a Washington-based high-three average salary. As a result, a significant number of employees who are nearing retirement return to Washington, D.C., for their last tour of duty to have their locality pay factored into their high-three salaries for purposes of calculating retirement benefits. In fact, according to State, since 1997, 62 percent of senior Foreign Service and management-level employees who retired concluded their careers in Washington rather than at an overseas post. This exacerbates the problem of staffing hardship posts because the most experienced employees tend not to choose overseas service during their last tour of duty. To address these overseas pay and retirement benefit issues, State, with the support of the American Foreign Service Association, proposed that Foreign Service employees working overseas should get locality pay equal to the Washington, D.C., rate. The Office of Personnel Management agrees that locality pay should be extended to overseas employees and has asked the Office of Management and Budget to consider this issue. The State Department estimates that it would cost $50 million to $60 million a year to increase overseas employees' pay to the Washington, D.C., level. State officials believe that extending locality pay to overseas employees is the best way to address pay comparability issues with employees serving in Washington, D.C. As an interim measure, the administration has approved and forwarded to Congress a supplemental retirement proposal to address, for those who are nearing retirement, the immediate problem of reduced retirement annuities due to service overseas.
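The high-three arithmetic above can be illustrated with a minimal sketch. The 1.7 percent multiplier, the 25 years of service, and the $90,000 high-three figure are illustrative assumptions rather than State's actual benefit formula; only the 11.48 percent locality rate comes from the report.

```python
def annuity(high_three_salary, years_of_service, multiplier=0.017):
    """Illustrative annuity: multiplier x high-three average x years served.
    The multiplier is an assumption, not State's actual benefit formula."""
    return multiplier * high_three_salary * years_of_service

locality = 0.1148        # January 2002 Washington, D.C., locality rate
high_three = 90_000      # hypothetical high-three average without locality pay
years = 25               # hypothetical length of service

overseas_annuity = annuity(high_three, years)
washington_annuity = annuity(high_three * (1 + locality), years)

# The gap below is the annual annuity difference attributable solely to
# having locality pay in the high-three average.
print(round(washington_annuity - overseas_annuity))
```

Even under these rough assumptions, a final Washington tour adds several thousand dollars to the annuity every year of retirement, which is consistent with the report's observation that employees return to Washington for their last tour.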
While these proposals could encourage overseas service, there are no assurances that they will fully address the problem of staffing hardship posts because all overseas Foreign Service employees would gain the same benefit and may continue to bid on assignments at nonhardship posts. The State Department has developed a pilot program that offers an additional financial incentive to employees accepting a 3-year tour in 41 designated hardship posts. This effort has begun to make a difference in a number of posts. Nonetheless, some employees choose not to remain at post for an additional year and thus forego the additional differential of 15 percent. Out of 173 positions that were eligible for the program in the 2001 assignments cycle, the first full year the program became operational, 127 employees (73 percent) signed up for a third year at posts that ordinarily require a 2-year tour. Based on State records, the program was estimated to cost about $1.8 million in fiscal year 2002. While many State officials with whom we met—in Washington and at the posts we visited—were enthusiastic about the new program, it appears that some of the more difficult hardship posts have not yet realized the benefits they had hoped the additional incentive might bring. For example, 10 employees in two China posts—Chengdu and Shenyang—extended their tours to take advantage of the new incentive. However, bureau officials noted that, even with the additional 15 percent differential offered as a recruiting incentive, Shenyang has no bidders for any of the six positions advertised in the current 2002 cycle; Chengdu had a few bidders, but none of them opted to take advantage of the incentive and sign up for an additional year. None of the staff assigned to two posts in Russia—Vladivostok and Yekaterinburg—has chosen an extended tour, and none of the employees newly assigned to these posts has opted for an additional year. 
In Kiev, about half of those eligible signed up for the program and extended their tours for a third year. In Lagos and Abuja, 16 percent of the employees who were eligible extended their tours in 2002, the first year that the program went into effect there. While several State officials in Washington suggest that service at hardship posts is favorably considered in various aspects of a Foreign Service officer’s career, such as promotions and onward assignments, many of the post staff with whom we met said they believe otherwise. However, State could not provide data on the extent to which hardship service is actually taken into account in such personnel decisions. The criteria that State’s selection boards use to determine promotion of Foreign Service officers do not explicitly require hardship service. However, the guidelines do state that an officer’s performance under unusually difficult or dangerous circumstances is relevant in evaluating whether an officer has the qualities needed for successful performance at higher levels. In addition, the guidelines do not require service abroad as a prerequisite for promotion, but they do encourage selection boards to consider an officer’s demonstrated competence in that regard. Ironically, some employees believe that hardship service could actually disadvantage them on promotion decisions. State officials also told us that service at hardship posts is generally considered in determining an employee’s next assignment, and a number of post management officials agreed that fair onward assignments are one way to reward employees for serving at hardship posts. However, many employees at several hardship posts we visited were not convinced that their service at a hardship assignment would necessarily be rewarded in determining their next assignment. 
Nonetheless, we noted that bidding instructions for junior officers do state that in filling heavily bid vacancies at popular nonhardship posts, priority and appropriate credit will be given to those serving at hardship posts. Bidding instructions for mid-level and senior positions do indicate that prior service at hardship posts is one of several factors considered in determining assignments, in addition to an employee's language competence, rank, and functional expertise for the position. As part of its Diplomatic Readiness Initiative announced in January 2001, State has launched an aggressive recruiting program to rebuild the department's workforce. According to State officials, the department is on track to meet its hiring goal of 465 new Foreign Service officers this fiscal year. As of March 2002, State reported hiring or committing to hire 344 new junior officers, 74 percent of State's hiring target for this fiscal year. Under the Diplomatic Readiness Initiative, State requested a total of 1,158 new employees above attrition over the 3-year period from fiscal years 2002 to 2004. State officials, particularly those in Washington, D.C., believe that State's hiring program will largely address the staffing shortage the department now faces as new entry-level junior officers advance to the mid-level ranks. However, it will take years before the new hires advance to the mid-level ranks, where State has reported experiencing its biggest staffing deficit. Moreover, as the new employees advance to mid-level positions, they may also tend not to bid on hardship assignments. Although post officials were encouraged by the new hiring, a number of them were not clear as to whether and how the additional officers hired under the Diplomatic Readiness Initiative will address specific staffing shortfalls experienced at some hardship posts.
A senior official in China told us that neither the geographic bureau nor the post has advance knowledge about the new recruits—posts in China can hope but have no assurances that there are enough recruits with some language skills to keep an adequate pool of language-trained staff in the pipeline. An officer in Nigeria noted that individuals with backgrounds in development work and humanitarian affairs, such as former Peace Corps volunteers and those who have worked with nongovernmental organizations, would be especially appropriate for many of the hardship posts in Africa; and for that reason, diversifying the pool of applicants to reach out to such groups is important. Human resources officials in Washington told us that State has embarked on an active outreach program that targets, for example, college campuses, professional associations, and other groups that offer a pool of potential applicants who are proficient in difficult languages and possess other knowledge, skills, and competencies the Foreign Service desires. In addition, they said State is intensifying overseas recruitment efforts. Although State has numerical hiring goals for broad occupational skill categories, State does not have numerical targets for specific skill requirements such as language or regional expertise. In general, the department recruits generalists with a broad range of skills, and they are later trained in specific areas to meet changing requirements. Thus, although State officials are optimistic that enough new hires are being brought in to address the overall staffing shortage, there are no assurances that the recruiting efforts will result in the right people with the right skills needed to meet specific critical shortfalls at some hardship posts. 
The State Department is facing serious staffing shortfalls at many of its posts, especially those designated hardship posts, and State’s system for assigning available staff has been ineffective in ensuring that overseas staffing requirements, particularly at strategic posts, are adequately addressed. In making assignment decisions, State attempts to strike a balance between matching the preferences, personal circumstances, and professional development goals of individual employees with the needs of the service. However, in an environment where the number of positions exceeds the number of staff to fill them, State is not able to ensure that staff are assigned where they are needed most. The new service need differential program holds some promise, but the extent to which it will address the problem of staffing hardship posts remains unclear. State believes that the department’s new hiring initiatives will gradually solve its current staffing problem. However, positions at hardship posts will continue to have fewer bids from qualified Foreign Service employees unless (a) adequate incentives are in place to entice these employees to bid on and accept assignments at hardship posts and (b) appropriate levers are used, when necessary, to assign experienced staff where they are most needed. Moreover, an assignment system that puts Foreign Service employees in the driver’s seat and does not systematically prioritize the posts and positions that must be filled does not ensure that State’s staffing requirements at hardship posts are adequately addressed. Without a comprehensive, strategic approach to marshaling and managing State’s human capital, there is little assurance that State will be able to place the right people in the right posts at the right time. As a result, diplomatic readiness could be at risk at hardship posts, many of them of significant importance to the United States. 
In light of our findings that State’s assignment system has not been effective in addressing staffing requirements at hardship posts, including many of strategic importance, we recommend that the Secretary of State: improve personnel and assignment data so that they will (1) allow State to fully assess its human capital capabilities and limitations and enhance the department’s workforce planning efforts, and (2) enable State to take a fact-based, performance-oriented approach to human capital management that would involve analyzing bidding and assignment data to determine its success in addressing staffing needs at all posts, including hardship posts and posts of strategic importance to the United States; rigorously and systematically determine priority positions that must be filled worldwide as well as positions that will not be filled during each assignments cycle, based on the relative strategic importance of posts and positions and realistic assumptions of available staff resources; consider a targeted hiring strategy, with measurable goals, designed to specifically address critical shortfalls, such as employees who are proficient in certain foreign languages; are interested in those particular positions, functional specialties, or career tracks that are in short supply; and are interested in serving in hardship locations; and develop a package of incentives and implement appropriate actions to steer employees toward serving at hardship posts. Such measures could include: 1. proposing a set of financial incentives to Congress that State believes will entice more employees to bid on and accept hardship positions based on analyses that estimate the costs and likelihood of increasing the number of Foreign Service employees who bid on assignments in the selected hardship posts; 2. making hardship service an explicit criterion for promotions and 3. 
employing more directive approaches to assignments as necessary to steer fully qualified employees toward hardship posts that require their skills and experience and to ensure that hardship assignments are shared equitably. The State Department provided written comments on a draft of our report. State's comments, along with our responses to specific points, are reprinted in appendix VI. In general, State found our report to be very helpful. It acknowledged the difficulties the department faces in staffing hardship posts around the world and the negative effect that staffing problems have on these posts. State found our statistical findings, including our analyses of bidding and assignment patterns as well as the relative decline of hardship pay due to the lack of locality pay for employees assigned abroad, to be very useful. State indicated that it would implement two of our recommendations. The department said it will (1) study alternative ways to provide additional incentives for employees to serve at hardship posts, and (2) review the implementation of human resources data systems to enhance State's reporting capabilities along the lines that we suggested. State did not indicate its position with regard to our two other recommendations—that State rigorously and systematically determine staffing priorities worldwide and consider a targeted hiring strategy. State attributes the problem of staffing hardship posts to the department's staffing shortfall of 1,100 people, which the department is addressing through its Diplomatic Readiness Initiative. In addition to hiring more staff, a major thrust of State's efforts is addressing the locality pay issue. While we acknowledge that these efforts would ease State's overall staffing problem, both domestically and overseas, we do not believe that they would necessarily fully address the staffing requirements of hardship posts, including those of significant importance to the United States.
We hold this opinion because staffing decisions made under State's assignment system tilt the balance toward employee preferences, rather than the needs of the service. Although there will be more staff available to fill positions, it will take years before the new hires advance to the mid-level ranks where State has reported the largest deficit. Furthermore, as the new employees advance to mid-level positions, they may tend to bid on and be assigned to non-hardship posts unless State (1) hires people with the specific skills that are in short supply and who are inclined to serve in hardship posts and (2) puts in place appropriate levers to steer employees with the right skills and experience to serve in hardship posts. We do not believe that hardship posts should suffer disproportionately from staff shortages. Our recommendations, if implemented, would help ensure that the staffing needs of hardship posts, including those critical to U.S. interests, are met. We are sending copies of this report to appropriate congressional committees. We are also sending copies of this report to the Secretary of State. Copies will be made available to others upon request. If you or your staff have any questions about this report, please contact me at (202) 512-4128. Other GAO contacts and staff acknowledgments are listed in appendix VII. To assess the number, experience, and skills of staff in hardship positions and the potential impact on diplomatic readiness, we selected seven countries identified by State as strategically important to U.S. interests: China, Kazakhstan, Nigeria, Pakistan, Russia, Saudi Arabia, and Ukraine.
We also visited seven hardship posts in three of the countries we examined—Beijing, Guangzhou, Shanghai, and Shenyang in China; Riyadh and Jeddah in Saudi Arabia; and Kiev, Ukraine—where we met with numerous post officials to obtain human resources data not available in headquarters and to assess the impact that staffing shortfalls may have on diplomatic readiness. To examine how well State's assignment system is meeting the staffing requirements of hardship posts, we reviewed State's policies, processes, and programs for filling hardship posts, as well as State's open assignments manuals and other human resources documents. In addition, we analyzed the process, mid-level bidding data, and results of the 2001 assignments cycle, including fair share assignments; mid-level bidding data on the 2002 assignments cycle; and the assignment histories of 1,100 mid-level generalists hired between 1986 and 1991. We did not validate the accuracy of the data obtained from State. We also met with several offices within the Bureau of Human Resources; executive directors, post management, and human resources officials in five of the six regional bureaus; nine current and former ambassadors who have served in hardship posts; and representatives of the American Foreign Service Association. We analyzed bidding data to determine the average number of position bids by posts, the median average bid for each differential rate, and the areas of specialization that are difficult to staff. For these analyses, we used the mid-level bidding data for the 2001 and 2002 summer assignments cycles. The bidding data include the number of positions to be filled at each post and the number of bids received for each position. We used the mid-level bidding data because mid-level positions comprised 58 percent of the total Foreign Service workforce.
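The bid-averaging and median computations described above can be sketched as follows, using the worked examples that appear in this appendix (Beijing's 53 bids on 12 positions, and the hypothetical five posts at the 25 percent differential rate):

```python
from statistics import median

def average_bids(total_bids, positions_to_fill):
    """Average number of bids per position at a post: total bids received
    across all positions, divided by positions to be filled."""
    return total_bids / positions_to_fill

# Beijing in the 2002 summer cycle: 53 bids on 12 positions to fill.
print(round(average_bids(53, 12), 1))  # 4.4

# Median of the per-post averages at one differential rate, using the
# report's hypothetical example of five posts at the 25 percent rate.
print(median([3, 5, 7, 9, 16]))  # 7
```

The median of the per-post averages, rather than a grand mean, keeps a single heavily bid post from masking weak interest in the rest of the posts at that differential rate.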
We also used the bidding data for the summer assignments cycle because, according to State officials, most employees are transferred during this cycle, compared to the winter cycle. In addition, the analysis was limited to 2 years because State has bidding data for only the 2001 and 2002 cycles. Although we analyzed data for the two cycles, we provided information for only the 2002 cycle (see fig. 3) because the results for 2001 were similar: To obtain the average number of bids for each post, we took the total number of bids received on all positions at each post and divided it by the total number of positions to be filled at the post. For example, in the 2002 summer assignments cycle, Beijing had 12 positions to be filled and received a total of 53 bids, resulting in an average of 4.4 bids for this post. To obtain the median bid at each differential rate, as represented in the line in figure 3, we arranged in ascending order the average bid for each post at the corresponding differential rate and used the middle average bid. For example, assuming there are only 5 posts at the 25 percent differential rate and their average bids are 3, 5, 7, 9, and 16, the median of the average bids is 7. To measure how the hardship burden is shared by Foreign Service employees (fig. 4), we analyzed about 10,000 assignments of 1,100 mid-level generalists with 10 to 15 years of service. We performed the analysis by using the Lorenz curve, which is a methodology traditionally used to measure income inequality: First, we assigned weights to posts based on State's level of differential (hardship) pay. State's differential pay ranges from 5 percent to 25 percent of base pay. For example, we assigned 1.0 to a nonhardship post, 1.10 to a 10 percent hardship post, and 1.25 to a 25 percent hardship post. Next, we multiplied the number of days each mid-level generalist served at each post by the weighted post differential to obtain total hardship weighted days.
We then subtracted the total number of unweighted days served at all posts from the total hardship weighted days to obtain the number of hardship burden days for each generalist. The number of hardship burden days was divided by the number of career years served to obtain hardship burden per year per employee. The graph represents the ordering of employees from the lowest to the highest weighted hardship burden. In addition, we analyzed D.C. pay, which incorporates locality pay, versus overseas pay with differential rates to determine the effects of the locality rate on the relative value of overseas differential rates (fig. 5). For our analysis, we focused on a hypothetical Foreign Service officer at the FS-04 step 13 level, who would have had a base salary of $50,526 when locality pay was put in place in 1994. We then compared subsequent increases in pay for D.C. employees with pay increases for personnel at nonhardship posts and at posts with varying levels of differential rate. For the period from 1994 through 2002, we used historical data provided by the Office of Personnel Management. Based on these historical patterns and projections of increases in federal pay levels by the Office of Management and Budget, we assumed that D.C. pay increases will average 4 percent annually from 2003 to 2011 and that overseas pay increases will average 3 percent annually over that period because locality pay is not included in overseas pay. The overseas pay does not include other allowances such as education and housing, whose value varies depending on the circumstances of the individual employee. We conducted our review from July 2001 to May 2002 in accordance with generally accepted government auditing standards. The authority to make assignments, which is granted to the Secretary of State, is delegated to the Undersecretary for Management. This authority is exercised through the Director-General of the Foreign Service, who is responsible for formulating and implementing personnel policies and programs.
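The hardship-weighting steps described in the methodology above can be sketched as a short function. The sample career below is hypothetical; the 1-plus-differential weights follow the report's scheme (1.0 for a nonhardship post, 1.25 for a 25 percent post).

```python
# Each assignment: (days served, post differential as a fraction, e.g. 0.25).
def hardship_burden_per_year(assignments, career_years):
    """Weighted hardship burden per year for one generalist, following the
    report's methodology: weight each post's days by 1 + differential,
    sum the weighted days, subtract the unweighted days, and divide by
    the number of career years served."""
    weighted_days = sum(days * (1 + diff) for days, diff in assignments)
    unweighted_days = sum(days for days, _ in assignments)
    return (weighted_days - unweighted_days) / career_years

# Hypothetical 10-year career: 4 years at a 25 percent post, 6 years at
# nonhardship posts.
career = [(4 * 365, 0.25), (6 * 365, 0.0)]
print(hardship_burden_per_year(career, 10))  # 36.5 burden days per year
```

Ordering all 1,100 generalists by this per-year burden and plotting cumulative shares yields the Lorenz curve shown in figure 4; an employee who never served at a hardship post contributes a burden of zero.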
Under the direction of the Director-General and the Principal Deputy Assistant Secretary for Human Resources, the Director of the Office of Career Development and Assignments (HR/CDA) is responsible for assigning Foreign Service personnel resources throughout the State Department and overseas. The functions of HR/CDA are divided into four divisions: Senior Level, Mid-level, Entry Level, and Assignments. (Fig. 6 below illustrates the organization and functions of HR/CDA.) State policy is that Foreign Service employees are to be available to serve worldwide. Foreign Service personnel are assigned through an “open assignment system.” The current open assignment process was established in response to a directive issued from the Secretary of State in June 1975, which called for creating a more open, centrally directed assignment process. The system is designed to engage all Foreign Service employees directly in the assignment process by providing information on all position vacancies and giving them the opportunity to compete openly. According to HR/CDA, while a major element of the 1975 directive was to eliminate the right of a bureau or post to veto assignments, the mandate for HR/CDA to take bureau and post interests into account in making assignments was extended and strengthened. Prior to the start of the assignments cycle, the open assignments agreement is negotiated each year between management and the American Foreign Service Association to cover applications for positions represented by the association (bargaining unit positions). Based on State’s open assignments manual, management, for the purposes of transparency and efficiency, also applies the agreement to nonbargaining unit positions, such as the deputy chiefs of mission. State has two assignments cycles: summer and winter. State’s assignment process centers on the high-volume summer transfer season, which is when most Foreign Service employees assume their new assignments. 
The assignment process begins when approximately 3,500 Foreign Service employees who are eligible to be transferred from their current assignment each year receive a list of instructions and upcoming vacancies for which they may compete. Staff then must submit a list of those positions for which they want to be considered. In general, employees must bid on at least 6 positions and no more than 15; 6 of the bids should be at their grades and within their designated functional specialty (called "core" bids) and be in more than one bureau or geographic region. To encourage service at hardship posts, three bids on one-grade stretch assignments at 15 percent and above differential posts now may count among an employee's core bids. The remaining nine bids may be on any other positions, including those outside of an officer's specialty or for training, detail, and stretch assignments. There are other regulations that pertain to fair share and service at hardship posts, length of service in Washington, D.C., tandem couple procedures, and medical clearances. Employees also submit bids based on their preferences by indicating whether bids are high, medium, or low priority. This designation is shared with the panels but not with the bureaus or posts. After employees make their choices, most submit bids electronically to their career development officers, who review the bids for compliance with applicable rules and regulations. From this point forward, the process takes various paths depending upon an officer's grade and functional specialty. Junior and certain senior positions are governed by different procedures, as are assignment categories including long-term training, hard-to-fill positions, and details to other agencies and organizations. Certain assignments/positions are determined early in the assignment process. Starting about 3 months into the summer assignment process (around the end of October), employees may be assigned to certain positions by a panel.
These early assignments include at-grade fair share bidders at 15 percent or higher differential posts, as well as deputy chief of mission, principal officer of consulate, office director, Special Embassy Program post, long-term training, and other key positions. Fair share bidders also may be assigned to at-grade positions at differential posts, and to one-grade stretch positions at 15 percent or higher differential posts. When the regular assignment season begins in December, HR/CDA proceeds with at-grade assignments, where language requirements are met, and stretch assignments at 15 percent differential and most-difficult-to-staff posts. Other stretch proposals are held until March. HR/CDA will continue to focus on the hard-to-fill positions, and by the middle of March of the following year civil service personnel can bid on Foreign Service hard-to-fill positions. Certain specified domestic and overseas positions cannot be filled without the agreement of the interested principal officer, assistant secretary, and/or the ambassador. These positions include deputy assistant secretaries, office managers for principal and assistant secretaries, deputy chiefs of mission, special assistant to the ambassador, and chief of mission office managers. The appropriate HR/CDA division, working through the assignment officers, consults with bureaus to define position requirements and to request names of preferred potential candidates. Slates of qualified candidates for policy-level positions (deputy chief of mission, deputy chief of mission/special embassy posts) are reviewed and approved by a special committee and submitted to the Director-General for selection. After a candidate is selected, the assignment officer or career development officer will bring the assignment to panel for approval. Mid-level employees comprise the majority of the Foreign Service staff.
Generally, the process brings together the employee’s interests, represented by the career development officers, and the bureau’s interests, represented by the assignment officers. State Department officials stressed that it has become increasingly useful, and in some cases essential, for mid-level employees to make themselves known to their prospective supervisors when pursuing their next assignment. After all the bids are submitted, HR/CDA prepares a bid book, which lists the bidders for every projected job vacancy. All bureaus and posts receive a copy of the bid book, which represents the official start of what is referred to at State as the “meat market.” This is when the bureaus attempt to identify the most qualified bidders for jobs available. It is also when bidders start marketing themselves to secure their choice assignments. However, State employees told us that marketing or lobbying actually starts long before bids are submitted, adding that lobbying for a job is not easy for many people. Assignment decisions ultimately are made by panels within the Career Development and Assignments Office. According to State, panels apply a variety of criteria when considering applicants for a position, including transfer eligibility, language competence, rank, and functional specialty. In addition, panels consider and give varying weights to service need, employee and bureau preferences, employee career development and professional aspirations, special personal circumstances (such as medical limitations and educational requirements of family members), and prior service at hardship posts. Bureaus or individuals may appeal panel decisions to the Director-General. The mid-level panel makes roughly 60 percent of Foreign Service assignments. The assignment process for untenured junior (entry-level) officers is somewhat different than the process for mid-level and senior-level officers. 
While junior officers also submit bids that indicate their preferences, their assignments are directed by the Entry Level Division with little input from the posts or bureaus on which the employees bid. In fact, junior officers are strongly advised not to lobby the bureaus and posts in which they have an interest. According to State, the directed approach ensures maximum fairness in making assignments. The Entry Level Division proposes assignments to the assignments panel only after taking into account an officer’s preferences, language probation status, functional and geographic diversity, equities from prior service in hardship posts, timing, and other important factors. In addition, according to HR/CDA, while the list of bidders goes to the panel, the assignment is done “off panel.” Junior officers serve their first two tours overseas and are expected to serve in consular positions in either the first or second of these assignments, normally for a minimum of 1 year but not less than 10 months. The following tables summarize staffing data and some of the factors that affect staffing of hardship posts in each of the seven countries we examined. Information for the four countries we included in our review but did not visit—Kazakhstan, Nigeria, Pakistan, and Russia—was obtained from the regional bureaus in Washington, D.C., with input from post officials. Table 10 lists the countries in each region that had the most bids per position, on average, and the fewest. Table 11 lists the 259 diplomatic posts that State operates worldwide, by region and by country. For every post, the tour of duty, hardship differential pay, and any danger pay that may be applicable are shown. The list also shows the 41 posts that have been designated for a service need differential—an additional recruitment and retention incentive of 15 percent above base pay for those who agree to serve for a third year—and the 98 posts that State has designated most-difficult-to-staff. 
The following are GAO’s comments on the Department of State’s letter dated June 5, 2002. 1. We agree that hiring staff under the Diplomatic Readiness Initiative will enable State to fill more of its positions. However, unless other actions are taken, such as those we have recommended, certain hardship posts may continue to be disproportionately staffed with entry-level employees who may not have the right experience, training, and skills to perform their jobs effectively. Furthermore, it will take years for new employees to acquire the skills and experience required to fill the mid-level positions. In the meantime, State needs to ensure that hardship posts do not suffer disproportionately from State’s shortages of mid-level employees. 2. We acknowledge that entry-level employees are frequently assigned to hardship posts. Our concern is that entry-level employees are assigned to positions that require more experience and that they may not get the supervision and guidance they need from more experienced staff due to the shortage of mid-level officers at hardship posts. 3. Our work shows that State is having difficulty filling positions at hardship posts that are critical to U.S. interests with qualified, experienced staff. Based on our case studies, State’s assignment system does not necessarily ensure that staff are assigned to positions in locations where they are needed most. For example, as noted in our report, State had difficulties staffing public diplomacy positions in Saudi Arabia with experienced, Arabic-speaking officers. In China and Russia, many Foreign Service officers did not meet the language proficiency requirements for their positions. Moreover, State does not rigorously and systematically determine its worldwide staffing priorities. 4. 
In studying additional incentives for employees to serve at hardship posts, State needs to examine not only financial incentives but also nonfinancial incentives and other actions specifically designed to steer qualified employees toward hardship posts that require their skills and experience and to ensure that the burden of hardship service is shared equitably. These actions could include, for example, making hardship service an explicit criterion in promotion and onward assignment decisions and employing more directive approaches to assignments. Any proposal for financial incentives should fully analyze the estimated costs associated with each option and assess how each would affect the likelihood of increasing the number of Foreign Service employees who bid on assignments at selected hardship posts. In addition to the person named above, Joy Labez, Barbara Shields, Phil McMahon, Melissa Pickworth, and Janey Cohen made key contributions to this report. Rick Barrett, Tim Carr, Martin De Alteriis, Mark Dowling, Jeffrey Goebel, Kathryn Hartsburg, Bruce Kutnick, Mike Rohrback, and Ray Wessmiller also provided technical assistance.
Foreign Service employees often experience difficult environmental and living conditions while assigned to U.S. embassies and consulates that are designated as "hardship posts." These conditions include inadequate medical facilities, few opportunities for spousal employment, poor schools, high levels of crime, and severe climate. Because the State Department is understaffed, in terms of both the number and types of employees, it has difficulty ensuring that it has the right people in the right place at the right time. The impact of these staffing shortfalls is felt most strongly at hardship posts, some of which are of strategic importance to the United States. As a result, diplomatic programs and management controls at hardship posts could be vulnerable and the posts' ability to carry out U.S. foreign policy objectives effectively could be weakened. State's assignment system is not effectively meeting the staffing needs of hardship posts. Although American Foreign Service employees are obligated to serve anywhere in the world, State rarely directs employees to serve in locations for which they have not shown interest by bidding on a position. Because few employees bid on these positions, State has difficulty filling them.
The main purpose of a foreign counterintelligence investigation is to protect the U.S. government from the clandestine efforts of foreign powers and their agents to compromise or to adversely affect U.S. military and diplomatic secrets or the integrity of U.S. government processes. At the same time, however, many of the foreign powers’ clandestine efforts may involve a violation of U.S. criminal law, usually espionage or international terrorism, which falls within the federal law enforcement community’s mandate to investigate and prosecute. As a result, foreign counterintelligence investigations often overlap with law enforcement interests. To provide a statutory framework for electronic surveillance conducted within the United States for foreign intelligence purposes, the Congress, in 1978, enacted the Foreign Intelligence Surveillance Act (FISA). The legislative effort emerged, in part, from the turmoil that surrounded government intelligence agencies’ efforts to apply national security tools to domestic organizations during the 1970s. For example, congressional hearings identified surveillance abuses within the United States by intelligence agencies that were carried out in the name of national security. FISA was designed to strike a balance between the government’s need for intelligence information to protect the national security and the protection of individual privacy rights. In 1994, the Congress amended the 1978 act to include physical searches for foreign intelligence purposes under the FISA warrant procedures. Within DOJ, various components have responsibilities related to the investigation and prosecution of foreign intelligence, espionage, and terrorism crimes. The Criminal Division has responsibility for developing, enforcing, and supervising the application of all federal criminal laws, except those specifically assigned to other divisions. 
Within the Criminal Division, the Internal Security Section and the Terrorism and Violent Crime Section have responsibility for supervising the investigation and prosecution of crimes involving national security. Among such crimes are espionage, sabotage, and terrorism. The Office of Intelligence Policy and Review (OIPR) is, among other things, to assist the Attorney General by providing legal advice and recommendations regarding national security matters and is to approve the seeking of certain intelligence-gathering activities. OIPR represents the United States before the Foreign Intelligence Surveillance Court (hereinafter, the FISA Court). OIPR prepares applications to the FISA Court for orders authorizing surveillance and physical searches by U.S. intelligence agencies, including the FBI, for foreign intelligence purposes in investigations involving espionage and international terrorism and presents them for FISA Court review. When evidence obtained under FISA is proposed for use in criminal proceedings, OIPR is to obtain the FISA- required advance authorization from the Attorney General. In addition, in coordination with the Criminal Division and U.S. Attorneys, OIPR has the responsibility of preparing motions and briefs required in U.S. district courts when surveillance authorized under FISA is challenged. The FBI is DOJ’s principal investigative arm with jurisdiction over violations of more than 200 categories of federal crimes, including espionage, sabotage, assassination, and terrorism. To carry out its mission, the FBI has over 11,000 agents located primarily in 56 field offices and its headquarters in Washington, D.C. Among its many responsibilities, within the United States, the FBI is the lead federal agency for protecting the United States from foreign intelligence, espionage, and terrorist threats. The FBI’s National Security and Counterterrorism Divisions are the units responsible for countering these threats. 
To accomplish their task, the National Security and Counterterrorism Divisions engage in foreign intelligence and foreign counterintelligence investigations. Within the Judicial Branch, FISA established a special court (the FISA Court). The FISA Court, as noted previously, has jurisdiction to hear applications for and grant orders approving FISA surveillance and searches. The FISA Court is composed of seven district court judges from seven different districts who are appointed by the Chief Justice of the U.S. Supreme Court to serve rotating terms of no longer than 7 years. The Chief Justice also designates three federal judges from the district or appeals courts to serve on a Foreign Intelligence Surveillance Review Court. The Foreign Intelligence Surveillance Review Court was established to rule on the government’s appeals of FISA Court denials of government-requested surveillance and search orders. As noted previously, foreign counterintelligence and law enforcement investigations often overlap, but at the same time different legal requirements apply to each type of investigation. For intelligence and counterintelligence purposes, electronic surveillance and physical searches against foreign powers and agents of foreign powers in the United States are governed by FISA, as amended. FISA, among other things, contains requirements and a process for seeking electronic surveillance and physical search authority in investigations seeking to obtain foreign intelligence and counterintelligence information within the United States. For example, FISA permits surveillance only when the purpose of the surveillance is to obtain foreign intelligence information. FISA also requires prior judicial approval by the FISA Court for surveillance and searches. 
With respect to FBI foreign counterintelligence investigations, the FBI Director must certify, among other things, to the FISA Court that the purpose of the surveillance is to obtain foreign intelligence information and that such information cannot reasonably be obtained by normal investigative techniques. However, FISA also contains provisions permitting intelligence agencies to share with law enforcement intelligence information that they have gathered that implicates federal criminal violations. For federal criminal investigations, the issuance and execution of search warrants, for example, is generally governed by the Federal Rules of Criminal Procedure. In addition, electronic surveillance or wiretapping in criminal investigations is, in general, governed by title III of the Omnibus Crime Control and Safe Streets Act of 1968, as amended. The differing standards and requirements applicable to criminal investigations and intelligence investigations are evident with respect to electronic surveillance of non-U.S. persons, where the requisite probable cause standard under FISA differs from that required in a criminal investigation. In criminal investigations, the issuance of court orders authorizing electronic surveillance must, in general, be supported by a judicial finding of probable cause to believe that an individual has committed, is committing, or is about to commit a particular predicate offense. In contrast, FISA, in general, requires that a FISA Court judge find probable cause to believe that the suspected target is a foreign power or an agent of a foreign power, and that the places at which the surveillance is directed are being used, or are about to be used, by a foreign power or an agent of a foreign power. 
To determine what key factors affected coordination between the FBI and the Criminal Division, we interviewed DOJ officials, including officials from the Office of the Deputy Attorney General, OIPR, the Criminal Division, the Division’s Internal Security and Terrorism and Violent Crime Sections, the Office of Inspector General, and the FBI’s National Security and Counterterrorism Divisions and Office of General Counsel. We also reviewed congressional committee reports and hearing transcripts regarding intelligence coordination issues and the DOJ Inspector General’s July 1999 unclassified report on intelligence coordination problems related to DOJ’s campaign finance investigation. In addition, we reviewed the classified report of the Attorney General’s Review Team on the FBI’s handling of its investigation at the Los Alamos National Laboratory. To determine what policies, procedures, and processes are in place for coordinating foreign counterintelligence investigations that indicate possible criminal violations within appropriate DOJ units, we reviewed applicable laws, Executive Orders 12139 on Foreign Intelligence Electronic Surveillance and 12949 on Foreign Intelligence Physical Searches, and copies of existing guidance provided by DOJ and the FBI. We interviewed Criminal Division, OIPR, and FBI officials to determine the pertinent coordination policies, procedures, and processes in effect and their views on their effectiveness. In order to provide you with an unclassified report, we agreed with the Committee not to review specific cases to try to identify instances of compliance or noncompliance with the 1995 coordination procedures. To determine what actions DOJ has taken to address identified coordination problems and what concerns and impediments, if any, remain, we reviewed certain legal requirements pertaining to disseminating and safeguarding information from foreign counterintelligence investigations and criminal investigations. 
For foreign counterintelligence investigations, we reviewed FISA, as amended; relevant federal court cases; Executive Order 12333 on United States Intelligence Activities; and Congressional Research Service reports. For criminal investigations, we reviewed sections of the United States Code and Federal Rules of Criminal Procedure; federal court cases; and news articles related to espionage prosecutions. In addition, we obtained and reviewed congressional committee reports and hearing transcripts regarding intelligence coordination issues. We also reviewed the internal DOJ reports mentioned earlier: the DOJ Inspector General’s unclassified report on DOJ’s campaign finance investigation and the Attorney General’s Review Team’s classified report concerning the FBI’s Los Alamos National Laboratory investigation. Furthermore, we met with Criminal Division, OIPR, coordination working group, and FBI officials to discuss the proposed revisions to the July 1995 guidelines and any issues the working group was unable to resolve. During our review, decision memorandums containing recommendations concerning the coordination of FBI intelligence investigations with the Criminal Division, prepared by the coordination working group, remained draft internal documents. We were not provided and did not have the opportunity to review the working group’s documents. As such, our findings and conclusions relating to DOJ’s proposed actions and remaining impediments are based on testimonial evidence. To determine what mechanisms have been put into place to ensure compliance with intelligence coordination policies and procedures, we reviewed applicable OIPR and FBI internal policies and procedures. 
We also interviewed officials from the Office of the Deputy Attorney General, including the then Principal Associate Deputy Attorney General in charge of the intelligence coordination working group, OIPR, and the Office of the Inspector General, as well as FBI officials, including the General Counsel and representatives of the FBI’s Inspection Division. We performed our work from May 2000 to May 2001 in accordance with generally accepted government auditing standards. In June 2001, we requested comments on a draft of this report from the Attorney General. On June 21, 2001, we received written comments from the Acting Assistant Attorney General for Administration. The comments are discussed on pages 32 and 33 and reprinted in appendix II. DOJ also provided technical comments, which we have incorporated where appropriate. A key factor impeding coordination of foreign counterintelligence investigations involving the use or anticipated use of the FISA surveillance and search tools has been the FBI’s and OIPR’s concern about the possible consequences that could result should a federal court rule that the line between an intelligence and a criminal investigation had been crossed due to contacts and/or information shared between the FBI and the Criminal Division. Specifically, the FBI and OIPR were concerned over the consequences should a court find that the primary purpose of the surveillance or search had shifted from intelligence gathering to collecting evidence for criminal prosecution. While these concerns inhibited coordination, Criminal Division officials questioned their reasonableness and believe that they had an adverse effect on the strength of subsequent prosecutions. A further concern of FBI intelligence investigators, not necessarily related to the question of the primary purpose of the surveillance or search, has been the potential revelation of the FBI’s sources and methods during criminal proceedings. 
The consequences about which the FBI and OIPR were concerned included the potential (1) rejection of the FISA application or the loss of a FISA renewal and/or (2) suppression of evidence gathered using FISA tools, which, in turn, might lead to loss of the criminal prosecution. According to OIPR officials, differences of opinion existed among OIPR, the Criminal Division, FBI, and other DOJ officials regarding their perceptions of the likelihood that the FISA Court or another federal court might, upon review, find that the line between an intelligence and criminal surveillance or search had been crossed and, therefore, the primary purpose had shifted from intelligence gathering to a criminal investigation. Complicating the resolution of these differences has been DOJ’s disinclination to risk rejection of a FISA application or loss of a prosecution, for example, by requiring the FBI to more closely coordinate with the Criminal Division. The FBI has long recognized that the investigative tools FISA authorized were often the FBI’s most effective means to secure intelligence information. However, since the mid-1990s, FBI investigators, cautioned by OIPR, became concerned that their interaction with the Criminal Division regarding an investigation might result in the FISA Court denying a FISA application or the renewal of an existing FISA, or might limit the FBI’s options to seek the use of the FISA tools at a later date, should the FISA Court interpret these interactions as an indication that intelligence gathering was not, or was no longer, the primary purpose of the investigation. 
As a result, according to the Attorney General’s Review Team—the team established to review the FBI’s handling of the Los Alamos National Laboratory investigation—even in foreign counterintelligence investigations not involving FISA tools, the FBI and OIPR were reluctant to notify the Criminal Division of possible federal crimes, as they feared such contacts could be detrimental should they decide to subsequently seek the use of FISA tools. According to an Associate Deputy Attorney General, resolving these concerns is complicated because DOJ’s interactions with the FISA Court take place during FISA proceedings before the court. Introducing new policies or procedures (e.g., requiring greater coordination) during an investigation for which the court was considering a FISA application or renewal might result in the FISA Court rejecting that application. The official also said that DOJ officials did not want to take such a risk. Contacts between FBI intelligence investigators and the Criminal Division may also raise concerns with respect to the preservation of certain evidence in criminal prosecutions. As noted earlier, FISA provides that evidence of criminal violations gathered during an intelligence investigation may be shared with law enforcement and, for example, used in a criminal prosecution. Under the primary purpose test, most courts have held that information gathered using the FISA tools may be used in subsequent criminal prosecutions only so long as the primary purpose of the FISA surveillance or search was to obtain foreign intelligence information. According to Criminal Division officials, since FISA’s enactment, no court using the primary purpose test has upheld a challenge to the government’s use of FISA-obtained intelligence information for criminal prosecution purposes. 
However, OIPR and FBI officials expressed concern that a federal court could determine that the primary purpose of the surveillance or search was for a criminal investigation and could potentially suppress any FISA evidence gathered subsequent to that time. According to Criminal Division officials, the FBI’s and OIPR’s more restrictive interpretation of what could be shared with the Criminal Division stemmed from the application of the judicially created primary purpose test, articulated prior to the enactment of FISA. Most federal courts have adopted the primary purpose test in post-FISA cases. Under this test, most federal courts have held that foreign intelligence information gathered using FISA tools may be used in subsequent criminal proceedings so long as the primary purpose of the FISA surveillance or search was to obtain foreign intelligence information. These officials suggested that the application of the primary purpose test had not raised potential coordination problems between the FBI and the Criminal Division until the Aldrich Ames case. In 1994, Aldrich H. Ames, a Central Intelligence Agency official, was arrested on espionage charges for spying for the former Soviet Union and, subsequently, Russian intelligence. The FISA Court authorized electronic surveillance of the computer and software within the Ames residence. In addition, the Attorney General had authorized a warrantless physical search of the residence. At that time, FISA did not apply to physical searches. DOJ obtained a guilty plea from Ames, who was sentenced to life in prison without parole. Criminal Division and FBI officials said that some in DOJ were concerned that, had the Ames case proceeded to trial, early and close coordination between the FBI and the Criminal Division might have raised a question as to whether the primary purpose of the surveillance and searches of Ames’ residence had been a criminal investigation and not intelligence gathering. 
According to these officials, had this question been raised, a court might have ruled that information gathered using the FISA surveillance and/or the warrantless search be suppressed, thereby possibly jeopardizing Ames’ prosecution. To date, this issue remains a matter of concern to the FBI and OIPR. OIPR officials indicated that while such a loss had not occurred because Ames had pleaded guilty, the fear of such a loss, nonetheless, was real. Criminal Division officials consider OIPR’s and the FBI’s concern in the Ames case to be overly cautious. In their opinion, the coordination that occurred during the investigation had been carried out properly and, had the case been tried, any challenges to the evidence gathered would have been denied and the prosecution would have been successful. Moreover, Criminal Division officials said that the FBI’s and OIPR’s concerns stemmed from an unduly strict interpretation of the primary purpose test. As noted earlier, the primary purpose test was articulated prior to FISA. Division officials cited the opinion of the Attorney General’s Review Team, which stated, in general, that FISA was not a codification of the primary purpose test and that FISA, itself, with all its attendant procedures and safeguards, was to be the measure by which such surveillance and searches were to be judged. While recognizing that the FBI’s and OIPR’s concerns were well-intentioned, Criminal Division officials said that as a result of these concerns the primary purpose test had been, in effect, interpreted by the FBI and OIPR to mean “exclusive” purpose. OIPR officials did not dispute this characterization of OIPR’s historical concerns relative to primary purpose. However, these officials said that OIPR’s current position regarding FBI and Criminal Division coordination was based on their understanding of the FISA Court’s position on the primary purpose issue relating to such coordination. 
As a result, Division officials contend that they have been unable to provide advice that could have helped the FBI preserve and enhance the criminal prosecution option. For example, the Division could advise the FBI on ways to preserve its intelligence sources against compromise during a subsequent criminal trial. Division officials further contend that their involvement in the investigation can help to ensure that the case the government presents for prosecution is the strongest it can produce. According to OIPR, whenever the government decides to pursue both national security and law enforcement investigations simultaneously, it may have to decide, in some instances, whether, or at what point, one of the investigations must be ended to preserve the integrity of the other. OIPR officials said that the possibility of intelligence sources and methods being exposed, if evidence gathered during an intelligence investigation is later used and challenged in a criminal prosecution, remains a concern of FBI investigators. If the intelligence source or method is deemed to be of great value, DOJ may have to decide whether protection of the source or method outweighs the seriousness of the crime and, accordingly, decline prosecution. As discussed previously, the primary legislation governing intelligence investigations of foreign powers and their agents in the United States is FISA. FISA also provides, however, that intelligence information implicating criminal violations may be shared with law enforcement. FISA further contains provisions to help maintain the secrecy of lawful counterintelligence sources and methods where such information is used in a criminal proceeding. 
Specifically, the act provided that where FISA information is used, introduced, or disclosed in a trial and the Attorney General asserts that disclosure of such information in an adversary hearing would harm the national security of the United States, the Attorney General may seek court review, without the presence of defense counsel, as to whether the surveillance or search was lawfully authorized and conducted. OIPR officials emphasized that while the act may provide for such a review, a judge may decide that the presence of defense counsel is necessary. Furthermore, officials asserted that the presence of the defendant’s attorney raises the risk that classified information reviewed during the proceeding could subsequently be revealed, despite these proceedings being subject to security procedures and protective orders. Consequently, they added that intelligence investigators might be reluctant to share with the Criminal Division evidence of a possible federal crime that had been gathered during an intelligence investigation. Stemming, in part, from concerns raised over the timing and extent of coordination on the Aldrich Ames case, the Attorney General in July 1995 established policies and procedures for coordinating FBI foreign counterintelligence investigations with the Criminal Division. One purpose of the 1995 procedures was to ensure that DOJ’s criminal and counterintelligence functions were properly coordinated. However, according to Criminal Division officials and conclusions by the Attorney General’s Review Team, rather than ensuring proper coordination, problems arose soon after the Attorney General’s 1995 procedures were promulgated. As discussed, those problems stemmed from the FBI’s and OIPR’s concerns about the possible consequences that could damage an investigation or prosecution should a court make an adverse ruling on the primary purpose issue. 
In January 2000, the Attorney General promulgated coordination procedures in addition to the 1995 procedures. These procedures were promulgated to address problems identified by the Attorney General’s Review Team during its review of the FBI’s investigation of the Los Alamos National Laboratory. Criminal Division officials believed that the 2000 procedures had helped to improve coordination, especially for certain types of foreign counterintelligence investigations. According to DOJ officials, following the conviction of Aldrich Ames, OIPR believed that the close relationship between the FBI and the Criminal Division had come close to crossing the line between intelligence and criminal investigations, thereby risking a decision against the government if a court had applied the primary purpose test. To address the concerns raised, in part, by the FBI’s contacts with the Criminal Division in the Ames case, the Attorney General promulgated coordination procedures on July 19, 1995. The purposes of the 1995 procedures were to establish a process to properly coordinate DOJ’s criminal and counterintelligence functions and to ensure that intelligence investigations were conducted lawfully. To accomplish their coordination purpose, the 1995 procedures, among other things, established criteria for when and how contacts between the FBI and the Criminal Division were to occur in foreign counterintelligence investigations. The procedures identified the circumstances under which the FBI was to notify the Criminal Division and set forth procedures to govern subsequent coordination arising from the initial contact. 
In investigations involving FISA, the notification procedures provided that “If in the course of an… investigation utilizing electronic surveillance or physical searches under the Foreign Intelligence Surveillance Act…facts or circumstances are developed that reasonably indicate that a significant federal crime has been, is being, or may be committed, the FBI and OIPR each shall independently notify the Criminal Division.” Following the Criminal Division’s notification, the procedures required the FBI to provide the Criminal Division with the facts and circumstances, developed during its investigation, that indicated significant criminal activity. After the initial notification, the FBI and the Criminal Division could engage in certain substantive consultations. The procedures allowed the Criminal Division to provide the FBI guidance to preserve the criminal prosecution option; however, the procedures also established limitations on consultations between the FBI and the Criminal Division. To protect the intelligence purpose of the investigation, the procedures limited the type of advice the Criminal Division could provide the FBI in cases employing FISA surveillance or searches. Specifically, the procedures prohibited the Division from instructing the FBI on the operation, continuation, or expansion of FISA surveillance or searches. Additionally, the FBI and the Criminal Division were to ensure that the Division’s advice did not inadvertently result in either the fact or appearance of the Division directing the foreign counterintelligence investigation toward, or controlling it for, law enforcement purposes. Criminal Division officials indicated that they believed the procedures permitted the Division to advise the FBI on ways to preserve or enhance evidence for subsequent criminal prosecutions. 
The officials said that the Criminal Division might be able to advise the FBI on ways to preserve its intelligence sources, for example, by utilizing other sources to develop the information needed in a prosecution without risking the revelation of its more valuable sources. Moreover, the Criminal Division might also be able to advise the FBI on ways to enhance the evidence needed for prosecution, for example, by developing information that is needed to prove the elements of a criminal offense. “It is critical that the value of the FBI’s most sensitive and productive investigative techniques not be affected by their use for purposes for which they were not principally intended. Careful coordination in these matters is essential in order to avoid the inappropriate characterization or management of intelligence investigations as criminal investigations, the potential devaluation of intelligence techniques, or the loss of prosecutive opportunities.” According to information provided by FBI officials, after issuance of the procedures, agents received training on them. The FBI’s Office of General Counsel developed presentations that, according to FBI officials, were provided both to new agent trainees at the FBI’s Quantico, VA, training facility and to experienced special agents. Additional training on the procedures continued in subsequent years, and, on occasion, agents were sent reminders on the importance of reporting evidence of significant federal crimes to FBI headquarters so that headquarters could properly coordinate with the Criminal Division. According to the Attorney General’s Review Team’s report, coordination problems arose almost immediately following the implementation of the Attorney General’s 1995 procedures. Rather than ensuring that DOJ’s criminal and counterintelligence functions were properly coordinated, as intended, the implementation and interpretation of the procedures triggered coordination problems. 
Those problems stemmed from concerns FBI and OIPR officials had over the possible legal consequences, discussed above, should the FISA Court or another federal court rule that the primary purpose of the surveillance or search was for criminal investigation purposes rather than intelligence gathering. According to Criminal Division officials, coordination of foreign counterintelligence investigations dropped off significantly following the implementation of the 1995 procedures. The Attorney General’s Review Team reported, and Criminal Division officials confirmed, that when the FBI did notify the Criminal Division about its foreign counterintelligence investigations, the notifications tended to occur near the end of the investigation. As a result, the Division played little or no role during the investigations in decisions that could have affected the success of potential subsequent criminal prosecutions. An FBI official acknowledged that coordination concerns surfaced soon after the implementation of the Attorney General’s 1995 procedures. According to the official, after the FBI contacted OIPR about an investigation that needed to be coordinated with the Criminal Division, OIPR would determine whether and when such coordination should occur. Moreover, according to OIPR and FBI officials, when OIPR did permit coordination to take place, it participated in the meetings to help ensure that the contacts between the agents and the prosecutors did not jeopardize the primary intelligence purpose of FISA’s search and surveillance tools. Thus, OIPR became the gatekeeper for complying with the 1995 procedures. While the 1995 procedures allowed OIPR to participate in consultations between the FBI and the Criminal Division, the procedures did not set out a gatekeeper role for OIPR. Moreover, the procedures permitted the Criminal Division to provide the FBI guidance aimed at preserving its criminal prosecution option. 
Subsequently, DOJ established working groups in 1996 and again in 1997 to address the coordination problems and the issues underlying FBI, OIPR, and Criminal Division concerns, but these groups were unsuccessful in resolving the concerns. Remedial actions to address the coordination issues were not taken until, as discussed below, (1) another working group was established in August 1999, specifically to address the coordination of intelligence information among the FBI, OIPR, and the Criminal Division and (2) the Attorney General’s Review Team submitted interim recommendations to the Attorney General in October 1999. In January 2000, based on the Attorney General’s Review Team’s interim recommendations, the coordination working group recommended to the Attorney General additional procedures to address the FBI/Criminal Division coordination issues, and the Attorney General approved them that month. These procedures were designed to stimulate increased communication between the FBI and the Criminal Division for investigations that met the notification criteria contained in the 1995 procedures. The procedures, in part, required the FBI to provide the Criminal Division copies of certain types of foreign counterintelligence case summary memorandums involving U.S. persons. In addition, the procedures established a briefing protocol whereby, monthly, FBI National Security Division and Counterterrorism Division officials were to select the cases that they judged to be their most critical and brief the Principal Associate Deputy Attorney General and the OIPR Counsel on them. These officials together formed what DOJ officials termed a “core group.” During these “core group critical-case briefings,” Criminal Division officials were to be briefed on those cases that the core group agreed met the criteria established in the 1995 procedures for Criminal Division notification. 
According to FBI officials, one criterion used to decide which cases to include in the critical-case briefings was whether a suspected felony violation was involved. The briefing protocol also established procedures for subsequent briefings of pertinent Criminal Division section chiefs and allowed the Criminal Division to follow up with the FBI in those critical cases in which the Division believed it needed more information. According to OIPR and Criminal Division officials, OIPR maintained its gatekeeper role at these briefings. However, in October 2000, the core group meetings and the briefing protocol were discontinued. According to DOJ officials, the briefings were discontinued because some participants believed that they somewhat duplicated the sensitive-case briefings that the FBI provided quarterly to the Attorney General and Deputy Attorney General. Appendix I provides a chronology of key events related to the coordination issue. Subsequent to its 1999 interim recommendations, the Attorney General’s Review Team, in May 2000, issued its final report to the Attorney General. In its report, the Review Team raised additional coordination issues and provided recommendations to resolve them. To address these issues and recommendations, the coordination working group developed a decision memorandum in October 2000 for the Attorney General’s approval. According to working group officials, the memorandum recommended revisions to the 1995 procedures and included decision options for the issues on which the working group could not reach agreement, including an option advocated by the Office of the Deputy Attorney General. The primary issue on which the coordination working group could not agree reflects differences of opinion among the Criminal Division, OIPR, and the FBI as to what advice the Division may provide the FBI without jeopardizing either the intelligence investigation or any resulting criminal prosecution. 
This issue reflects the same underlying concern—judicially acceptable contacts and information sharing between the FBI and the Criminal Division—that affected proper implementation of the 1995 procedures and earlier disagreements over coordination in foreign counterintelligence FISA investigations. As of the completion of our review, no decision on the memorandum had been made; thus, the issues addressed in the memorandum remain. These include the advice issue, varying interpretations of whether certain criminal violations are considered “significant violations” that would trigger the Attorney General’s coordination procedures, and other issues. Another issue that could impede coordination, although it was not addressed in the memorandum, is the adequacy and timeliness of the FBI’s case summary memorandums. In May 2000, the Attorney General’s Review Team sent to the Attorney General its final report on, and recommendations to address, problems identified during its review of the FBI’s investigation of possible espionage at the Los Alamos National Laboratory. The previously established coordination working group, which was led by the Principal Associate Deputy Attorney General and included representatives from the FBI, OIPR, and the Criminal Division, was given responsibility for reviewing the report and addressing those of the Review Team’s recommendations dealing with coordination between the FBI and the Criminal Division. In addition to the Review Team’s report, the coordination working group considered intelligence coordination issues raised in the DOJ Office of Inspector General’s report on DOJ’s campaign finance investigation. On the basis of its deliberations, the coordination working group developed a decision memorandum and sent it to the Attorney General for approval in October 2000. According to working group officials, the group was able to reach consensus on most issues. 
For example, these officials said that the group had agreed to recommend that, for clarity, the reference to the phrase “significant federal crime” in the 1995 procedures be changed to “federal felony,” since they believed that the term “significant” was too ambiguous and that the term “felony” would be less open to interpretation because the elements of any felony violation are set out in statute. The working group officials told us that on issues on which the group could not reach consensus, the memorandum presented options, including an option advocated by the Office of the Deputy Attorney General. Specifically, working group officials indicated that the group could not reach a consensus regarding the permissible advice the Criminal Division should be allowed to provide to intelligence investigators. Although the working group agreed that the Criminal Division should play an active role in foreign counterintelligence investigations employing FISA tools, it could not agree on the type of advice the Criminal Division should be allowed to provide. For example, OIPR officials indicated that they believed that the FISA Court held a restrictive view on the issue of notification and advice and that this view would affect the FISA Court’s decisions to authorize a FISA surveillance or search. In contrast, a working group official said that the Criminal Division and the Attorney General’s Review Team held less restrictive views on the notification and advice issues. Criminal Division officials said that FISA did not prohibit contact between investigators and prosecutors. They said that it was inconceivable that the Division should be left in the dark in these cases, which they characterized as being of extraordinary importance. They argued that in these cases effective coordination was important to develop the best case possible to bring to prosecution. 
In its report, the Attorney General’s Review Team asserted that there should be little restriction on the advice the Criminal Division should be allowed to provide. The working group left the matter for the Attorney General to decide. After the Attorney General took no action on the memorandum between October and December 2000, the working group again reviewed its positions for possible areas of consensus and made minor changes to the memorandum, which it resubmitted to the Attorney General in December. Since the basic positions of the working group participants did not change materially, the outstanding issues remained areas of disagreement. The Attorney General did not make a decision on the recommendations before leaving office on January 20, 2001. In March 2001, the decision memorandum was sent to the Acting Deputy Attorney General for decision. On the basis of the Acting Deputy Attorney General’s review, a new core group process was implemented. As of the completion of our review, no other action had been taken on the memorandum or the recommendations therein. Despite reported improvements in coordination between intelligence investigators and criminal prosecutors, in part as a result of the implementation of the January 2000 procedures, several of the same coordination impediments remain. Some of these impediments stemmed from the longstanding differences of opinion regarding possible adverse judicial interpretations of what might be acceptable contacts and information sharing between the FBI and the Criminal Division. Also, Criminal Division officials expressed some concerns regarding the case summary memorandums provided by the FBI. 
Despite the efforts of the coordination working group, differences of opinion remained regarding the possible consequences of a potential adverse judicial interpretation of the notification of the Criminal Division and the type of advice it may provide without crossing the line between an intelligence investigation and a criminal investigation. Furthermore, since the Attorney General had not approved the memorandum, the working group’s recommendation to clarify the language in the 1995 procedures that triggers the Criminal Division’s notification was not implemented and, therefore, that issue remains. OIPR, FBI, and Criminal Division officials have continued to differ strongly in their interpretations as to when the Criminal Division should be notified of FBI intelligence investigations involving suspected significant federal crimes and what type of advice the Criminal Division is permitted to provide FBI intelligence investigators without compromising the primary purpose of the intelligence surveillance or search (i.e., risking the loss of a FISA application or renewal, or of a future FISA request). Specifically, the issue revolved around the officials’ different perceptions of how restrictively the FISA Court might interpret Criminal Division notification or any subsequent advice the Division may provide. Working group officials indicated that the pertinent parties continued to disagree on procedural issues, such as the type of advice that the Criminal Division should be allowed to give. For example, a working group official suggested that numerous categories of the types of advice the Criminal Division can provide could be created. However, such distinctions made it difficult to determine what advice could be provided under which circumstances without risking the loss of FISA authority. According to working group officials, these differences were left unresolved in the December 2000 decision memorandum. 
In addition, the language indicating when the Criminal Division is to be notified remained an issue. Although the working group’s December 2000 memorandum recommended clarifying the language in the 1995 procedures that triggered the Criminal Division’s notification by changing the term “significant federal crime” to “federal felony,” absent the Attorney General’s approval the “significant federal crime” language remains in effect. OIPR officials said that the coordination working group members had agreed to the proposed change in language in order to make it clearer when the Criminal Division was to be notified. Although the working group members agreed, our interviews with some FBI officials responsible for recommending that the Criminal Division be notified indicated that they continued to use the “significant” threshold and that there were still disagreements as to its meaning. For example, FBI Counterterrorism Division officials told us that there still were disagreements over what constituted “significant” and, therefore, differences of opinion as to when the Criminal Division should be notified. The officials said that these differences might have to be resolved at the highest levels of DOJ and the FBI. These FBI officials remained cautious regarding contacts between FBI intelligence investigators and the Criminal Division, preferring a higher threshold. Although addressed in the working group’s memorandum, this issue remains pending action by the Attorney General. According to Criminal Division officials, while the 2000 procedures had increased intelligence coordination, questions and concerns remained regarding the adequacy of FBI case summary memorandums for the Criminal Division’s purposes and the timeliness of the memorandums. Criminal Division officials said that they had questions as to whether some FBI case summary memorandums were sufficiently comprehensive to indicate criminal violations. 
They said that while it is relatively easy to discern from some FBI case summary memorandums whether criminal violations have been committed, from others it is not. OIPR officials also noted that it was not always clear from the way FBI case summary memorandums were written whether intelligence investigators had reason to believe that the notification criteria established by the Attorney General’s 1995 guidelines had been triggered. According to the Criminal Division and OIPR officials, the case summary memorandum format did not require agents to address whether a possible criminal violation was implicated, nor did it contain a specific section for doing so. Criminal Division officials also asserted that, for their purposes, the case summary memorandums were not always timely. Criminal Division officials indicated that there could be a significant time lag between the time when a significant criminal violation was revealed or investigative actions in a case occurred and the time when the memorandums were provided to the Division. They added that the timeliness of the memorandums could be a problem because events can often overtake an investigation. For example, the officials said that should an investigative target be planning to go overseas, the Criminal Division would like to have the information in a timely manner so that it can assess its prosecutorial equities against the risk that the target may flee the country. Division officials said that the Division receives only an initial memorandum, within 90 days after an investigation has been opened, and annual memorandums thereafter. Thus, the memorandums the Criminal Division receives may not be timely enough to protect its prosecutorial equities in a case. Whatever impediments remain, the question exists as to how, and how often, the lack of timely coordination has adversely affected DOJ prosecutions. 
In its report on the FBI’s handling of the Los Alamos National Laboratory investigation, the Attorney General’s Review Team found that, by not coordinating with the Criminal Division at an earlier point, the FBI’s intelligence investigation might have been harmed and that, had the Criminal Division been allowed to provide advice, it could have helped the FBI to better develop its case. Of the intelligence investigations of which they were aware since the 1995 guidelines were implemented, Criminal Division officials were able to identify one other case in which the prosecution may have been impaired by poor and untimely coordination. Regardless of the number of prosecutions that may have been adversely affected by poor or untimely coordination, Division officials argued that, due to the significance of these types of cases, it was important that the strongest cases be developed and brought forward for prosecution. The officials said that the practical effect of not being involved during an investigation was that the Criminal Division was not aware of interviews conducted or approaches made, such as certain types of undercover operations, that could have helped ensure that prosecutorial equities were preserved or enhanced. Moreover, commenting on the adverse effects of being informed about investigations at the last minute, the officials said that such late notice could be problematic because it takes time to prepare cases for prosecution; for example, more than 2 or 3 days are needed to prepare search warrants or obtain orders to freeze assets. In addition to the impediments noted above, Criminal Division officials continued to question whether all investigations that met the criteria of the 1995 procedures were being coordinated. Such concerns indicate that an oversight mechanism to help ensure compliance with the Attorney General’s 1995 coordination procedures was lacking. 
Office of the Deputy Attorney General and FBI officials acknowledged that, historically, no mechanisms had been created specifically to ensure compliance with the Attorney General’s 1995 procedures. Recently, two mechanisms have been created to help ensure Criminal Division notification. However, both mechanisms lack written policies or procedures to institutionalize them and help ensure their perpetuation. Criminal Division officials said that while they knew which investigations were being coordinated, they did not know whether any existed about which they were not being notified. Furthermore, Division officials said they were still concerned that the FBI and OIPR might not notify the Division or provide it with information in sufficient time for the Division to provide appropriate advice to the investigation or protect its prosecutorial equities in the case. Division officials also questioned whether foreign counterintelligence investigations involving possible federal criminal violations were being closed without the Criminal Division being notified, thereby potentially affecting the Division’s ability to exercise its prosecutorial equities in those cases. These concerns indicate that an oversight mechanism to ensure compliance with the Attorney General’s coordination procedures was lacking. Historically, DOJ had not developed oversight mechanisms specifically targeted at ensuring compliance with the 1995 notification requirements. DOJ officials noted that, ordinarily, DOJ expects components to comply with the Attorney General’s directives. According to the former Principal Associate Deputy Attorney General, no mechanism existed to provide systematic oversight of compliance with the notification procedures. 
Other than its normal oversight of investigations, such as periodic supervisory case reviews and reviews of FISA applications, the FBI did not have a specific or independent oversight mechanism that routinely checked whether FBI investigations complied with the 1995 procedures. FBI Inspection Division officials said that every 3 years the Inspection Division is to review the administration and operation of FBI headquarters and field offices, including whether or not policies and guidelines were being followed. The officials said that in the course of field office inspections, certain aspects of investigations employing FISA surveillance or searches are reviewed, including whether the applications were properly prepared and accurately supported and whether there were appropriate field office administrative checks of the process. However, the Inspection officials said that, where such investigations had detected possible criminal violations, compliance with the Attorney General’s coordination procedures was not an issue that the Inspection Division reviewed. Thus, the FBI had no assurance that foreign counterintelligence investigations that met the criteria for notification established by the 1995 procedures were being coordinated with the Criminal Division. Since mid-2000, two new mechanisms have been created to help better ensure that FBI foreign counterintelligence investigations meeting the Attorney General’s requirements for notification are coordinated with the Criminal Division. First, in mid-2000, OIPR implemented a practice aimed at identifying, from FBI-submitted investigation summaries, those investigations that met the notification criteria established in the 1995 procedures. Then, in April 2001, DOJ reconstituted the core group and gave it a broader role in overseeing coordination issues and in better ensuring Criminal Division notification. However, these mechanisms have not been institutionalized in writing and, thus, their perpetuation is not ensured. 
Federal internal control standards require that internal controls be documented. OIPR officials said that, based in part on the Attorney General’s Review Team’s findings and to ensure greater compliance with the 1995 procedures, OIPR managers began emphasizing at weekly meetings with OIPR attorneys, and in a February 2001 e-mail reminder to them, the importance of coordinating relevant intelligence investigations with the Criminal Division. According to OIPR officials, OIPR attorneys were instructed that when they reviewed FBI FISA applications, case summary memorandums, or other FBI communications, they were to be mindful of OIPR’s obligation to identify and report to the Criminal Division FBI investigations involving appropriate potential violations. When the OIPR attorneys identify FBI investigations in which there is evidence of violations that meet the criteria established in the 1995 guidelines, they are to notify OIPR management. Management then is to contact both the FBI and the Criminal Division to alert them that, in OIPR’s opinion, the notification requirement has been triggered. Then, whenever the FBI and the Criminal Division meet to coordinate the intelligence investigation, OIPR attends to help ensure that the primary purpose of the surveillance or search is not violated. OIPR officials believed that the practice has been working well. In commenting on improved coordination, both the Criminal Division Deputy Assistant Attorney General responsible for intelligence matters and the Chief of the FBI’s International Terrorism Section noted instances where OIPR had contacted them to alert them to investigations that met the criteria established by the Attorney General’s coordination procedures. 
As of April 2001, the Criminal Division Deputy Assistant Attorney General estimated that since OIPR had initiated its practice, it had contacted the Division about approximately a dozen FBI investigations that OIPR believed met the Attorney General’s requirements for notification. In April 2001, the Acting Deputy Attorney General decided to reconstitute the core group and to give it a broader role in overseeing coordination issues. The new core group, like its predecessor, is composed of several officials from the Office of the Deputy Attorney General, an official representing the Office of Intelligence Policy and Review, and the Assistant Directors of the FBI’s National Security and Counterterrorism Divisions. Whereas the previous core group’s role was to decide which of the FBI’s most critical cases met the requirements of the Attorney General’s coordination procedures and needed to be coordinated with the Criminal Division, the new core group’s role is broader. According to an Associate Deputy Attorney General and core group member, the new group is to be responsible for deciding whether particular FBI investigations meet the requirements of the coordination procedures and for identifying for the Attorney General’s attention any cases involving extraordinary situations where compliance with the guidelines requires the Attorney General’s consideration. According to the Associate Deputy Attorney General, the FBI is to bring to the core group’s attention any investigation in which it is not clear that the Attorney General’s procedures have been triggered. For example, should it be unclear during an FBI investigation whether a criminal violation constitutes a significant federal crime, as indicated in the procedures, the FBI is to bring the matter to the core group for resolution. 
Thus, this is a much broader scope of responsibility than that of the prior core group, which considered the need for coordination only in those critical cases that were judgmentally selected by the FBI. Furthermore, the core group also is to be responsible for identifying for the Attorney General’s attention those extraordinary situations where the FBI believes there may be good reason not to notify the Criminal Division. The Associate Deputy Attorney General expected the number of such extraordinary situations brought to the core group to be extremely small. While both mechanisms, if implemented properly, should help to ensure notification of the Criminal Division, neither mechanism has been written into policies or procedures. OIPR’s Counsel pointed out that while OIPR would try to ensure better coordination by employing this practice, doing so was not a part of OIPR’s mission; OIPR’s priority was to make sure that the FBI had what it needed to protect national security. She added that ensuring coordination could not be a priority for OIPR without additional attorney resources. OIPR’s Counsel further said that OIPR frequently has had its hands full trying to process requests for FISA surveillance and searches without having to worry about the criminal implications of those cases. She noted that over the last few years, the FBI has received a significant number of additional agent resources and has increased its efforts to combat terrorism, espionage, and foreign intelligence gathering. As a result, FISA requests had increased significantly, while the OIPR resources needed to process those requests had not kept pace. While the practice may have worked well to date, it has not been put into writing and, thus, has not been institutionalized. 
On the basis of our conversations with OIPR, Criminal Division, and FBI officials, the extent to which OIPR has allowed coordination and advice to occur, currently and in the past, has varied depending upon the views and convictions of the Counsel responsible for OIPR at the time. As OIPR’s coordination practices have varied over the years, the perpetuation of the current practice could depend on future Counsels’ views on the coordination issue and, more importantly, on how restrictively they believe the FISA Court views coordination with the Criminal Division. Likewise, the core group has not been institutionalized. Although at the time of our review it had met on two occasions since its creation, according to the Associate Deputy Attorney General, there has been no written documentation establishing the core group or defining its role and responsibilities. Federal internal control standards require that internal controls be clearly documented and that such documentation appear in management directives, administrative policies, or operating manuals. Differing interpretations within DOJ of the adverse consequences that might result from following the Attorney General’s 1995 coordination procedures for counterintelligence investigations involving FISA surveillance and searches have inhibited the achievement of one of the procedures’ intended purposes—to ensure that DOJ’s criminal and counterintelligence functions were properly coordinated. These interpretations resulted in less coordination. Additional procedures implemented in January 2000, requiring the sharing of certain FBI investigative case summaries, creating a core group, and instituting the core group critical-case briefing protocol, helped to improve the situation by making the Criminal Division aware of more intelligence investigations with possible criminal implications. Subsequently, however, the core group and the critical-case briefing protocol were discontinued. 
However, in April 2001, a revised core group was created with a broader coordination role. It is too early to tell how effective a mechanism the new core group process will be for overseeing compliance with the notification requirement. Nevertheless, other impediments remain. The differing interpretations constitute the main impediment to coordination. Intelligence investigators fear that the FISA Court or another federal court could find that the Criminal Division’s advice to the investigators altered the primary intelligence purpose of the FISA surveillance or search. Such a finding could lead to adverse consequences for the intelligence investigation or the criminal prosecution. Because such cases involve highly sensitive national security issues, this is no small matter, and caution is warranted. However, this longstanding issue has been reviewed at high levels within DOJ on multiple occasions, and Criminal Division officials believe the concerns, while well intentioned, are overly cautious given the procedural safeguards FISA provides. While the problems underlying the lack of coordination have been identified, the solutions to these problems are complex and involve risk. These solutions require balancing legitimate but competing national security and law enforcement interests. On the one hand, some risk and uncertainty will likely remain regarding how the FISA Court or another federal court might, upon review, interpret the primary purpose of a particular surveillance or search in light of notification of the Criminal Division and the subsequent advice it provided. On the other hand, by not ensuring timely coordination on these cases, DOJ may place at risk the government’s ability to bring the strongest possible criminal prosecution. Therefore, a decision is needed to balance and resolve these conflicting national security and law enforcement positions. 
Beyond resolving these differences, DOJ and the FBI can take several actions to better ensure that possible criminal violations are identified and reported and that mechanisms to ensure compliance with the notification requirements of the Attorney General’s 1995 procedures are institutionalized. Such actions could facilitate the coordination of DOJ’s counterintelligence and prosecutorial functions. To facilitate better coordination of FBI foreign counterintelligence investigations meeting the Attorney General’s coordination criteria, we recommend that the Attorney General establish a policy and guidance clarifying his expectations regarding the FBI’s notification of the Criminal Division and the types of advice that the Division should be allowed to provide the FBI in foreign counterintelligence investigations in which FISA tools are being used or their use is anticipated. Further, to improve coordination between the FBI and the Criminal Division by ensuring that investigations that indicate a criminal violation are clearly identified and by institutionalizing mechanisms to ensure greater coordination, we recommend that the Attorney General take the following actions: 1. Direct that all FBI memorandums sent to OIPR summarizing investigations or seeking FISA renewals contain a section devoted explicitly to identifying any possible federal criminal violation meeting the Attorney General’s coordination criteria, and that those memorandums of investigations meeting the criteria for Criminal Division notification be coordinated with the Division in a timely manner. 2. Direct the FBI Inspection Division, during its periodic inspections of foreign counterintelligence investigations at field offices, to review compliance with the requirement that case summary memorandums sent to OIPR specifically address the identification of possible criminal violations. 
Moreover, where field office case summary memorandums identified reportable instances of possible federal crimes, the Inspection Division should assess whether the appropriate headquarters unit properly coordinated those foreign counterintelligence investigations with the Criminal Division. 3. Issue written policies and procedures establishing the roles and responsibilities of OIPR and the core group as mechanisms for ensuring compliance with the Attorney General’s coordination procedures. In written comments on a draft of this report, the Acting Assistant Attorney General for Administration, responding for Justice, stated that the Department has taken full or partial action on two of our recommendations. Concerning our recommendation to institutionalize OIPR’s role and responsibilities for ensuring compliance with the Attorney General’s coordination procedures, the Acting Counsel for Intelligence Policy on June 12, 2001, issued a memorandum to all OIPR staff. That memorandum formally articulated OIPR’s policy of notifying the FBI and the Criminal Division whenever OIPR attorneys identify foreign counterintelligence investigations that meet the requirements established by the Attorney General for coordination. We believe this policy should help perpetuate OIPR’s mechanism for ensuring compliance with the 1995 coordination procedures beyond any changes in OIPR management. Moreover, establishing a written policy places the Department in compliance with the documentation standard delineated in our “Standards for Internal Control in the Federal Government.” Concerning our recommendation regarding the FBI’s Inspection Division, the Deputy Attorney General directed the FBI to expand the scope of its periodic inspections in accord with our recommendation or explain why it is not practical to do so and, if not, to suggest alternatives. 
While this is a step in the right direction, full implementation of the recommendation will depend on whether the FBI can expand the scope of its inspections, or develop acceptable alternatives, to address coordination of foreign intelligence investigations where federal criminal violations are implicated. This, in turn, will depend on the extent to which the FBI case summary memorandums seeking FISA renewals, or whatever medium is subsequently used to accomplish that purpose, contain a separate section indicating possible federal criminal violations. Concerning our recommendation that the Attorney General establish a policy and guidance clarifying his expectations regarding the FBI’s notification of the Criminal Division and the types of advice the Division should be allowed to provide, DOJ, citing the sensitivity and difficulty of the issue, said that the Attorney General continues to review the possibility of amending the July 1995 coordination procedures. Our report recognizes the complexity of the issue and DOJ’s concerns about the uncertainty that any change in the procedures would create regarding how the courts may view such changes in their rulings. Nevertheless, as we pointed out, this issue has been longstanding, and the concerns it has generated among some officials have inhibited the achievement of one of the intended purposes of the procedures, that is, to ensure that DOJ’s criminal and counterintelligence functions were properly coordinated. Because such coordination can be critical to the success of both counterintelligence investigations and criminal prosecutions, the issue needs to be resolved as soon as possible. We remain concerned that delays in resolving these issues could have serious adverse effects on critical cases involving national security issues. 
Concerning our two remaining recommendations—(1) that all FBI memorandums sent to OIPR summarizing investigations or seeking FISA renewals contain a section specifically devoted to identifying federal criminal violations and (2) that the Attorney General institutionalize the role of the core group—DOJ said that they were being reviewed, but offered no time frame for their resolution. With respect to other points raised in Justice’s comments, we have incorporated in our report, where appropriate, the Department’s technical comments concerning our discussion of the primary purpose test and the courts’ views on it. Regarding the Department’s point that it is probably more accurate to divide the concept of coordination into an information-sharing component and an advice-giving component, we believe our report adequately differentiates between the two concepts and that we accurately report that the issue concerning the type of advice the Criminal Division can provide has been the primary stumbling block to better coordination. Thus, we made no change regarding this matter. Moreover, while the Department wrote that all relevant Department components agree that information sharing is usually appropriate for all felonies, we found, and our report notes, that the timing of the information sharing has been an issue. Notifications tended to occur near the end of the investigation, with the Criminal Division playing little or no role in decisions that could affect the success of potential subsequent prosecutions. Even with the later procedural changes to coordination, the Criminal Division still had concerns about the timeliness issue. In this regard, the actions DOJ said it has taken in response to our report and our recommendation concerning FBI case summary memorandums, if implemented, should help improve coordination timeliness. 
As agreed with your office, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will provide copies of this report to the Chairman of the Committee on Governmental Affairs; the Chairmen and Ranking Minority Members of the Committee on the Judiciary and the Select Committee on Intelligence, United States Senate; the Chairmen and Ranking Minority Members of the Committee on Government Reform, the Committee on the Judiciary, and the Permanent Select Committee on Intelligence, House of Representatives; the Attorney General; the Acting Director of the Federal Bureau of Investigation; and the Director of the Office of Management and Budget. We will also make copies available to others on request. If you should have any questions about this report, please call Daniel C. Harris or me at (202) 512-8777. Key contributors to this report were Robert P. Glick, Barbara A. Stolz, Jose M. Pena III, and Geoffrey R. Hamilton. The following table shows key events relating to coordination of FBI foreign counterintelligence investigations with the Criminal Division. The following are GAO’s comments on the Department of Justice’s letter dated June 21, 2001. 1. See the “Agency Comments and Our Evaluation” section. 2. DOJ suggested in its comments that we address the question of whether the 1995 coordination procedures were being applied correctly. As we noted in the scope and methodology section of this report, as agreed with the requester of the report, we did not review specific cases to try to identify instances of compliance or noncompliance with the coordination procedures. 3. DOJ also suggested in its comments that we address whether and how the coordination procedures ought to be changed. 
Given that since 1995 this issue has been studied by three high-level DOJ working groups and the Attorney General’s Review Team, and because of the concerns expressed by some DOJ officials in our report, we believe that DOJ is in the best position to address any changes to its procedures. 4. The Department suggested that we emphasize to a greater extent throughout our report the sensitivity and complexity of the issues. In addition, it provided additional language for the report to reflect the issues’ sensitivity and complexity. We agree that the issues discussed are sensitive and complex; however, we believe the report adequately conveys these points and, thus, we did not revise our report to address the Department’s suggestion. 5. DOJ suggested a factual correction to recognize that two decision memorandums were submitted to the Attorney General: one in October 2000 and a second in December 2000. On pages 22 and 23 of our report, we discuss the submission of both memorandums. Concerning DOJ’s suggestion that we note the options that these memorandums presented, we did not adopt this suggestion because DOJ had opted not to provide us with the details of its options when we met to discuss the memorandums.
This report reviews the coordination efforts involved in foreign counterintelligence investigations where the Foreign Intelligence Surveillance Act has been or may be employed. The act established (1) requirements and a process for seeking electronic surveillance and physical search authority in national security investigations to obtain foreign intelligence and counterintelligence information within the United States and (2) the Foreign Intelligence Surveillance Court, which has jurisdiction to hear applications for and grant orders approving Foreign Intelligence Surveillance Act surveillance and searches. GAO found that coordination between the Federal Bureau of Investigation (FBI) and the Department of Justice’s (DOJ) Criminal Division has been limited in those foreign counterintelligence cases in which criminal activity is indicated and surveillance and searches have been, or may be, employed. A key factor inhibiting this coordination is concern over how the Foreign Intelligence Surveillance Court or another federal court might rule on the primary purpose of the surveillance or search in light of such coordination. In addition, the FBI and the Criminal Division differ in their interpretations of DOJ’s 1995 procedures concerning counterintelligence investigations. In January 2000, the Attorney General issued additional procedures to address these coordination concerns. These procedures, among other things, required the FBI to submit case summaries to the Criminal Division and established a protocol for briefing Criminal Division officials about those investigations. In addition, two mechanisms were established to ensure compliance with the Attorney General’s 1995 procedures. 
These mechanisms include (1) requiring the Office of Intelligence Policy and Review to notify the FBI and the Criminal Division of investigations it believes meet the requirements of the 1995 procedures and (2) establishing a core group of high-level officials to oversee coordination issues. However, these efforts have not been institutionalized in management directives or written administrative policies or procedures.
Our review of surveys and academic studies, and our interviews with Social Security experts, suggest that most individuals do not understand key details of Social Security rules that could potentially affect their retirement benefits or the benefits of their spouses and survivors. Specifically, many people approaching retirement age are unclear on how claiming age affects the amount of monthly benefits, how earnings (both before and after claiming) affect benefits, the availability of spousal benefits, and other factors that may influence their claiming decision. For example, while some people understand that delaying claiming leads to higher monthly benefits, many are unclear about the actual amount by which benefits increase with claiming age. The surveys also showed that many people do not understand the implications of the retirement earnings test, under which SSA withholds benefits from some claimants earning above an annual income limit but which generally results in higher monthly benefits later. Understanding these rules and other information, such as life expectancy and longevity risk, could be central to people making informed decisions about when to claim benefits. With an understanding of Social Security benefits, people would also be in a better position to balance other factors that influence when they claim benefits, including financial need, poor health, and psychological factors. SSA makes comprehensive information on key rules and other considerations related to claiming retirement benefits available through its website, publications, personalized benefits statements, and online calculators. The information provided includes many of the items identified in our literature review and expert interviews, including how claiming age affects monthly benefit amounts, how benefits are determined, details on spousal and survivor benefits, the retirement earnings test, information about life expectancy and longevity risk, and the taxation of benefits. 
In particular, SSA’s website provides access to information on available benefits, key program rules, and interactive calculators that can be used to obtain estimates of future benefits. Additionally, the Social Security statement is the most widespread piece of communication that SSA provides to individuals about their future benefits. It is a 4- to 6-page summary of personalized information that includes an estimate of the individual’s future benefit payable at age 62, full retirement age (FRA), and age 70, as well as estimates of the individual’s current disability and survivor benefit amounts. In May 2012, SSA made the statement available electronically for those establishing an online account. Since September 2014, SSA has mailed printed statements to workers at ages 25, 30, 35, 40, 45, 50, 55, and 60 or older who have not created a personal online Social Security account. Beginning at age 60, SSA sends the statement annually. While important information is provided through SSA’s website and publications to help people make informed decisions about when to claim retirement benefits, our observation of 30 claims interviews in SSA field offices and of a demonstration of the online claims process found that some key information may not be consistently provided to potential claimants when they file. SSA’s Program Operations Manual System (POMS) states that claims specialists are to provide information, and avoid giving advice, to claimants. The POMS also specifies that when taking an application for Social Security benefits, the claims specialist is responsible for explaining the advantages and disadvantages of filing an application so that the individual can make an informed filing decision. The SSA protocol has claims specialists follow a screen-by-screen process of questions and prompts to collect basic information from claimants, but it does not prompt questions or discussion of some key information. 
The following summarizes key information that was not consistently covered during the in-person claims process. We discuss additional areas of key information in our full report. How claiming age affects monthly benefits: POMS states that claimants filing for benefits should be advised that higher benefits may be payable if filing is delayed. It also states that claimants should, if applicable, be provided with at least three monthly benefit amounts at three different claiming ages—the earliest possible month for claiming, FRA, and age 70. In 18 of the 26 in-person interviews we observed in which delaying claiming was a potential choice, the claims specialist mentioned that the claimant’s benefit amount would be higher if he or she delayed claiming; in the remaining 8 interviews, the specialist did not discuss this option. In 13 of the 18 interviews that mentioned delayed claiming, the claims specialist presented at least the three benefit amounts specified in POMS, while in 5 the specialist did not. Surveys have shown that most individuals do not know how much monthly benefits can increase by waiting to claim, so offering benefit estimates at different ages is likely to provide information many claimants do not have. This information can influence the age at which they claim, and expert opinion and past GAO reports have found that delayed claiming can be an important strategy for most retirees to consider. In contrast, the online claims process includes screens that provide information on how claiming at different ages raises or lowers monthly benefits. How taking retroactive benefits affects monthly benefits: SSA allows up to 6 months of retroactive benefits when a claimant is at least FRA or has a “protective filing date”—a documented date within the 6 months prior to a claims appointment when a claimant first contacted SSA about filing a retirement claim. In 10 of the 30 observed interviews, claims specialists offered the opportunity to claim up to 6 months of retroactive benefits as a lump sum. 
While a lump sum of retroactive benefits can be attractive, taking it essentially means applying for benefits up to 6 months earlier, which results in a permanent reduction in the monthly benefit amount. POMS provides eligibility criteria for retroactive benefits. However, it does not instruct claims specialists to inform claimants that taking lump-sum retroactive benefits will result in permanently lower monthly benefits, compared to not taking retroactive benefits, a tradeoff claimants may not be aware of. The claims specialist explained this tradeoff in only 1 of the interviews we observed. In another interview, a claimant who initially said he wanted benefits to start later in the year changed his mind and chose to start benefits 6 months earlier after being offered a lump sum. In the online claims process, if a claimant has the option of starting benefits retroactively and chooses not to, the claimant is asked to provide a reason. This step runs the risk of making claimants believe they are making an unusual decision, or a mistake, by choosing a later claiming date. How lifetime earnings affect monthly benefits: We observed only 8 interviews in which a claims specialist mentioned that benefits are based on 35 years of earnings and that working longer could potentially raise benefits by boosting average lifetime earnings. While POMS does not require claims specialists to explain how earnings affect benefit amounts, the claims process could be modified to include prompts for claims specialists to inform claimants that benefits are based on 35 years of earnings—information that SSA already makes available on its website. By discussing how years of earnings are used to determine benefit amounts, claims specialists might better inform claimants who are deciding when to claim, especially those who have fewer than 35 years of earnings. Similarly, the online application process does not inform claimants that benefits are based on the highest 35 years of earnings. 
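The 35-year averaging described above can be illustrated with a simplified sketch. This is not SSA’s actual benefit formula (the real computation indexes each year’s earnings and applies bend points to the resulting average); the career lengths and salary figure below are hypothetical, chosen only to show how years without earnings pull the average down.

```python
def average_monthly_earnings(annual_earnings):
    """Simplified sketch: average the highest 35 years of annual
    earnings; careers shorter than 35 years count missing years as zero."""
    top35 = sorted(annual_earnings, reverse=True)[:35]
    top35 += [0] * (35 - len(top35))  # pad short careers with zero-earning years
    return sum(top35) / (35 * 12)     # 35 years = 420 months

# Hypothetical 30-year career at $42,000/year: 5 zero years lower the average
print(round(average_monthly_earnings([42000] * 30)))  # 3000
```

A 35-year career at the same salary would average $3,500 per month, which is the sense in which additional work can raise benefits for those with fewer than 35 years of earnings.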
How the retirement earnings test affects income before and after FRA: Individuals who claim benefits before their FRA but continue to work for pay face a retirement earnings test, with earnings above a certain limit resulting in a temporary reduction of monthly benefits. In the 18 interviews we observed in which a potential claimant was younger than FRA, most of the claims specialists explained, accurately, that the claimant would have benefits withheld if he or she earned more than the retirement earnings limit. However, in fewer than half of the applicable interviews (7 of 17) did the claims specialist explain that any benefits withheld due to earnings would be recalculated and result in higher benefit amounts after FRA. Some claims specialists mentioned only that earnings may result in lower benefits, or that the claimant cannot earn above the limit, perhaps inaccurately suggesting that the earnings test would result in a permanent loss of benefits. In one interview, a claims specialist told the claimant she would be “penalized” if she earned over the limit. POMS states that, when applicable, the claims specialist should explain to claimants that benefits could be withheld based on the annual earnings test, but it does not instruct claims specialists to explain that the earnings test is not a penalty or tax, or that withheld benefits are effectively repaid. However, if claimants do not understand the full implications of the earnings test, they could erroneously think it will result in a permanent loss of benefits and, as a result, unnecessarily stop working or reduce their working income. This was made clear in one interview in which a claimant with earnings likely to be above the limit said she might have to quit one of her two jobs unless she waited until FRA to claim. In the online application process, screens provide information explaining that any benefits withheld because of the retirement earnings test will raise monthly benefits after FRA. 
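The withholding arithmetic behind the earnings test can be sketched briefly. The sketch assumes the general SSA rule of $1 withheld for every $2 earned above the annual limit in years before the year FRA is reached; the limit and benefit amounts shown are hypothetical, not taken from this statement.

```python
def benefits_withheld(annual_earnings, annual_limit, monthly_benefit):
    """Benefits withheld for the year under the earnings test:
    $1 for every $2 earned above the limit, capped at the year's
    total benefits (assumed rule for years before the FRA year)."""
    excess = max(0.0, annual_earnings - annual_limit)
    return min(excess / 2, monthly_benefit * 12)

# Hypothetical: $25,000 earned against a $17,040 limit, $1,200/month benefit
print(benefits_withheld(25_000, 17_040, 1_200))  # 3980.0
```

The withholding is temporary in the sense the report describes: amounts withheld this way are recredited through higher monthly benefits after FRA, not permanently lost.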
How life expectancy and longevity risk could factor into the claiming decision: While claims specialists are not specifically required to discuss life expectancy and longevity risk, the POMS does state that information should be provided to help claimants make informed filing decisions. SSA also emphasizes the importance of considering longevity and life expectancy in information made available on its website. According to the American Academy of Actuaries and the Society of Actuaries, understanding how longevity, and in particular longevity risk, can affect retirement planning is an important aspect of preparing for a well-funded retirement. However, the subject of how family health and longevity might influence the timing of benefit claims arose only twice in our 30 observations, and both times because the claimant raised the subject. Similarly, the online application process does not inform claimants that life expectancy and longevity risk are important considerations in deciding when to claim. Potentially misleading use of breakeven ages: In contrast to providing potential claimants with key information to help inform their claiming decisions, the POMS instructs claims specialists not to provide a “breakeven age”—the age at which the cumulative higher monthly benefits starting later would equal the cumulative lower benefits from an earlier claiming date. Research shows that breakeven analysis can influence people to claim benefits earlier than they might otherwise. During our in-person observations, we saw 6 instances in which a claims specialist presented a breakeven age to help a claimant compare claiming benefits now with waiting to claim. In some interviews, claims specialists not only offered a breakeven age, they added their conclusion that the analysis showed that claiming earlier was preferable. 
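The breakeven-age definition above can be made concrete with a small sketch; the claiming ages and monthly amounts are hypothetical. Note that, as the report discusses, this simple cumulative comparison ignores longevity risk, which is one reason POMS instructs specialists not to present it.

```python
def breakeven_age(early_age, early_monthly, late_age, late_monthly):
    """Age at which cumulative benefits from claiming later catch up
    with cumulative benefits from claiming earlier (no discounting)."""
    # Dollars collected by the early claimer before the late claimer starts
    head_start = (late_age - early_age) * 12 * early_monthly
    # Ground gained by the late claimer each month after late_age
    monthly_gain = late_monthly - early_monthly
    return late_age + head_start / monthly_gain / 12

# Hypothetical: $1,500/month claimed at 62 vs. $2,000/month claimed at 66
print(breakeven_age(62, 1500, 66, 2000))  # 78.0
```

In this sketch the later claimer pulls even 12 years after claiming; whether waiting pays off then depends on how long the claimant lives past that age, which the breakeven figure alone does not convey.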
One claims specialist showed the claimant that it would take 11-1/2 years to make up the difference for waiting to claim, and added that “according to the actuaries, that is a reasonable choice.” Another claims specialist said the breakeven analysis showed that “it pays to file early.” Many Americans will rely heavily on Social Security for a substantial portion of their retirement income, so it is imperative that they have the information necessary to make informed claiming decisions. Though we found that SSA’s claims process largely provides accurate information and avoids overt financial advice, certain key information is not provided or explained clearly during the process. POMS specifies that claims specialists should explain the advantages and disadvantages of filing for Social Security benefits to help people make informed filing decisions. However, because SSA is not fully operationalizing this guidance in the claims interviews, some claimants do not receive all the information that is critical to making informed claiming decisions. The claims process, whether in person with a claims specialist or online, allows SSA to add questions or prompts—potentially using language SSA already provides on its website and in publications. Adding this information would help ensure that each individual receives the information needed to make an informed decision. In our issued report, we make several recommendations to ensure that potential claimants are consistently provided with key information during the claiming process to help them make informed decisions about when to claim Social Security retirement benefits. 
Specifically, we recommend that SSA take steps to ensure that: 1. when applicable, claims specialists inform claimants that delaying claiming will result in permanently higher monthly benefit amounts, and at least offer to provide claimants their estimated benefits at their current age, at FRA (unless the claimant is already older than FRA), and at age 70; 2. claims specialists understand that they should avoid the use of breakeven analysis to compare benefits at different claiming ages; 3. when applicable, claims specialists inform claimants that monthly benefit amounts are determined by the highest (indexed) 35 years of earnings and that, in some cases, additional work could increase benefits; 4. when appropriate, claims specialists clearly explain the retirement earnings test and inform claimants that any benefits withheld because of earnings above the earnings limit will result in higher monthly benefits starting at FRA; 5. claims specialists explain that lump-sum retroactive benefits will result in a permanent reduction of monthly benefits, and, for the online claiming process, SSA evaluates removing or revising the online question that asks claimants to provide a reason for not choosing retroactive benefits; and 6. the claims process includes basic information on how life expectancy and longevity risk may affect the decision to claim benefits. SSA generally agreed with our recommendations. Chairman Collins, Ranking Member McCaskill, and Members of the Committee, this concludes my prepared remarks. I would be happy to answer any questions that you may have at this time. For further information regarding this testimony, please contact Charles Jeszeck at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. 
Individuals who made key contributions to this testimony include Mark Glickman (Assistant Director), Laurel Beedon, Susan Chin, Susan Aschoff, Alexander Galuten, Frank Todisco (Chief Actuary), and Walter Vance. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony summarizes the information contained in GAO's September 2016 report, entitled Social Security: Improvements to Claims Process Could Help People Make Better Informed Decisions about Retirement Benefits (GAO-16-786). GAO's review of nine surveys and academic studies, and interviews with retirement experts, suggest that many individuals do not fully understand key details of Social Security rules that can potentially affect their retirement benefits. For example, while some people understand that delaying claiming leads to higher monthly benefits, many are unclear about the actual amount by which benefits increase with claiming age. The studies and surveys also found widespread misunderstanding about whether spousal benefits are available, how monthly benefits are determined, and how the retirement earnings test works. Understanding these rules and other information, such as life expectancy and longevity risk, could be central to people making well-informed decisions about when to claim benefits. With this understanding of retirement benefits, people would also be in a better position to balance other factors that influence when they should claim benefits, including financial need, poor health, and psychological factors. The Social Security Administration (SSA) makes comprehensive information on key rules and other considerations related to claiming retirement benefits available through its publications, website, personalized benefits statements, and online calculators. However, GAO observed 30 in-person claims at SSA field offices and found that claimants were not consistently provided key information that people may need to make well-informed decisions. For example, in 8 of 26 claims interviews in which the claimant could have received higher monthly benefits by waiting until a later age, the claims specialist did not discuss the advantages and disadvantages of delaying claiming. 
Further, only 7 of the 18 claimants for whom the retirement earnings test could potentially apply were given complete information about how the test worked. SSA's Program Operations Manual System (POMS) states that claims specialists should explain the advantages and disadvantages of filing an application so that the individual can make an informed filing decision. The problems we observed during the claims interviews occurred in part because the questions included in the claims process did not specifically cover some key information. Online applicants have more access to key information on the screen or through tabs and pop-up boxes as they complete an application. However, similar to in-person interviews, the online application process does not inform claimants that benefits are based on the highest 35 years of earnings or that life expectancy is an important consideration in deciding when to claim.
The credit rating industry is highly concentrated. As of April 2015, there were 10 registered NRSROs. In fiscal year 2013, the 3 largest NRSROs accounted for 95 percent of the year’s total reported NRSRO revenue and as of December 2014, they issued 96 percent of all outstanding ratings. These 3 firms also employed 88 percent of the approximately 4,500 credit rating analysts who work at NRSROs (see table 1). Other firms that do not describe themselves as rating agencies and firms that are not registered with SEC also conduct credit rating-type analyses. According to SEC, the agency does not collect data on this group of firms. A credit rating is an assessment of the creditworthiness of an obligor as an entity or with respect to specific securities or money market instruments. These assessments reflect a variety of quantitative and qualitative factors that vary based on sector and NRSRO. To determine an appropriate rating, analysts use publicly available information and market and economic data, and may hold discussions and obtain nonpublic information from the issuer. Commonly, analysts then present their proposed rating to a ratings committee in their agency, and the committee discusses and evaluates the data before voting on a decision. The decision of the ratings committee then is communicated to the issuer seeking the rating and a final rating is issued. Issuers seek credit ratings to improve the marketability or pricing of their securities or to satisfy investors, lenders, or counterparties. Institutional investors may use credit ratings as one of several inputs to internal credit assessments and investment analyses, or to facilitate the trading of securities. Broker-dealers also use ratings to recommend and sell securities to clients or determine acceptable counterparties and collateral levels for outstanding credit exposures. 
An NRSRO can be registered in one or more of five classes of credit ratings: (1) financial institutions, brokers, or dealers; (2) insurance companies; (3) corporate issuers; (4) issuers of asset-backed securities; and (5) issuers of government securities, municipal securities, or securities issued by a foreign government. Five NRSROs are registered with SEC to rate in all five classes, while the others are registered in fewer. Since the early 2000s, Congress has taken measures to reform the credit rating industry. In 2006, Congress passed the Credit Rating Agency Reform Act to improve ratings quality by fostering accountability, transparency, and competition in the credit rating industry. The Credit Rating Agency Reform Act added section 15E to the Exchange Act, which established SEC oversight over NRSROs. Section 15E also requires SEC to conduct examinations at least annually of each NRSRO to monitor compliance with statutory and regulatory requirements. Each examination must include a review of areas including, among other things, whether the NRSRO conducts business in accordance with its policies, procedures, and methodologies, as well as the management of conflicts of interest by and internal supervisory controls of the NRSRO. The performance of the three largest NRSROs in rating subprime residential mortgage-backed securities (RMBS) and related securities renewed questions about the accuracy of their credit ratings generally, the integrity of the ratings process, and investor reliance on NRSRO ratings for investment decisions. Since the 2007–2009 financial crisis, the United States has put in place additional regulation of credit rating agencies. The Dodd-Frank Act required that SEC establish the Office of Credit Ratings (OCR) to administer SEC rules with respect to the practices NRSROs use in determining credit ratings.
SEC established the office in June 2012; OCR has since assumed responsibility for oversight activities, such as the annual examinations of NRSROs to assess their compliance with SEC rules. The Dodd-Frank Act also required SEC to issue regulations imposing new requirements on NRSROs related to qualification standards for credit rating analysts. SEC issued new rules, which became effective in June 2015, containing the following requirements: Each NRSRO must establish, maintain, enforce, and document standards of training, experience, and competence for its credit analysts. In the required standards, each NRSRO must include a requirement for periodic testing of its credit analysts on their knowledge of the NRSRO’s procedures and methodologies used to determine credit ratings. The standards also must include a requirement that at least one individual with an “appropriate level of experience in performing credit analysis, but not less than 3 years” participates in the determination of a credit rating. These standards apply to all individuals who participate in the determination of credit ratings, including individuals at NRSROs and credit rating affiliates located in other countries. Before SEC issued the new rules, there were no regulatory requirements for training and testing of credit rating analysts in the United States. Regulatory requirements also exist internationally for credit rating agencies. For example, in 2009, the European Union (EU) established rules for credit rating agencies in the EU. These rules include basic requirements that all staff involved in the ratings process have appropriate knowledge and experience to complete their assigned duties, but do not specify training and competence standards for analysts or require testing. 
Similar to OCR in the United States, the European Commission established the European Securities and Markets Authority—the EU financial regulatory institution and European Supervisory Authority—in January 2011 to provide exclusive registration and supervision of credit rating agencies in the EU. In 2008, the leaders of the Group of 20 (G20) pledged to strengthen oversight of credit rating agencies. Specifically, as part of the G20 declaration at its November 2008 summit, regulators were charged with taking steps to ensure that credit rating agencies met the standards of IOSCO—an international standard-setting body for the securities sector—avoided conflicts of interest, and had the right incentives and appropriate oversight to enable them to provide unbiased information and assessments. Subsequently, several countries' regulators took additional steps to increase oversight of these agencies. For example, regulatory authorities in Hong Kong, Singapore, and Saudi Arabia established licensing regimes for credit rating analysts. Many credit rating agencies around the world voluntarily comply with IOSCO's Code of Conduct Fundamentals for Credit Rating Agencies (Code). The Code includes standards about implementing and enforcing procedures for methodologies analysts must use, ethical practices they should follow, and conflicts of interest they should avoid. Additionally, a March 2015 revision includes a new provision that analysts undergo formal ongoing training at reasonably timed intervals, including education on the agency's code of conduct, its methodologies, and the policies and laws that govern the agency. Most NRSROs voluntarily comply with and adhere to a code of conduct that is consistent with some or all of the IOSCO Code. Generally, independent professional organizations are established to help protect the integrity of a specific profession and provide safeguards for stakeholders, such as investors, and the public. Professional organizations are membership-based.
The specific responsibilities of professional organizations may evolve over time, but the primary responsibilities of these organizations generally include the following: educating members, which often includes promoting knowledge sharing among members and engaging in research related to the profession; developing standards to govern the profession (including codes of conduct); overseeing member compliance with the standards; registering or providing certification or examination for members in the profession, or both; and engaging in public education, outreach, or (in some cases) advocacy about the profession. Key elements of these organizations vary depending on the type and purpose of the organization, including membership requirements (mandatory versus voluntary and institutional versus individual), funding, and the role of government. In addition, professional organizations take various measures to safeguard their independence, including consideration of the composition of their governing board and their funding source. Views varied on the merits of a professional organization for credit rating analysts of nationally recognized statistical rating organizations, but most concluded it was too early to tell if one was needed. Professional organizations, including a professional organization for credit rating analysts, can improve the reputation of the industry where the professionals work, promote collaboration and sharing to enhance quality, and supplement existing oversight. However, all analysts with whom we spoke, as well as representatives of NRSROs, a few experts and stakeholders (including academics, investors, advocacy groups, and international regulators), and SEC officials, told us that such an organization for credit rating analysts could duplicate existing professional standards, codes of ethical conduct, and oversight of credit rating analysts.
More importantly, in light of SEC's new standards to help ensure the training, experience, and competence of credit rating analysts, some representatives and a few analysts and experts and stakeholders said that it was too early to determine what role a professional organization would play, or whether a professional organization was needed at all. Professional organizations can help to improve an industry's reputation, enhance quality, and supplement existing regulatory oversight. Improved reputation. According to some professional organization representatives with whom we spoke, professional organizations can increase the public's trust in an industry by developing shared professional standards and oversight processes to help ensure the quality of the work. For example, shared standards can help define a minimum baseline for skills that professionals need, promote ethical behaviors in the profession, and provide a process for identifying and removing bad actors. A few analysts and one NRSRO representative explained that a professional organization for credit rating analysts could improve the reputation of NRSROs and credit rating analysts. Enhanced quality. Professional organizations facilitate sharing and collaboration among professionals to enhance the quality of the work they perform. Specifically, a few representatives of professional organizations, analysts, and experts and stakeholders told us that one advantage of an organization would be the forum it could provide for professionals to network and share and develop best practices or standards. In addition, most NRSRO representatives and all analysts with whom we spoke told us that analysts usually learn the skills necessary to issue ratings from internal training programs and on-the-job experience.
According to a few experts and stakeholders, an analyst, and one NRSRO representative, an organization could be beneficial for all credit rating analysts, regardless of employer, because they would receive similar information and training that could play a quality control function. Supplemented oversight. According to a few representatives of existing professional organizations and one NRSRO representative, a professional organization has the potential to enhance or supplement existing oversight. For example, a professional organization can fill in gaps in regulatory oversight by providing more detailed rules than regulatory agencies can provide. Representatives of one professional organization noted that such organizations also could provide more robust oversight programs than regulatory agencies and can sometimes carry out responsibilities faster. However, analysts, NRSRO representatives, and experts and stakeholders told us that these benefits would be unlikely to result from an organization for credit rating analysts. Specifically, all analysts and NRSRO representatives and a few experts said that creating a professional organization for credit rating analysts employed by NRSROs could duplicate existing structures and organizations rather than enhance or supplement those structures. Some NRSRO representatives and analysts explained that certain SEC rules apply to credit rating analysts, including rules prohibiting conflicts of interest, and SEC may oversee certain activities of credit rating analysts through its oversight of NRSROs—two of the key activities associated with a professional organization. A few NRSRO representatives added that SEC examines NRSROs annually to ensure adherence to SEC rules.
For example, during the 2014 examinations, SEC staff reviewed each NRSRO's ethics policy and procedures, as well as a sample of each NRSRO's employee certifications or monitoring activities concerning their code of ethics. In addition, some NRSRO representatives and a few analysts told us the IOSCO Code includes professional standards applicable to credit rating analysts, and 8 of the 10 NRSROs cite the IOSCO Code in their codes of ethical conduct. Finally, according to some analysts with whom we spoke, many analysts belong to professional organizations, such as AICPA, CFA Institute, or industry groups, that provide the services of a professional organization (education, development and oversight of professional standards, certification, and public outreach and advocacy). In sum, all analysts and representatives of NRSROs and a few experts and stakeholders told us that a professional organization for credit rating analysts might duplicate existing standards and oversight. In addition, other NRSRO representatives and some experts and stakeholders told us that it was unclear whether, or how, the organization would address some of the well-known and widely reported issues that have harmed the reputation of NRSROs. For example, representatives from one expert organization explained that ratings downgrades in 2008 and 2009 (during the financial crisis) hurt the reputation of rating agencies and that investigations and enforcement actions, such as Standard & Poor's $1.375 billion settlement associated with misrepresenting the true credit risks of securities it rated during 2004–2007, had not helped to restore investors' confidence. Some NRSRO representatives and analysts, and a few experts and stakeholders, said that concerns about the accuracy and quality of credit ratings were not due to incompetent or poorly trained analysts.
They noted that, in their opinion, other issues in the industry, such as concerns about industry concentration, are more relevant. They stated that a professional organization established to develop and oversee standards for analysts may be unable to deal with these issues at this time. (See Securities and Exchange Commission, Summary Report of Issues Identified in the Commission Staff's Examinations of Select Credit Rating Agencies (Washington, D.C.: July 2008); and Senate Permanent Subcommittee on Investigations, Wall Street and the Financial Crisis: Anatomy of a Financial Collapse, 112th Cong., 1st sess. (Washington, D.C., Apr. 13, 2011). In February 2015, Standard & Poor's and its parent company McGraw Hill Financial, Inc. entered into a $1.375 billion settlement with the Department of Justice, 19 states, and the District of Columbia for charges related to misrepresenting the true credit risk of securities they rated during 2004–2007. They also settled for $125 million with the California Public Employees Retirement System at the same time for similar charges. In addition, in January 2015 they settled for $58 million with SEC and $19 million with the New York Attorney General's office and the Massachusetts Attorney General's office for charges involving fraudulent misconduct in ratings, particularly of certain commercial mortgage-backed securities.) As a result, a few experts questioned how well a professional organization would improve the industry's reputation or enhance the quality of ratings. Finally, some NRSRO representatives, a few experts and analysts, and SEC officials said that it was too early to determine whether a professional organization might add to or complement the quality or oversight of analysts' work, particularly in light of SEC's new requirements designed to enhance the standards of training, experience, and competence for credit rating analysts at NRSROs.
As we stated earlier, the new rules, which became effective in June 2015, require NRSROs to ensure that analysts meet additional quality standards to produce accurate ratings and are periodically tested on their knowledge of the NRSRO's credit rating process. Additionally, SEC's new rules address certain policies and procedures with respect to the methodologies used to determine credit ratings, conflicts of interest with respect to sales and marketing considerations, and certain actions when a review conducted by an NRSRO determines that a conflict of interest relating to post-NRSRO employment influenced a credit rating. They explained that SEC's new rules address some of the services that one can expect from a professional organization (such as training and certification), and therefore it seemed appropriate to first determine if there were specific gaps that a professional organization might be able to fill at a later date. Some NRSRO representatives with whom we spoke said that they have made changes to their training and testing requirements in response to these new requirements. Some NRSRO representatives and one expert commented that, because of these recent changes, SEC should first evaluate whether the actions taken by NRSROs meet the new requirements. In addition, SEC previously noted that regulations affecting financial markets should be reviewed and revised as necessary to ensure the regulations continue to fulfill SEC's mission. Retrospective analyses can help agencies evaluate how existing regulations work in practice. Moreover, retrospective analyses can provide SEC the opportunity to assess how well the new regulations have achieved their policy goals. According to most analysts and representatives of NRSROs, and some experts and stakeholders, creating and operating a professional organization for credit rating analysts might be challenging.
For example, some noted that it might be difficult to clearly define the organization’s purpose at this time because of new SEC requirements, which became effective in June 2015, for NRSROs to put standards and testing requirements into place for analysts. Similarly, others said that a relatively small membership base could make obtaining adequate funding difficult and the organization could struggle to define meaningful standards because of the potentially limited applicability of common standards across NRSROs. In light of these challenges, some analysts and experts and a few representatives of NRSROs identified other ways to develop and oversee professional standards for credit rating analysts, such as enhancing SEC oversight, engaging a third party to credential all analysts on minimum standards, or creating an organization with a broader membership base. Creating and operating any professional organization requires identifying the organization’s purpose, funding, organizational structure, and core activities. Most analysts and representatives of NRSROs and some experts and stakeholders (including academics, investors, advocacy groups, and international regulators) told us that putting these components in place for an organization for NRSRO credit rating analysts could be challenging. Creating and operating any professional organization requires a clear mission, purpose, and identification of the value the organization would bring its members, their industry, and the public. According to some NRSRO representatives, experts, and a few analysts, defining a clear purpose and mission for an organization for credit rating analysts would be difficult at this time because of the new SEC regulations. According to some representatives of existing professional organizations, clearly defining the purpose of an organization helps ensure that it will be viewed as a credible actor—able to effectively influence the behavior of members and industry. 
They noted that most professional organizations arise out of a specific public policy need, problem, or other precipitating event that a core group of professionals wants to address. Understanding the specific motivation for creating the organization can help orient the organization and define its value to its members. However, some NRSRO representatives and experts told us that because SEC's rules for NRSROs to implement standards for their credit rating analysts were newly effective, it would be difficult at this time to effectively identify potential gaps in the training, credentialing, or oversight of credit rating analysts that a new organization would be designed to address. As we noted earlier, effective June 2015, each NRSRO must establish, maintain, enforce, and document standards of training, experience, and competence for its credit rating analysts. According to most NRSRO representatives, they are in the process of making changes to their training and testing activities in response to these rules, including introducing new mandatory training programs, developing formal periodic testing regimes, and expanding support for analysts pursuing credentials from existing professional organizations. As a result, some of them questioned the role and potential value-added of creating a professional organization at this time. Professional organizations require funding to cover operational expenses and provide member services, such as the development and oversight of professional standards. Some representatives of NRSROs and existing professional organizations, experts, and analysts with whom we spoke stated that the relatively small population of credit rating analysts—as of December 2014, NRSROs employed approximately 4,500 analysts—suggests it might be challenging to obtain adequate funding to create and operate the organization, at least initially. Generally, member dues partially fund organizations.
As shown in figure 2, of the six existing professional organizations we reviewed, in 2014 membership dues provided from 3 to 60 percent of revenue, with annual dues ranging from $125 to $425 for members. Organizations also receive revenue from other sources, including fees for services, examinations, conferences, education programs, and publications; fines or other disciplinary actions; and assessments on certain transactions, such as the assessment collected by MSRB on the trading and underwriting activities of municipal securities dealers. With regard to revenue from examinations and education programs, however, a few representatives of existing professional organizations noted that it might take several years for a new organization, such as one created for credit rating analysts, to receive sufficient revenue from these activities. One source of funding for an organization representing credit rating analysts could be from NRSROs themselves, but the views of NRSRO representatives and experts were mixed on the potential advantages and disadvantages of such an approach. A few representatives of NRSROs and some experts stated that an organization funded by NRSROs would be vulnerable to having its core activities, including the development and oversight of professional standards, influenced by NRSROs. They noted that this influence risked undercutting the organization's independence from NRSROs and weakening its ability to act as a safeguard for investor and public interests—one of the noted responsibilities of a professional organization. In contrast, a few representatives of NRSROs and experts stated that NRSRO involvement in the organization would be an advantage because it would help ensure that the organization would have NRSRO support and acceptance to facilitate incorporation of the organization's standards into analysts' work.
One expert noted that the NRSRO, as the employer, could be best placed to confirm that its credit rating analyst employees adhered to the organization's standards. As an alternative to receiving direct funding from NRSROs, the organization could be partly funded through transaction fees paid to NRSROs. For example, FINRA and MSRB both receive funds directly from member firms for certain transactions. Of the six organizations we reviewed, MSRB is closest in size—it has approximately 3,300 members—to what would be the likely size of an organization representing credit rating analysts. In 2014, membership fees accounted for only 4 percent of its total revenue, whereas underwriting and trading fees accounted for more than 80 percent of revenue. The MSRB's reliance on additional funding sources, including transaction fees, may serve as a model for funding a like-size organization. To identify the issues to consider in developing the organizational structure for a professional organization, we identified three models that could provide the basis for the creation and operation of a professional organization for credit rating analysts. (For additional discussion of our methodology, see appendix I, and for additional discussion of the organizations we considered in developing our models, see appendix II.) Differences in the number of credit rating analysts employed by each NRSRO could create challenges for developing an organizational structure that helps ensure equitable representation of all members, in particular those members employed by smaller NRSROs. For instance, consideration should be given to whether membership is mandatory or voluntary and whether the organization's authority comes from the government or the public and industry (for additional discussion of various organizational structures, see app. II).
In addition, successful professional organizations require personnel—whether paid staff, contractors, or volunteers—with experience leading and managing professional organizations and the technical knowledge and leadership to help ensure the organization's relevance. As we noted earlier, the three largest NRSROs employ 88 percent of credit rating analysts. Some NRSRO representatives, analysts, and experts commented that this might require an organization to take additional steps to ensure it considers the experience and needs of members from both larger and smaller NRSROs in determining its organizational structure and personnel decisions. According to most representatives of existing professional organizations, a number of steps can be taken to safeguard an organization's independence from dominance by certain elements in an industry. For example, an organization can help balance members' interests through the composition of its board. Several existing professional organizations have nonindustry representation on the board, and MSRB officials noted that the Dodd-Frank Act modified MSRB's board structure to require that the majority of board members be independent public representatives. A representative of one professional organization explained that seeking stakeholder input or allowing for public comment before announcing new policies or rules also could help ensure that a variety of views and experiences were taken into account. Another representative of a professional organization noted that the organization requires professionals who volunteer to participate in the organization's activities—such as the development or oversight of standards or the development of educational and testing programs—to attest that they will engage with the professional organization independently of their engagement with their employer.
Creating and operating any professional organization requires developing core activities and services, such as professional standards, education and training curricula, certification tests, and structures to oversee member compliance. According to most analysts and NRSRO representatives, the proprietary methodologies used to conduct ratings and the specialization of credit rating analysts’ knowledge might make developing a meaningful credential or set of professional standards challenging. According to representatives from existing professional organizations, developing these services is a time- and labor-intensive process that requires extensive stakeholder involvement and buy-in. However, most NRSRO representatives and some analysts with whom we spoke noted that opportunities to share information about credit rating activities to develop those standards could be limited due to concerns about safeguarding confidential information and proprietary ratings methodologies. In addition, they explained that conducting ratings is highly technical, varies greatly depending on the specific asset class and the types of ratings being provided in that class, and relies on the individual NRSRO’s methodology. For example, the Department of the Treasury (Treasury) recently led an exercise that found the ratings results diverged widely among six credit rating agencies that conducted hypothetical ratings of identical pools of mortgage loans. A few NRSRO representatives noted that they had individually identified core knowledge, including information on financial market operations that their analysts needed to know. This core knowledge could provide the basis for an examination or credentialing process or the development of professional standards, but the extent to which it would promote higher quality ratings was unclear. 
For example, most credit rating analysts with whom we spoke noted that this core knowledge was a valuable foundation for conducting ratings, but producing ratings also required specialized knowledge and experience. As a result, some NRSRO representatives and analysts stated that a relatively limited set of standardized knowledge or activities was applicable across all NRSROs or asset classes. According to representatives of existing professional organizations, one approach to bridging differences among actors in a profession is to develop principle-based standards, rather than rules. Principle-based standards provide a broad framework on the types of behavior and approaches to certain activities expected of members. For example, the AICPA Code of Professional Conduct identifies specific principles to guide certified public accountants' activities, including the responsibility to conduct work with integrity and in the public interest. Under a principle-based approach, individual professionals are expected to apply the principles to assess the unique factors and circumstances that arise during their work. However, these standards may stop short of providing specific details on acceptable actions or activities. Representatives of existing professional organizations noted that the types of standards used by an organization evolve over time and that an organization's standards can be a combination of principles and rules. In light of the challenges and other concerns with creating a professional organization for analysts, which we discussed earlier, some experts and analysts with whom we met and a few NRSRO representatives identified alternative approaches that could be used to establish and oversee standards and a code of conduct, as the following examples illustrate. Expand SEC oversight.
SEC could expand its oversight of analysts by defining minimum experience or knowledge requirements for analysts or requiring all analysts to pass an examination before conducting ratings. One expert and SEC officials noted that SEC might require additional resources to implement any additional activities or requirements. Engage a third party. A third party (such as an existing professional organization, a private business, or a task force of regulatory and other experts) could be engaged to develop and communicate professional standards for credit rating analysts and oversee application of the standards. Some experts noted this approach might be more efficient than creating a new organization because it could leverage existing resources and mitigate concerns about larger NRSROs dominating an organization. SEC considered public comments regarding this type of approach after proposing its new rule for the training and testing of credit rating analysts, but did not adopt this approach because it did not allow NRSROs sufficient flexibility to design standards tailored to their business model, size, and methodologies. Broaden the potential member base. According to some analysts, an organization open to all credit analysts, not just those who determine credit ratings, would have a larger potential membership base and could provide services, such as a credential, that were valued outside the credit rating profession. Representatives of one NRSRO noted that this might be developed within an existing organization, such as CFA Institute. However, as we noted earlier, a few analysts and some NRSRO representatives and experts did not see a need for a professional organization, and some analysts and NRSRO representatives and one expert did not identify any alternative structures to establish or oversee standards and a code of conduct for credit rating analysts. We provided a draft of this report to SEC for review and comment. 
In addition, we provided a draft of the organizational profiles and assessments to the existing professional organizations for review. SEC staff as well as representatives from each organization provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and members, SEC, and other interested parties. This report will also be available at no charge on our website at http://www.gao.gov. Should you or your staff have questions concerning this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Section 939E of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) required us to study the feasibility and merits of creating an independent professional organization for rating analysts employed by nationally recognized statistical rating organizations (NRSRO) that would be responsible for establishing independent standards for governing the profession of rating analysts, establishing a code of ethical conduct, and overseeing the profession of rating analysts. This report describes views on (1) the potential merits of and need for a professional organization for credit rating analysts; and (2) identified components of and challenges associated with creating and operating such an organization, and possible alternatives for establishing and overseeing professional standards and a code of ethical conduct. To address both objectives, we conducted a literature review. 
We used Internet search techniques and keyword search terms to identify available information about the credit rating industry, including the definition of a credit rating; number and nature of companies that serve as credit rating agencies, or NRSROs; and relevant laws and regulations, including the Securities and Exchange Commission’s (SEC) recent rules requiring NRSROs to establish standards of training, experience, and competence for rating analysts. We also identified sources describing the founding, development, and assessment of professional organizations (or associations) and professional designations. From research databases such as ProQuest and LexisNexis, we obtained information from publicly available documents, such as financial reports, journals, trade publications, periodicals, studies, white papers, and aggregated databases relevant to professional organizations and the credit rating industry. To address the first objective, we collected and analyzed information on existing organizations that develop and oversee professional standards and codes of conduct, including the American Institute of Certified Public Accountants (AICPA); American Society of Association Executives (ASAE); CFA Institute; Financial Industry Regulatory Authority (FINRA); Institute of Internal Auditors (IIA); Municipal Securities Rulemaking Board (MSRB); and the Public Company Accounting Oversight Board (PCAOB). We interviewed representatives of these organizations to obtain information on the merits of creating a professional organization, in general, as well as the potential merits of creating an independent professional organization for credit rating analysts. We also obtained available information on the advantages and disadvantages of a professional organization, why such organizations are generally formed (the perceived need, pertaining to the specific industry, profession, or the public), as well as the need for professional standards and codes of conduct for members. 
We obtained information from transcripts of SEC’s roundtable on credit ratings held on May 14, 2013, in which participants, including NRSROs, international regulators, academics, and other industry experts, discussed ethics training and possible certification requirements for credit rating analysts. We also obtained information from comments submitted in response to SEC’s 2011 proposed rules to enhance oversight of NRSROs. We obtained and analyzed information on existing NRSRO professional standards and codes of conduct, including information on each NRSRO’s Code of Conduct/Ethics submitted to SEC as part of its annual certification on “Form NRSRO” for calendar year 2014 (the most recent filings to date). We also analyzed the International Organization of Securities Commissions (IOSCO) Code of Conduct Fundamentals for Credit Rating Agencies as well as SEC’s new rule requiring NRSROs to establish standards for training, experience, and competence of credit rating analysts. Additionally, we reviewed the 2014 Summary Report of Commission Staff’s Examinations of Each Nationally Recognized Statistical Rating Organization and the 2014 Annual Report to Congress on Nationally Recognized Statistical Rating Organizations for reported information on NRSRO professional standards and codes of conduct. To address the second objective, we held semi-structured interviews with representatives of the professional organizations we identified, and reviewed their websites and publicly available documents. We obtained and analyzed information on the operational structure and approaches used to provide services to members, including the organizations’ membership requirements, funding channel, government role, and source of authority. We also obtained information on the resources and actions needed to create and operate a professional organization for credit rating analysts and possible organizations that could serve as models for such an organization. 
We used the information we obtained through interviews and the results of our literature review to develop a working definition of an independent professional organization and identify three potential models of a professional organization for credit rating analysts (see fig. 3, app. II). We defined an independent professional organization as a membership-based organization that is established to help protect the integrity of the profession, provide safeguards for investors and the general public, and take steps to safeguard independence through the composition of a governing board. To ensure the validity of our results, we provided our definition and models to the existing organizations for comment and adjusted our results based on their input. We also obtained information on other structures or instruments (regulatory or otherwise) that could be used to establish standards and a code of ethical conduct for governing and overseeing the profession. For example, we obtained and analyzed information on international regulatory organizations in four jurisdictions (Hong Kong, Saudi Arabia, Singapore, and Turkey) identified by SEC as having established various structures for oversight, such as direct registration and licensure for credit rating analysts and related professional standards and codes of conduct for the profession. 
To gather a diverse set of perspectives, we interviewed SEC officials; NRSRO representatives; industry experts and stakeholders—including investors, academics, representatives of credit rating and analysis firms not registered with SEC, representatives of the Software and Information Industry Association, representatives of a research institute, and officials with the European Securities and Markets Authority and IOSCO—and leadership or governing bodies of existing professional organizations. We obtained their views about (1) the advantages and disadvantages (for regulators, credit rating agencies, rating analysts, investors, and the public) of creating a professional organization for credit rating analysts, and (2) how a professional organization would fit within the existing structure of regulation and oversight of credit rating agencies and credit rating analysts. We also obtained their views on the nature and common characteristics of professional organizations, including organizational structure, relationship to members, and regulatory body (such as SEC); professional certifications for members; resources and actions needed to create and operate a professional organization for rating analysts; organizations that could serve as an appropriate model in evaluating the feasibility and merits of a professional organization for rating analysts; and how and to what extent professional organizations develop and oversee professional standards and a code of ethical conduct for members. We also obtained their views on the potential challenges involved in creating and operating an organization, such as obtaining funding, attracting and retaining membership, developing professional standards and a code of ethical conduct, and overseeing rating analysts. 
We obtained their views on the need for an independent professional organization for credit rating analysts; the circumstances under which analysts and other stakeholders would use the services of such an organization; and how, or whether, an independent professional organization for credit rating analysts would address some of the current challenges facing the credit rating industry. We used a modified Delphi approach to identify and confirm the most important issues for each objective. In doing so, we conducted an initial round of interviews with officials in SEC’s Office of Credit Ratings, representatives of select NRSROs and professional organizations, and select industry experts to discuss the information collected through our literature search. In our second round of interviews, we held semi-structured interviews with senior management from all NRSROs, and select industry experts and stakeholders to obtain a depth of understanding and a variety of perspectives, as well as to corroborate the information obtained in the research and initial interviews. The criteria for selecting these interviewees consisted of factors such as participation in prior SEC events, including roundtables; recommendations from GAO stakeholders, industry experts, and other external stakeholders; participation in prior congressional hearings or industry conferences; appearance in our literature reviews and Internet searches; and bibliographies of relevant papers and studies where they were mentioned. Finally, to ensure that we obtained the perspectives of credit rating analysts in particular, we conducted a series of 11 focus groups with approximately 100 credit rating analysts, at 6 of the 10 NRSROs. We judgmentally selected 6 of the 10 NRSROs based on the size of the firm (number of ratings performed and number of analyst staff) to ensure that a mix of large, medium, and small firms was represented. 
At five of these firms, we held two separate focus groups—one with analysts and one with supervisory analysts (analyst staff with supervisory responsibilities). We contacted NRSROs to request data to identify and select a judgmental sample of analysts. To promote widespread NRSRO participation, we generally provided three options to NRSROs for identifying analysts for selection to participate in our focus group discussions: (1) provide a complete list of all analysts; (2) provide names of 25 analysts and 25 supervisory analysts; or (3) identify and self-select approximately 8-12 analysts. Additionally, we requested that the NRSROs provide information on analysts’ years of service with the organization, credit rating specialization, and geographic location. We tailored our requests to account for the number of analyst staff. For example, for the smaller NRSROs (60 or fewer analyst staff), we requested that they provide a complete list of all analysts. After receiving the information from NRSROs, we selected from 8 to 15 analysts to participate in the focus groups. (We allowed at least two analysts to serve as alternates, if necessary.) We selected credit rating analysts to obtain a range of years of experience, job title, and class of ratings for which they participated in credit rating determinations. We grouped each list by class of ratings and ordered the groups by years of experience. We verified the total number of analysts and supervisory analyst names provided, divided the total number by 15 to come up with a selection number (the “nth” number), and selected every nth name to participate in the focus groups. In addition, to ensure that every asset class represented had a participant in the focus group, where necessary, we selected alternates with similar titles, years of service, and credit rating specialization. We also selected fewer than 8-15 analysts at the smaller NRSROs. 
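The every-nth-name selection described above is a form of systematic sampling. A minimal sketch follows; the function name and the hypothetical roster are illustrative only and are not part of the report’s actual methodology.

```python
# Illustrative sketch (not the report's code) of the every-nth-name selection:
# divide the roster size by the target sample size to get the interval n,
# then take every nth name from the ordered list.

def select_every_nth(names, sample_size=15):
    """Systematically sample about `sample_size` names from an ordered roster."""
    if len(names) <= sample_size:
        return list(names)            # smaller rosters: everyone is selected
    n = len(names) // sample_size     # the "nth" selection interval
    return names[n - 1::n][:sample_size]

# Hypothetical 45-name roster, already grouped by ratings class and
# ordered by years of experience, as the report describes.
roster = [f"analyst_{i:02d}" for i in range(1, 46)]
selected = select_every_nth(roster)   # interval of 3: every 3rd name
```

For a 45-name roster the interval is 3, so the 3rd, 6th, …, 45th names are selected; alternates with similar titles, years of service, and specializations could then be swapped in to cover every asset class, as the report notes.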
For example, one NRSRO had fewer than 10 supervisory analysts. In one case, we slightly adjusted our methodology to accommodate one NRSRO management’s request that they be able to select alternate analysts to participate in the focus groups, if alternates were needed. The management noted that they needed some flexibility in the event that any analysts we selected were unable to participate due to vacation, training, or other scheduling conflicts. In facilitating each focus group, we focused our discussions on the following topics: current training received and any certifications held by credit rating analysts; extent to which credit rating analysts currently have access to the services typically offered by a professional organization, such as education, professional standards and a code of ethical conduct for governing the profession, professional certification or license, oversight, and public education, outreach, or advocacy; extent to which there is a need for a professional organization for credit rating analysts and any potential challenges in operating an organization for analysts; potential advantages and disadvantages of creating a professional organization for credit rating analysts based on the three models we developed; and feasibility of developing and overseeing professional standards for credit rating analysts. Based on records of these discussions, we analyzed the content to define overall themes and develop our findings. We also used the results from the focus groups to corroborate information we obtained in our interviews on the merits and need for an independent professional organization for analysts; potential requirements for creating and operating a professional organization; associated challenges; and potential alternatives to creating a professional organization. These results led to our synopsis of the feasibility and merits of a professional organization for credit rating analysts. 
Throughout this report, we use certain qualifiers when describing results from focus groups and interview participants, such as “few,” “some,” and “most.” We define few as a small number such as two or three; some as at least four or more; and most as the majority or nearly all. While the information we collected from the focus groups provided context on the issues discussed, it was not generalizable to the entire population of credit rating analysts. We conducted this performance audit from October 2014 through July 2015, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. As part of our analysis for this report, we identified three models for professional organizations, profiled existing organizations, and reviewed how they delivered services (see app. I). In the following sections, we discuss three models for professional organizations and how membership, funding, regulatory role, and organizational authority are structured, operated, or obtained in each model; profile six professional organizations, which are the State Bar of California, Financial Industry Regulatory Authority (FINRA), Municipal Securities Rulemaking Board (MSRB), American Institute of Certified Public Accountants (AICPA), CFA Institute, and Institute of Internal Auditors (IIA); and discuss how the six organizations deliver education and other services to members. The models we identified vary in key aspects—whether membership is mandatory or voluntary, individual members or employers register and pay dues, government has a direct oversight role or not, and what the sources of authority or legitimacy are for the organization (see fig. 3). 
Mandatory or voluntary membership. Under models I and II, all members of a profession must join in order to practice that profession. If a professional organization for credit rating analysts had mandatory membership, all credit rating analysts employed by NRSROs would be required to join. In model III, membership is voluntary and professionals who elect to join agree to meet any membership requirements, such as specific education or work experience. If a professional organization for credit rating analysts were voluntary, all credit rating analysts (including those analysts not employed by NRSROs) could join if they met the requirements. Individual or employer registration and funding of membership. Models I and III rely on individual members to register directly with the organization and pay dues. In model II, employers register the members and pay dues. For example, FINRA and MSRB use employer-based membership systems. Government role and source of authority. In models I and II, a regulatory or judicial body has a direct role in some aspects of the organization’s operations (for example, approving rules or overseeing supervisory actions). In these models, the organization receives its authority directly from government. In model III, government has no direct role; instead, the organization’s authority derives from public or industry recognition of the legitimacy of the organization’s activities. These profiles of the six organizations we reviewed further illustrate how the organizations function in practice and can provide insight about membership structures and requirements, funding sources, and any government oversight role. We obtained the information from publicly available documents (including financial reports and the organizations’ websites) and, where noted, from interviews with representatives of the organizations. We have not independently verified the information. 
The State Bar of California is charged with regulating the legal profession, formulating and elevating education and professional standards, raising the quality of legal services, advancing the science of jurisprudence, and aiding in the improvement of the administration of justice. Membership. Mandatory—the State Bar of California licenses attorneys to practice law in the State of California. As of April 2015, the State Bar reported it had approximately 253,000 licensees, of which more than 183,000 were active. Registration. To receive a license, individual attorneys must pass the California Bar Examination and pay their annual membership fees to the state bar. The 2015 membership fee was $430 for active members. Funding. The State Bar had $138 million in operating revenue in 2014, of which 60 percent came from membership fees and donations. Examination application fees, grants, and seminars provided additional funding. Government role. The State Bar is a public corporation within the judicial branch. The primary purpose of the State Bar is to serve as the administrative adjunct to the California Supreme Court in all matters pertaining to the admission, discipline, and regulation of California lawyers. Source of authority. The State Bar was created by the state legislature in 1927. FINRA is responsible for writing and enforcing rules governing the activities of broker-dealer firms and their registered individuals, including examining firms for compliance with those rules, fostering market transparency, and educating investors. Membership. Mandatory—all professionals associated with a broker-dealer (registered individuals) must register with FINRA. As of 2015, FINRA reported more than 637,000 registered individuals. Registration. The broker-dealer firm registers the individual employee with FINRA and pays a $100 initial registration fee. 
The individual applying for registration must also pass a qualifying test to ensure a minimum level of understanding of and expertise on financial markets, the securities industry, and regulatory structure. Firms also pay an annual fee of $175-$195 for each member. Funding. FINRA had total revenue in fiscal year 2014 of $970 million. Of this amount, 15 percent came from fees paid by broker-dealers based on the number of registered individuals employed by firms. The principal sources of revenue were regulatory activities and assessments, fines, contract services, and dispute resolution. Government role. The Securities and Exchange Commission (SEC) reviews and approves all of FINRA’s proposed rules and monitors and inspects FINRA’s regulatory activities. Source of authority. FINRA is a self-regulatory organization whose function as a national securities association was authorized under the Securities Exchange Act and approved by SEC in July 2007. MSRB is responsible for writing the rules that regulate the broker-dealers and municipal advisory firms that underwrite, sell, and trade municipal securities and provide municipal advisory services—with the goals of protecting investors and issuers and promoting a fair and efficient marketplace. MSRB also provides guidance to FINRA, SEC, and bank regulators in the oversight of compliance with MSRB rules. Membership. Mandatory—all professionals associated with municipal advisory firms must be registered with SEC and meet MSRB’s qualification requirements before engaging in any transaction. According to the MSRB, there were approximately 3,300 associated professionals in fiscal year 2014. Registration. Each municipal advisory firm pays a $300 annual membership fee for each associated professional and all the firm’s professionals must pass a qualification examination. Funding. 
MSRB had total revenue of $32 million in fiscal year 2014, less than 5 percent of which came from fees that municipal advisory firms paid for each associated professional. Principal sources of revenue include fees on transactions, underwriting, and technology. Government role. MSRB is required by statute to conduct rulemaking in certain areas and SEC reviews and provides final approval of MSRB rules. Source of authority. MSRB is a congressionally chartered, self-regulatory organization subject to oversight by SEC. Its rules have the force and effect of federal law. AICPA represents the certified public accountant profession in relation to rule-making and standard-setting. It serves as an advocate before legislative bodies, public interest groups, and professional organizations; sets ethical standards for the profession; sets U.S. auditing standards for private companies, nonprofit organizations, and federal, state, and local governments; develops and grades the Uniform Certified Public Accountant Examination; and monitors and enforces compliance with technical and ethical standards. Membership. Voluntary—individuals who meet AICPA’s qualifications can apply for membership. In 2014, AICPA reported it had approximately 365,000 voting members. Registration. As a voting member, the applicant must pass the Uniform Certified Public Accountant Examination; obtain 150 semester hours of education at an accredited college or university; agree to abide by AICPA’s bylaws and code of professional conduct; obtain a valid and unrevoked certified public accountant certificate; and, for applicants in public practice, enroll in the AICPA Peer Review Program. In addition, members must pay annual dues. The regular membership fee for 2014-2015 ranged from $235 to $425, depending on the individual’s role and industry. Funding. AICPA had operating revenue of $235 million in fiscal year 2014. Membership dues generated approximately 53 percent of revenue. 
Conferences, publications, examination activities, and other sources provided additional revenue. Government role. According to a representative from AICPA, federal and state officials have no direct role in the organization, but participate in certain AICPA committees. In addition, AICPA refers disciplined members to state boards of accountancy for follow-up. Source of authority. The organization was founded in 1887. According to an AICPA representative, the organization derives its authority from transparent processes that engage many stakeholders (including members, government, and industry) to help develop professional standards and programs. The CFA Institute promotes ethical standards, education, and professional excellence for investment professionals (including financial analysts, investment managers, and securities analysts). It develops and administers examinations, encourages research and other educational programs in investment decision making, develops and enforces standards of professional conduct and a code of ethics, and raises public awareness of the principles and practices of investment decision making. Membership. Voluntary—individuals who meet the Institute’s qualifications can apply for membership. In 2014, CFA Institute reported it had approximately 120,000 regular members. Registration. Regular members must hold a bachelor’s degree from an accredited institution or have equivalent education or work experience, pass Level I of the CFA examination, have 48 months of appropriate professional work experience in investment decision making, provide professional references, agree to the Member’s Agreement and Professional Conduct Statement, and pay annual dues. The dues were $275 in 2015 for regular members. Funding. CFA Institute had $241 million in operating revenue in fiscal year 2014. Membership dues, along with advertising, generated approximately 14 percent of revenue. 
Certification programs and conferences were the principal sources of revenue. Government role. According to CFA Institute representatives, regulators have no direct role in the organization. Source of authority. Financial analysts founded the Institute’s predecessor organization in 1947. According to CFA Institute representatives, the organization derives its authority from the recognition of the value of its education and certificate programs. IIA is the professional organization for internal auditors and provides professional education and development opportunities to its members, standards of practice, research on internal auditing, and advocacy and education about internal audit professionals and best practices in internal auditing. Membership. Voluntary—individuals who meet IIA’s qualifications can apply for membership. As of May 2015, IIA reported it had approximately 180,000 members, of which roughly 66,000 were based in the United States. Registration. Applicants agree to apply and uphold the code of ethics and applicable International Standards for the Professional Practice of Internal Auditing. Individual members pay annual dues that range from $125 to $265, depending on the member’s employer. Funding. In 2014, IIA and its related entities reported total revenue of approximately $50 million. IIA officials reported that membership dues generated approximately 27 percent of revenue. The certification process, seminars, and conferences generated additional funds. Government role. Government officials have no direct role in the organization. Source of authority. According to IIA representatives, the organization derives its authority from its membership’s oversight of the development and execution of internal processes to help ensure the quality and integrity of its activities and products, including its professional standards and code of conduct. 
The specific responsibilities of professional organizations may evolve over time, but their primary responsibilities generally encompass five areas:

1. educating members, which often includes promoting knowledge sharing among members and engaging in research related to the profession;
2. developing standards to govern the profession (including codes of ethical conduct);
3. overseeing member compliance with the standards;
4. registering or providing certification or examination for members in the profession, or both; and
5. engaging in public education, outreach, or (in some cases) advocacy about the profession.

According to representatives of the six organizations we reviewed, members’ priorities and needs, available funding, and other mandates help determine what services organizations provide and how they deliver the services. Education. The organizations offered a range of educational programs. All six offered training or educational opportunities through conferences, online or web-based courses, events, or lectures. For example, MSRB holds periodic education and training events to provide information on market topics, regulatory and compliance issues, and use of MSRB’s systems. Other organizations, such as CFA Institute, develop and administer a curriculum for their examination processes. Finally, as of May 2015, three organizations (AICPA, FINRA, and the State Bar of California) required members to receive a certain number of periodic continuing education hours to maintain membership. Standard development. All the organizations had or were establishing professional standards, in particular rules or requirements for ethical conduct. According to representatives of professional organizations, the organizations need to consider the extent to which they utilize principle-based standards—broad guidance about the behavior expected of members—rather than rule-based guidance that details specific requirements or prohibitions. 
According to one representative, the extent of the organization’s resources and information about members’ activities helps determine the type of standards it might adopt. For instance, members tend to ask for more detailed rules and guidance if they must undergo rigorous enforcement or inspections for their certifications or jobs. Principle-based standards may be more appropriate when organizations have limited resources and access to information on members. Finally, representatives of one organization noted that organizations also need to educate members about standards to help members understand the standards and implement them in challenging situations. Overseeing standards. Approaches for overseeing member compliance with professional standards varied depending on the authority provided to organizations. Organizations with mandatory membership that derived their authority from regulators or legislation tended to have robust examination and inspection processes. For instance, the State Bar of California reviews formal allegations of misconduct and determines any disciplinary actions. FINRA and MSRB engage in formal supervisory review of members. Organizations with voluntary membership tend to rely on voluntary compliance with the standards, such as periodic reaffirmation of compliance and voluntary reporting. Two voluntary organizations (AICPA and CFA Institute) have an additional investigative process that can be used when the organization receives specific complaints or regulators issue disciplinary actions against members. According to representatives of some voluntary organizations, staff or members can investigate alleged violations of standards and may recommend disciplinary actions or refer the case to government authorities. One representative of an organization noted that voluntary organizations could have trouble accessing information for investigations that members or their employers considered proprietary or confidential. 
Registering or certifying members. All six organizations have certain requirements for members—based on years or type of education or professional experience, agreement to adhere to standards or rules of professional conduct, or payment of annual dues. In addition, all the organizations offer a required or voluntary examination. For example, CFA Institute, FINRA, MSRB, and the State Bar of California require members to pass an examination to become a member. Other professional organizations, including IIA, offer members the opportunity to pass an examination to receive a voluntary credential. One organization, AICPA, did not register public accountants or certify them in the profession; however, it develops and administers the Uniform Certified Public Accountant Examination that most states require for licensing.

Public education, outreach, or advocacy. The organizations have used differing approaches to communicating information to the public about their industries or professions. All six have public websites, make speakers available, or produce information to increase public awareness of their industries. AICPA, CFA Institute, and IIA also engage in advocacy to educate and influence key stakeholders, such as federal and state officials, about policies and issues that affect the profession. For example, CFA Institute publishes policy briefs and analysis on issues affecting investors, including corporate governance and market structure.

In addition to the contact named above, Angela Nicole Clowers (Director), Debra Johnson (Assistant Director), Michelle Bowsky (analyst-in-charge), William R. Chatlos, Farrah Graham, Patricia MacWilliams, Patricia Moye, Barbara Roesmann, Bridgette Savino, and Jena Sinkfield made key contributions to this report. JoAnna Berry, James Dalkin, Joseph O’Neill, and Frank Todisco also contributed to this report.
The 2007–2009 financial crisis renewed concerns about the integrity of the credit rating industry. The Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) imposed new requirements on NRSROs and required SEC to implement regulations for training, experience, and competence of credit rating analysts. The Dodd-Frank Act also included a provision for GAO to conduct a study on the merits and feasibility of creating a professional organization for rating analysts employed by NRSROs. This report describes views on (1) the potential merits of and need for a professional organization for credit rating analysts, and (2) any challenges associated with creating and operating such an organization. For this report, GAO reviewed SEC documentation and academic literature; held focus groups with approximately 100 credit rating analysts from different-sized firms who had a range of experience and skills; and interviewed SEC staff, representatives from all 10 NRSROs, and experts and stakeholders (including academics, investors, advocacy groups, and international regulators). GAO also analyzed the structure and activities of six professional organizations that develop and oversee professional standards and a code of conduct, and interviewed representatives of the organizations.

Views varied on the merits of a professional organization for credit rating analysts of nationally recognized statistical rating organizations (NRSRO), but some concluded it was too early to tell if one was needed, in part because of new Securities and Exchange Commission (SEC) requirements on NRSROs to establish standards for their analysts.
The analysts, representatives of NRSROs and existing professional organizations, and experts and stakeholders (including academics, investors, advocacy groups, and international regulators) with whom GAO spoke said the merits of such an organization included improving the industry's reputation, enhancing the quality of work done by the professionals, and supplementing existing oversight. However, some said creating such an organization could duplicate existing standards, codes of conduct, or the services provided by other professional organizations. Some said that establishing a professional organization without evaluating the effectiveness of SEC's new regulations (which became effective in June 2015) would be premature. These rules require each NRSRO to establish training, experience, and competence standards to ensure analysts produce accurate ratings and to periodically test analysts' knowledge of the NRSRO's procedures and methodologies. Thus, some held the view that it was too early to determine in what areas a professional organization might add value—that is, add to or complement (rather than duplicate) standards, codes of conduct, training, or oversight—or if one was needed at all.

Creating and operating a professional organization for NRSRO credit rating analysts would not be without certain challenges. According to most analysts and representatives of NRSROs and some experts and stakeholders, the challenges primarily would relate to achieving the following aims:

Clearly delineated purpose. Delineating the mission or purposes of an organization would be difficult at the present time because the effects of the new SEC regulations were unknown.

Adequate funding. Obtaining sufficient funding through membership fees also might be difficult because of the relatively small population of analysts (about 4,500 as of 2014) to provide the fees.

Balanced representation.
Creating an organizational structure that would provide equitable representation for all members, including from smaller NRSROs, could be challenging because of industry concentration (88 percent of analysts work for 3 of the 10 NRSROs).

Meaningful activities. Developing core activities and services, including professional standards, education and training curricula, certification tests, and structures to oversee member compliance, could be challenging because of differences in NRSRO methodologies, concerns about sharing confidential information, and analyst specialization in specific rating classes (such as insurance or asset-backed securities).
In September 2005, NASA outlined an initial framework for implementing the President’s Vision for Space Exploration in its Exploration Systems Architecture Study. NASA is now implementing the recommendations from this study within the Constellation Program, which includes three major development projects—the Ares I Crew Launch Vehicle, the Orion Crew Exploration Vehicle, and the Ares V Cargo Launch Vehicle as shown in figure 1. To reduce cost and minimize risk in developing these projects, NASA planned to maximize the use of heritage systems and technology. Since 2005, however, NASA has made changes to the basic architecture for the Ares I and Orion designs that have resulted in the diminished use of heritage systems, both because alternate technology offered greater cost savings and because some heritage technology could not be recreated. For example, the initial design was predicated on using the main engines and the solid rocket boosters from the Space Shuttle Program. However, NASA is no longer using the Space Shuttle Main Engines because greater long-term cost savings are anticipated through the use of the J-2X engine. In another example, NASA increased the number of segments on the Ares I first-stage reusable solid rocket booster from four to five to increase commonality between the Ares I and Ares V and eliminate the need to develop, modify, and certify both a four-segment reusable solid rocket booster and an expendable Space Shuttle main engine for the Ares I. Finally, according to the Orion program executive, the Orion project originally intended to use the heat shield from the Apollo program as a fallback technology for the Orion thermal protection system, but was unable to recreate the Apollo material. NASA has authorized the Ares I and Orion projects to proceed with awarding development contracts.
In April 2006, NASA awarded a $1.8 billion contract for design, development, test, and evaluation of the Ares I first stage to Alliant Techsystems. NASA also awarded a $1.2 billion contract for design, development, test, and evaluation of the Ares I upper stage engine—the J-2X—to Pratt and Whitney Rocketdyne in June 2006. NASA is developing the upper stage and the upper stage instrument unit, which contains the control systems and avionics for the Ares I, in-house. However, NASA awarded a $514.7 million contract for design support and production of the Ares I upper stage to the Boeing Company in August 2007. In August 2006, NASA awarded Lockheed Martin a $3.9 billion contract to design, test, and build the Orion crew exploration vehicle. According to NASA, the contract was modified in April 2007, namely by adding 2 years to the design phase and two test flights of Orion's launch abort system and by deleting the production of a cargo variant for the International Space Station. NASA indicates that these changes increased the contract value to $4.3 billion. Federal procurement data shows that an additional modification has been signed that increased the value of the contract by an additional $59 million. NASA has completed or is in the process of completing key reviews on both the Ares I and Orion projects. NASA has completed the system requirements review for each project and is in the midst of finalizing the system definition reviews. At the system requirements review, NASA establishes a requirements baseline that serves as the basis for ongoing design analysis work and systems testing. System definition reviews focus on emerging designs for all transportation elements and compare the predicted performance of each element against the currently baselined requirements. Figure 2 shows the timeline for Ares I and Orion critical reviews.
NASA is using its Web-based Integrated Risk Management Application to help monitor and mitigate the risks with the Ares I and Orion development efforts and for the overall Constellation Program. The risk management application identifies and documents risks, categorizes risks—as high, medium, and low based on both the likelihood of an undesirable event as well as the consequences of that event to the project—and tracks performance against mitigation plans. For the Ares I project, the application is tracking 101 risks, 36 of which are considered high-risk areas. For the Orion project, NASA is tracking 193 risks, including 71 high-risk areas. NASA is developing and implementing plans to mitigate some of these risks. Although project-level requirements were baselined at both system requirements reviews, continued uncertainty about the systems’ requirements has led to considerable unknowns as to whether NASA’s plans for the Ares I and Orion vehicles can be executed within schedule goals, as well as what these efforts will ultimately cost. Such uncertainty has created knowledge gaps that are affecting many aspects of both projects. Because the Orion vehicle is the payload that the Ares I must deliver to orbit, changes in the Orion design, especially those that affect weight, directly affect Ares I lift requirements. Likewise, the lift capacity of the Ares I drives the Orion design. Both the Orion and Ares I vehicles have a history of weight and mass growth, and NASA is still defining the mass, loads, and weight requirements for both vehicles. According to agency officials, continuing weight growth led NASA to rebaseline the Orion vehicle design in fall 2007. This process involved “scrubbing” the Orion vehicle to establish a zero-based design capable of meeting minimal mission requirements but not safe for human flight.
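The likelihood-and-consequence scoring described above can be sketched as a simple function. The 1-5 scales and the category thresholds below are illustrative assumptions, not the actual values used in NASA's Integrated Risk Management Application.

```python
# Illustrative sketch of likelihood/consequence risk scoring.
# The 1-5 scales and thresholds are assumptions, not NASA's actual values.

def categorize_risk(likelihood: int, consequence: int) -> str:
    """Return "high", "medium", or "low" for 1-5 likelihood and consequence."""
    if not (1 <= likelihood <= 5 and 1 <= consequence <= 5):
        raise ValueError("likelihood and consequence must be between 1 and 5")
    score = likelihood * consequence
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# Tally a hypothetical risk register by category, the way a tracking
# application might summarize a project's risk posture.
register = [(5, 4), (3, 3), (2, 2), (1, 5)]
counts = {"high": 0, "medium": 0, "low": 0}
for likelihood, consequence in register:
    counts[categorize_risk(likelihood, consequence)] += 1
```

A real tool would also carry mitigation plans and owners per risk; the point here is only that the high/medium/low categories fall out of two simple scores.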
Beginning with the zero-based design, NASA first added back the systems necessary to ensure crew safety and then conducted a series of engineering trade-offs to determine what other systems should be included to maximize the probability of mission success while minimizing the system’s weight. As a result of these trade-offs, NASA modified the requirement for nominal landing on land to nominal landing in water, thereby gaining 1,500 pounds of trade space in the Orion design. NASA recognizes that continued weight growth and requirements instability are key risks facing the Orion project and that continued instability in the Orion design is a risk facing the Ares I project. The Ares I and Orion projects are working on these issues but have not yet finalized requirements or design. Our previous work on systems acquisition shows that the preliminary design phase is an appropriate place to conduct systems engineering to support requirement and resource trade-off decisions. For the Ares I project, this phase is scheduled to be completed in August 2008, whereas for the Orion project, it is September 2008—leaving NASA only 4 and 5 months, respectively, to close gaps in requirements knowledge. NASA will be challenged to close such gaps, given that it is still defining requirements at a relatively high level and much work remains to be done at the lower levels. Moreover, given the complexity of the Orion and Ares I efforts and their interdependencies, as long as requirements are in flux, it will be extremely difficult to establish firm cost estimates and schedule baselines. Currently, nearly every major segment of Ares I and Orion faces knowledge gaps in the development of required hardware and technology, and many are being affected by uncertainty in requirements. For example, computer modeling is showing that thrust oscillation within the first stage of the Ares I could cause excessive vibration throughout the Ares I and Orion.
Resolving this issue could require redesigns to both the Ares I and Orion vehicles that could ultimately impact cost, schedule, and performance. Furthermore, the addition of a fifth segment to the Ares I first stage has the potential to impact qualification efforts for the first stage and could result in costly requalification and redesign efforts. Additionally, the J-2X engine represents a new engine development effort that both NASA and Pratt and Whitney Rocketdyne recognize is likely to experience failures during development. Addressing these failures is likely to lead to design changes that could impact the project's cost and schedule. With regard to the Orion project, there is currently no industry capability for producing a thermal protection system of the size required by the Orion. NASA has yet to develop a solution for this gap, and given the size of the vehicle and the tight development schedule, a feasible thermal protection system may not be available for initial operational capability to the space station. Table 1 describes these and other examples of knowledge gaps in the development of the Ares I and Orion vehicles. NASA’s preliminary cost estimates for the Constellation Program are likely to change when requirements are better defined. NASA will establish a preliminary estimate of life cycle costs for the Ares I and Orion in support of each project’s system definition review. A formal baseline of cost, however, is not expected until the projects’ preliminary design reviews are completed. NASA is working under a self-imposed deadline to deliver the new launch vehicles no later than 2015 in order to minimize the gap in human spaceflight between the Space Shuttle’s retirement in 2010 and the availability of new transportation vehicles.
The Constellation Program’s budget request maintains a confidence level of 65 percent (i.e., NASA is 65 percent certain that the actual cost of the program will either meet or be less than the estimate) for program estimates based upon a 2015 initial operational capability. Internally, however, the Ares I and Orion projects are working toward an earlier initial operational capability (2013), but at a reduced budget confidence level—33 percent. However, NASA cannot reliably estimate the money needed to complete technology development, design, and production for the Ares I and Orion projects until requirements are fully understood. NASA has identified the potential for a life cycle cost increase as a risk for the Orion program. According to NASA’s risk database, given the historical cost overruns of past NASA systems and the known level of uncertainty in the current Orion requirements, there is a possibility that Orion's life cycle cost estimate may increase over time. NASA acknowledges that such increases are often caused by the unknown impacts of decisions made during development. One factor currently contributing to cost increases is the addition of new requirements. NASA is working to formulate the best life cycle cost estimate possible during development, is identifying and monitoring cost threats, and is implementing management tools, all aimed at addressing this risk. There are considerable schedule pressures facing both the Ares I and Orion projects. These are largely rooted in NASA’s desire to minimize the gap between the retirement of the space shuttle and the availability of the new vehicles. Because of this scheduling goal, NASA is planning to conduct many interdependent development activities concurrently—meaning that if one activity should slip in schedule, it could have cascading effects on other activities. Moreover, some aspects of the program are already experiencing scheduling delays due to the fact that high-level requirements are still being defined.
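The confidence-level concept above can be illustrated with a small sketch: given simulated cost outcomes (for example, from a Monte Carlo cost-risk analysis), the confidence of a budget estimate is the fraction of outcomes at or below it. The sample costs here are hypothetical, not NASA figures, and the functions are an assumption about how such a calculation might be structured.

```python
import math

# Illustrative sketch of budget confidence levels; the cost samples are
# hypothetical, not actual Constellation Program figures.

def confidence_level(cost_samples, estimate):
    """Fraction of simulated cost outcomes at or below the budget estimate."""
    return sum(1 for cost in cost_samples if cost <= estimate) / len(cost_samples)

def estimate_for_confidence(cost_samples, target):
    """Smallest sampled cost that covers the target fraction of outcomes."""
    ordered = sorted(cost_samples)
    # Small tolerance guards against floating-point round-up in the product.
    k = math.ceil(target * len(ordered) - 1e-9)  # need k outcomes at or below
    return ordered[k - 1]

# 100 hypothetical cost outcomes, in billions of dollars.
samples = [cost / 10 for cost in range(10, 110)]  # 1.0, 1.1, ..., 10.9
```

Under this framing, a 65 percent confidence budget is simply the 65th percentile of the simulated cost distribution; funding at a 33 percent level means two of every three simulated outcomes exceed the budget.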
The development schedule for the J-2X is aggressive, allowing less than 7 years from development start to first flight, and highly concurrent. Due to the tight schedule and long-lead nature of engine development, the J-2X project was required to start earlier in its development than the other elements on the Ares I vehicle. This approach has introduced a high degree of concurrency between the setting of overall Ares I requirements and the development of the J-2X design and hardware. Consequently, the engine development is out of sync with the first stage and upper stage in the flow-down and decomposition of requirements, an approach our past work has shown to be fraught with risk. NASA acknowledges that the engine development is proceeding with an accepted risk that future requirements changes may affect the engine design and that the engine may not complete development as scheduled in December 2012. The J-2X development effort represents a critical path for the Ares I project. As a result, delays in the J-2X schedule for design, development, test, and evaluation would have a ripple effect of cost and schedule impacts throughout the entire Ares I project. The schedule for the first stage also presents a potential issue for the entire Ares I project. Specifically, the critical design review for the first stage is out of sync with the Ares I project-level critical design review. NASA has scheduled two critical design reviews for the first stage. The first critical design review is scheduled for November 2009, 5 months before the Ares I project critical design review. At this point, however, the project will not have fully tested the first stage development motors. The second critical design review, in December 2010, occurs after additional testing of developmental motors is conducted.
By conducting the Ares I critical design review before the first stage critical design review, the project could prematurely begin full-scale test and integration activities a full 9 months before the first stage design has demonstrated maturity. If problems are found in the first stage design during the later testing, implementing solutions could result in costly rework and redesign and delay the overall project schedule. Cost and schedule reporting on the Orion project indicates that the Orion project’s efforts to mature requirements and design and to resolve weight issues are placing pressure on the Orion schedule. Specifically, activities aimed at assessing alternate designs to reduce overall vehicle mass, rework to tooling concepts, and late requirements definition have contributed to the project falling behind schedule. Further, the Orion risk system indicates that schedule delays associated with testing may occur. The current Orion design has high predicted vibration and acoustic levels. Historically, components designed and qualified for uncertain vibration and acoustic environments have resulted in some failures and required subsequent redesign and retest. Failures during qualification testing of Orion components may lead to schedule delays associated with redesigning components. NASA’s Administrator has publicly stated that if Congress provided the agency an additional $2 billion, NASA could accelerate the Constellation Program’s initial operational capability date to 2013. We believe that this assessment is highly optimistic. The development schedule for the J-2X engine, the critical path for the Ares I development, is already recognized as aggressive, allowing less than 7 years for development. The development of the Space Shuttle Main Engine, by comparison, took 9 years. Further, NASA anticipates that the J-2X engine is likely to require 29 rework cycles to correct problems identified during testing.
Given the linear nature of a traditional test-analyze-fix-test cycle, even large funding increases offer no guarantee of program acceleration, particularly when the current schedule is already compressed and existing NASA test facilities are already operating at capacity. According to NASA, at this time, existing test facilities are insufficient to adequately test the Ares I and Orion systems. Existing altitude test facilities are insufficient to test the J-2X engine in a relevant environment. To address this issue, NASA is in the process of constructing a new altitude test facility at Stennis Space Center for the J-2X. Also, current facilities are inadequate to replicate the Orion vibration and acoustic environment. Further, Pratt and Whitney Rocketdyne—the J-2X upper stage engine contractor—indicated that existing test stands that could support J-2X testing will be tied up supporting the Space Shuttle program until 2010. NASA has taken steps to mitigate J-2X risks by increasing the amount of component-level testing, procuring additional development hardware and test facilities, and working to make a third test stand available to the contractor earlier than originally planned. NASA has compensated for this schedule pressure on the Ares I project by adding funds for testing and other critical activities. But it is not certain that added resources will enable NASA to deliver the Ares I when expected. With respect to Orion’s thermal protection system, facilities available from the Apollo era for testing large-scale heat shields no longer exist. Therefore, NASA must rely on two facilities that fall short in providing the necessary capability and scheduling to test ablative materials needed for Orion. Additionally, NASA has no scheduled test to demonstrate the thermal protection system needed for lunar missions. NASA is exploring other options, including adding a lunar return flight test and building a new improved test facility.
Due to the scheduled first lunar flight, any issues identified during such testing would need to be addressed in the time between the flight test and the first flight. NASA is poised to invest a significant amount of resources to implement the Vision over the long term and specifically to develop the Ares I and Orion projects over the next several years. Accordingly, you asked us to articulate indicators that Congress could use to assess progress. Our prior work has shown that investment decisions of this magnitude need to be based on an established and executable business case and that there are several key indicators that Congress could be informed of to assess progress throughout development. These include areas commonly underestimated in space programs, such as weight growth and software complexity, as well as indicators used by best practice organizations to assess readiness to move forward in the development cycle. Space programs that we have studied in detail in the past have tended to underestimate cost in some of these areas. Our previous work on government-funded space systems has shown that weight growth is often not anticipated even though it is among the highest drivers of cost growth for space systems. Weight growth can affect the hardware needed to support a system and, in the case of launch vehicles, the power or thrust required for the system. As the weight of a particular system increases, the power or thrust required for that system will also increase. This could result in the need to develop additional power or thrust capability to lift the system, leading to additional costs, or to stripping down the vehicle to accommodate current power or thrust capability. For example, NASA went through the process to zero-base the design for the Orion to address weight concerns.
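Monitoring weight growth as an oversight indicator can be sketched as a simple margin check against lift capacity. The masses and the 10 percent reserve threshold below are assumptions for illustration, not actual Orion or Ares I figures.

```python
# Illustrative sketch of a weight-growth indicator. All numbers and the
# 10 percent reserve threshold are assumptions, not actual program data.

def weight_margin(current_mass_kg: float, lift_capacity_kg: float) -> float:
    """Remaining mass margin as a fraction of the launch vehicle's lift capacity."""
    return (lift_capacity_kg - current_mass_kg) / lift_capacity_kg

def margin_status(margin: float, reserve: float = 0.10) -> str:
    """Flag when weight growth has eroded the reserve held for future growth."""
    if margin < 0:
        return "over capacity"     # strip weight or add thrust
    if margin < reserve:
        return "reserve breached"  # expect cost or design impacts
    return "within reserve"
```

Tracked over time, the trend in this margin (rather than any single value) is what signals whether cost increases should be anticipated.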
Continual monitoring of system weight and required power/thrust, as well as margins or reserves for additional growth, can provide decision makers with an indicator of whether cost increases can be anticipated. The complexity of software development on a system, often denoted by the number of lines of code on a system, can also be used as an indicator to monitor whether a program will meet cost and schedule goals. In our work on software development best practices, we have reported that the Department of Defense has attributed significant cost and schedule overruns on software-intensive systems to problems in developing and delivering software. Generally, the greater the number of lines of code, the more complicated the system development. Changes to the amount of code needed to be produced can indicate potential cost and schedule problems. Decision makers can monitor this indicator by continually asking for information on the estimated amount of code needed on a system and inquiring about any increases in need and their impact on cost and schedule. There are other areas, such as the use of heritage systems and industrial base capability, that are commonly underestimated in space programs as well. However, weight increases and software growth are more quantifiable and thus useful for oversight purposes.

Indicators That Can Be Used to Assess Knowledge Gaps at Key Junctures

Additionally, since the mid-1990s, GAO has studied the best practices of leading commercial companies. On the basis of this information, and taking into account the differences between commercial product development and major federal acquisitions, we have outlined a best practices product development model—known as a knowledge-based approach to system development. This type of approach calls for investment decisions to be made on the basis of specific, measurable levels of knowledge at critical junctures before investing more money and proceeding with development.
Importantly, our work has shown the most leveraged decision point is matching the customer’s needs with the developer’s resources (time, dollars, technology, people, etc.) because it sets the stage for the eventual outcome—desirable or problematic. The match is ultimately achieved in every development program, but in successful development programs, it occurs before product development is formally initiated (usually at the preliminary design review). If the knowledge attained at this and other critical junctures does not confirm the business case on which the acquisition was originally justified, the best practice organizations we have studied do not allow the program to go forward. We have highlighted the three critical junctures at which developers must have knowledge to make large investment decisions—the preliminary design review, the critical design review, and the production review—and the numerous key indicators that can be used to increase the chances of successful outcomes. In assessing the Orion and Ares programs, the Congress and NASA decision-makers can use these indicators to reliably gauge whether there is a sufficient business case for allowing the programs to proceed forward.

Preliminary design review: Before product development is started, a match must be made between the customers’ needs and the available resources—technical and engineering knowledge, time, and funding. To provide oversight at this juncture, NASA could provide Congress with information to verify that the following indicators have been met:

- All critical technologies are demonstrated to a high level of technology maturity, that is, demonstrated that they can perform in a realistic or, more preferably, operational environment. A technology readiness level of 6 or 7 would indicate that this has been achieved. One approach to ensure that technology readiness is reliably assessed is to use independent testing;
- Project requirements are defined and informed by the systems engineering process;
- Cost and schedule estimates established for the project are based on knowledge from the preliminary design using systems engineering tools;
- Additional resources are in place, including needed workforce; and
- A decision review is conducted following completion of the preliminary design review.

A critical enabler for success in this phase of development is performance and requirements flexibility. Customers and product developers both need to be open to reducing expectations, deferring them to future programs, or investing more resources up front to eliminate gaps between resources and expectations. In successful programs we have studied, requirements were flexible until a decision was made to commit to product development because both customers and developers wanted to limit cycle time. This made it acceptable to reduce, eliminate, or defer some customer wants so that the product’s requirements could be matched with the resources available to deliver the product within the desired cycle time.

Critical design review: A product’s design must demonstrate its ability to meet performance requirements and be stable about midway through development.
To provide oversight at this juncture, NASA could provide Congress with information to verify that the following indicators have been met:

- At least 90 percent of engineering drawings are complete;
- All subsystem and system design reviews have been completed;
- The design meets requirements, demonstrated through modeling, simulation, or prototypes;
- Stakeholders concur that drawings are complete and producible;
- Failure modes and effects analysis has been completed;
- Key system characteristics are identified;
- Critical manufacturing processes are identified;
- Reliability targets are established and a growth plan based on demonstrated reliability rates of components and subsystems is developed; and
- A decision review is conducted following the completion of the critical design review.

Production review: The developer must show that the product can be manufactured within cost, schedule, and quality targets and is demonstrated to be reliable before production begins. To provide oversight at this juncture, NASA could provide Congress with information to verify that the following indicators have been met:

- Manufacturing processes have been demonstrated;
- Production representative prototypes have been built;
- Production representative prototypes have been tested;
- Production representative prototypes have been demonstrated in an operational environment through testing;
- Statistical process control data have been collected;
- Critical processes have been demonstrated to be capable and in statistical control; and
- A decision review is conducted following completion of the production readiness review.

Over the past 2 years, we have recommended that NASA incorporate a knowledge-based approach in its policies and take steps to implement this type of approach in its programs and projects. NASA has incorporated some knowledge-based concepts into its acquisition policies.
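The gating logic implied by these reviews can be sketched as a simple check: a program proceeds past a review only when every required indicator is satisfied. The indicator names below are paraphrased from the critical design review criteria, and the pass/fail gate itself is an illustrative assumption, not GAO's or NASA's actual review mechanics.

```python
# Illustrative sketch of a knowledge-based decision gate. Indicator names are
# paraphrased from the review criteria; the gate logic is an assumption.

CDR_INDICATORS = [
    "at least 90 percent of engineering drawings complete",
    "all subsystem and system design reviews completed",
    "design meets requirements via modeling, simulation, or prototypes",
]

def gate_decision(status):
    """Return (proceed, unmet): proceed only if every indicator is satisfied."""
    unmet = [name for name in CDR_INDICATORS if not status.get(name, False)]
    return (not unmet, unmet)
```

The value of framing a review this way is that the unmet list, not just the pass/fail result, is what decision makers would use to decide whether to add resources, defer requirements, or hold the program at the gate.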
For example, NASA now requires a decision review between each major phase of the acquisition life cycle and has established general entrance and success criteria for the decision reviews. In addition, we have reported that this type of approach is being embraced by the Ares I project. In conclusion, the President’s Vision for Space Exploration is an ambitious effort, not just because there will be technical and design challenges to building the systems needed to achieve the Vision’s goals, but because there are limited resources within which this can be accomplished. Moreover, the long-term nature of the Vision means that commitments for funding and to the goals of the Vision will need to be sustained across presidential administrations and changes in congressional leadership. For these reasons, it is exceedingly important that the right decisions are made early on and that decision-makers have the right knowledge going forward so that they can make informed investment decisions. In looking at the first major investments, the Ares I and Orion projects, it is important to recognize that they are risky endeavors, largely due to their complexity, scope, and interdependencies. It is also important to recognize that the desire to minimize the gap in human space flight adds considerable risk, since it could limit NASA's ability to study emerging problems and pursue alternative ways of addressing them. For these reasons, as well as the magnitude of investment at stake, it is imperative that NASA be realistic and open about the progress it is making and be willing to make changes to the architecture and design if technical problems cannot be solved without overly compromising performance. Additionally, Congress needs to be well-informed about the extent to which knowledge gaps remain and what tradeoffs or additional resources are needed to close those gaps, and to support changes if they are determined to be necessary.
The upcoming preliminary design review milestones represent perhaps the most critical juncture where these assessments can take place and where hard decisions can be made as to whether the programs should proceed forward. It may well be the last opportunity to make significant adjustments before billions of dollars are spent and long-term commitments become solidified. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions that you may have at this time. For further questions about this statement, please contact Cristina T. Chaplain at (202) 512-4841. Individuals making key contributions to this statement include James L. Morrison, Meredith A. Kimmitt, Lily Chin, Neil Feldman, Rachel Girshick, Shelby S. Oakley, and John S. Warren, Jr. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The National Aeronautics and Space Administration (NASA) is in the midst of two new development efforts as part of the Constellation Program--the Ares I Crew Launch Vehicle and the Orion Crew Exploration Vehicle. These projects are critical to the success of the overall program, which will return humans to spaceflight after Space Shuttle retirement in 2010. To reduce the gap in human spaceflight, NASA plans to launch Ares I and Orion in 2015--5 years after the Shuttle's retirement. GAO has issued a number of reports and testimonies that touch on various aspects of NASA's Constellation Program, particularly the development efforts underway for the Orion and Ares I projects. These reports and testimonies have questioned the affordability and overall acquisition strategy for each project. NASA has revised the Orion acquisition strategy and delayed the Ares I preliminary design review based on GAO's recommendations in these reports. In addition, GAO continues to monitor these projects on an ongoing basis at the request of members of Congress. Based on this work, GAO was asked to testify on the types of challenges that NASA faces in developing the Ares I and Orion vehicles and identify the key indicators that decision makers could use to assess risks associated with common trouble spots in development. The information in this testimony is based on work completed in accordance with generally accepted government auditing standards. NASA is currently working toward preliminary design reviews for the Ares I and Orion vehicles. While this is a phase for discovery and risk reduction, there are considerable unknowns as to whether NASA's plans for these vehicles can be executed within schedule goals and what these efforts will ultimately cost. This is primarily because NASA is still in the process of defining many performance requirements. Such uncertainties could affect the mass, loads, and weight requirements for the vehicles. 
NASA is aiming to complete this process in 2008, but it will be challenged to do so given the level of knowledge that still needs to be attained. The challenges NASA is facing pose risks to the successful outcome of the projects. For example: both vehicles have a history of weight issues; excessive vibration during launch threatens system design; uncertainty remains about how flight characteristics will be affected by the fifth segment added to the Ares I launch vehicle; the Ares I upper stage essentially requires development of a new engine; no industry capability currently exists for producing the kind of heat shields that the Orion will need for protecting the crew exploration vehicle when it reenters Earth's atmosphere; and existing test facilities are insufficient for testing Ares I's new engine, for replicating the engine's vibration and acoustic environment, and for testing the thermal protection system for the Orion vehicle. All these unknowns, as well as others, leave NASA in the position of being unable to provide firm cost estimates for the projects at this point. Meanwhile, tight deadlines are putting additional pressure on both the Ares I and Orion projects. Future requirements changes raise risks that both projects could experience cost and schedule problems. GAO's past work on space systems acquisition and the practices of leading developers identifies best practices that can provide decision makers with insight into the progress of development at key junctures, facilitate Congressional oversight, and support informed decision making. This work has also identified common red flags throughout development, which decision makers need to keep in mind when assessing the projects. They include: Key indicators: Weight growth is often among the highest drivers of cost growth. Unanticipated software complexity, often indicated by increases in the number of lines of code, can portend cost and schedule growth.
Key junctures: The preliminary design review, critical design review, and production review are key junctures that involve numerous steps and help focus the agency on realistic accomplishments within reachable goals. A disciplined approach aligned with key indicators can provide the knowledge needed to make informed investment decisions at each review.
Forms of business organization and their tax treatment:

Partnership: generally an unincorporated entity with two or more members that conducts a business; it does not pay income taxes but rather passes income or losses through to its partners, who must include that income or loss on their income tax returns.

C Corporation: a corporation that is generally taxed at the entity level under subchapter C of the Internal Revenue Code (IRC).

S Corporation: a corporation that meets certain requirements and elects to be taxed under subchapter S of the IRC, which provides that, in general, income and losses be passed through to its shareholders.

For tax purposes, a partnership is generally an unincorporated organization with two or more members that conducts a business and divides profits. Partnerships generally report their income on Form 1065, U.S. Return of Partnership Income. Partnerships usually do not pay income taxes but pass—or allocate—the net income or losses to partners, who pay any applicable taxes. Partnerships report the share of income or losses accruing to each partner on a Schedule K-1, with copies going to the partners and to IRS. Partners can be individuals or other entities such as corporations or other partnerships. Because there is no statutory, IRS, or industry-accepted definition of a large partnership, we have defined a large partnership in two ways: (1) as having 100 or more direct and indirect partners and $100 million or more in assets, and (2) as having 100 or more direct partners and $100 million or more in assets. Including just direct partners does not capture the entire size and complexity of large partnership structures. Accounting for indirect partners does, but it also raises the issue of counting income and assets more than once (described below). In this report, we generally use the definition that includes direct and indirect partners but sometimes use both definitions when the distinction might matter.
Partnerships can be structured as tiers of pass-through entities, creating direct and indirect partners. Table 1 defines key terms for a partnership structure. See figure 1 for an example of a simple tiered partnership structure. For a partnership structure with multiple partners linked in networks with other partnerships, the partners, assets, and income may be counted more than once, as seen in figure 2. Large partnership audits typically involve two separate steps. One step is the field audit, which is a detailed examination of the partnership's tax return (Form 1065) and supporting books and records to determine whether income and losses are properly reported. The field audit may recommend adjustments to the income and losses. The other step is called a campus audit, which links the tax returns of partnerships to the tax returns of their direct and indirect partners. Adjustments to income or losses from the field audit may be passed through to the taxable partners responsible for paying any additional tax, based on the partners' shares in the partnership. Although IRS counts campus audits as audits, they usually do not involve an examination of a taxpayer's books and records. In response to concerns about IRS's ability to audit partnership returns, Congress enacted specific rules regarding partnership audits in the Tax Equity and Fiscal Responsibility Act of 1982 (TEFRA). TEFRA audit procedures were intended to streamline IRS's partnership audit process while ensuring the rights of all partners. Before TEFRA, IRS audited partners separately, leading to inconsistent treatment and making it hard to detect tax shelters. According to the congressional Joint Committee on Taxation (JCT), the complexity and fragmentation of the audits—especially for large partnerships with partners in many locations—led to some audits of partners' returns ending at varying times and some partners paying additional taxes while others did not. See table 2 for key features of TEFRA. Ideally, once a TEFRA audit begins, it would proceed as outlined in figure 3.
According to the U.S. Department of the Treasury (Treasury) and IRS, applying TEFRA to audits of large partnerships became an intensive and inefficient use of limited IRS resources as IRS spent more and more time on administrative tasks. As a result, Congress established the Electing Large Partnership (ELP) procedures as part of the Taxpayer Relief Act of 1997. In general, the procedures apply to partnerships with 100 or more direct partners in a taxable year that elect this alternative reporting and audit framework. The ELP audit procedures differ from the TEFRA procedures in two key ways: partnerships (1) may pay tax on audit adjustments instead of the partners, and (2) must report fewer items to the partners. As we have previously reported, IRS's appropriations declined by $855 million, or 7 percent, and IRS staffing declined by more than 10,000 full-time equivalents, or 11 percent, between fiscal years 2010 and 2014. Most of this staffing decline occurred in IRS enforcement, which is responsible for ensuring that tax returns, including partnership returns, comply with the tax laws. During tax years 2002 through 2011, the number of partnerships and S corporations of all sizes increased 47 percent and 32 percent, respectively, while the number of C corporations decreased 22 percent. See figure 4. During tax years 2002 through 2011, the number of large partnerships with 100 or more direct and indirect partners as well as $100 million or more in assets more than tripled to 10,099—an increase of 257 percent. Over the same years, total assets of these large partnerships (without accounting for double counting) increased 289 percent to almost $7.5 trillion. See figure 5 for data on large partnerships of various asset sizes. Almost two-thirds of large partnerships had 1,000 or more direct and indirect partners in tax year 2011, and hundreds of large partnerships had more than 100,000 partners. Large partnerships with the most direct and indirect partners had the greatest increase from tax years 2002 to 2011.
See figure 6. The number of large partnerships varies considerably from year to year, due in part to investment choices made by other large partnerships. One IRS official said that the number of partnerships with more than a million partners increased from 17 in tax year 2011 to 1,809 in tax year 2012. The official attributed most of the increase to a small number of investment funds that expanded their interests in other partnerships. If those investment funds choose to divest their interests in other partnerships, the number of large partnerships would decrease significantly. Tiering contributes to complexity. In tax year 2011, more than two-thirds of large partnerships had 100 or more pass-through entities and 36 percent had 1,000 or more pass-through entities as direct and indirect partners. These pass-through entities may be direct partners or may exist at various tiers below the direct partners. There is some evidence that large partnership structures are becoming more complex. In tax year 2011, 78 percent of large partnerships had six or more tiers, compared to 66 percent in tax year 2002. Tiering complicates determining the relationships and allocations of income and losses within a large partnership structure. For example, in figure 7, the allocation from the audited partnership on the far left side of the figure passes through eight partnerships along the bolded path before it reaches one of its ultimate owners on the right. This path also may not be the only path from the audited partnership to the ultimate owner. While this example of a partnership structure is complex, it has only 50 partners and 10 tiers. Large partnership structures can be much more complex: in tax year 2011, 17 large partnerships had more than a million partners, and according to an IRS official, several large partnerships have more than 50 tiers.
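The distinction drawn above between direct partners and direct-plus-indirect partners can be sketched with a toy tiered structure. This is a minimal illustration with invented entity names and ownership links, not GAO or IRS data:

```python
# Hypothetical tiered partnership structure: each entity maps to the list
# of its direct partners; partners that are themselves pass-through
# entities appear as keys too. All names are invented for illustration.
structure = {
    "FundA": ["Tier1-LP", "Investor1", "Investor2"],  # FundA's direct partners
    "Tier1-LP": ["Tier2-LP", "Investor3"],
    "Tier2-LP": ["Investor4", "Investor5"],
}

def direct_partners(entity):
    """Only the partners listed on the entity's own return."""
    return set(structure.get(entity, []))

def all_partners(entity, seen=None):
    """Direct plus indirect partners, found by walking down the tiers."""
    if seen is None:
        seen = set()
    for p in structure.get(entity, []):
        if p not in seen:
            seen.add(p)
            all_partners(p, seen)  # recurse into lower-tier pass-throughs
    return seen

print(len(direct_partners("FundA")))  # 3 direct partners
print(len(all_partners("FundA")))     # 7 direct and indirect partners
```

Counting only FundA's immediate partners understates the size of the structure; walking the tiers captures the indirect partners as well, and it is this tier-walking that can count partners, assets, and income more than once when partnerships own interests in each other.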
In tax year 2011, about 73 percent of large partnerships reported being in the finance and insurance sector, up from 64 percent in tax year 2002 (see table 3). About 87 percent of those in the finance and insurance sector in tax year 2011 engaged in financial investment activities, of which about 70 percent reported $1 billion or more in assets. As we previously found, many of the large partnerships in the finance and insurance sector are investment funds, such as hedge funds and private equity funds, which are pools of assets shared by investors. See appendix II for additional data on the number and characteristics of large partnerships. IRS audits few large, complex partnerships. According to IRS data, in fiscal year 2012, IRS closed 84 field audits—or a 0.8 percent audit rate. This audit rate is well below that of C corporations with $100 million or more in assets, which was 27.1 percent in fiscal year 2012. See table 4. This audit rate does not depend on whether large partnerships are defined to include direct and indirect partners or only direct partners. Our interim report, which focused on only direct partners in defining large partnerships, also showed a 0.8 percent audit rate in 2012. It is possible that some large partnership audits in table 4 are audits of different partnerships within the same large partnership structure. For example, if IRS audits one large partnership and then discovers that it needs to audit another large partnership in the same complex structure, those would count as two separate audits. Available IRS data did not allow us to determine how often this occurred. Table 5 shows that most large partnership field audits closed from fiscal years 2007 through 2013 did not find tax noncompliance. In 2013, for example, 64.2 percent of the large partnership audits resulted in no change to the reported income or losses.
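As a rough arithmetic check of the 0.8 percent audit rate cited above, one can compare the 84 field audits closed in fiscal year 2012 against the 10,099 large partnerships reported for tax year 2011 (a simplifying assumption, since the audit count and the population count come from adjacent periods):

```python
# Back-of-envelope check of the audit rate using figures from this report.
field_audits_closed = 84        # large partnership field audits closed, FY 2012
large_partnerships = 10_099     # large partnerships, tax year 2011

audit_rate = field_audits_closed / large_partnerships * 100
print(f"{audit_rate:.1f}%")     # 0.8%
```

The same arithmetic with large C corporations (27.1 percent) shows how far apart the two audit rates are.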
In comparison, IRS audits of C corporations with $100 million or more in assets had much lower no change rates, as also shown in table 5. In addition, IRS audits of all partnerships, not just large partnerships, had a lower no change rate of 47 percent in fiscal year 2013. According to IRS focus group participants, large partnership returns have the potential for a high tax noncompliance risk. However, it is not clear whether the high no change rate for large partnership audits is due to IRS selecting large partnerships that were tax compliant or to an inability of IRS audits to identify noncompliance, as discussed below. When field audits of large partnerships resulted in changes, the aggregate amount was minimal, as shown in table 6. This could be because positive changes on some audits were cancelled out by negative changes on other audits. In 3 of the 7 years shown in table 6, the total adjustments from the field audits were negative; that is, they favored the large partnerships being audited. This did not occur for audits of large corporations. IRS data show that its large partnership audits used fewer resources than large corporate audits but still required significant audit staff time, as shown in table 7. These field audit time measures for large partnerships do not cover all audit costs, such as the time spent passing through audit adjustments at the campus. For example, if the campus passes through audit adjustments to 100 partners of one large partnership (which means opening 100 campus audits), the total cost of the related large partnership audit may be significantly larger than the field audit time accounted for in table 7. However, the campus does not track the total hours spent working all the partners' returns related to a partnership return audited in the field due to limitations associated with IRS information systems, which are discussed below.
Campus officials noted that working returns related to large partnerships requires significant time and resources given their growing complexity and size. IRS data that report audit results for partnerships do not break out large partnerships, which would help inform audit resource allocation decisions. Without such a breakout, our analysis of audit data for large partnerships relied on combining various databases. According to standards for internal control in the federal government, managers need accurate and complete information to help ensure efficient and effective use of resources. IRS officials acknowledged that they need a better understanding of large partnership audits to improve resource allocation. The problem arises because IRS's audit data codes—known as activity codes—are not specific enough to identify large partnerships. IRS uses its activity codes to set goals for the number of returns IRS plans to audit in a fiscal year and to track audit results. Because they do not identify large partnership returns, the current activity codes do not allow IRS to do the kind of analysis needed to plan resource usage, including the level of audit and support staff needed, for large partnership audits. IRS has developed new activity codes that would distinguish partnership returns based on asset size and the type of income reported, but IRS is not scheduled to begin reporting the new activity codes until fiscal year 2017 due to resource limitations. IRS has not decided which activity codes would be used to define large partnership returns. Further, the new activity codes do not account for the number of partners, which IRS has identified as a major driver of resource usage for large partnership audits—particularly for the work done at the campus. If these IRS activity codes do not account for such a driver of resource usage, IRS will not be able to make effective resource allocation decisions.
Revising the activity codes requires defining a large partnership for audit purposes, which IRS has not done. The exact definition matters less than ensuring that one definition is consistently applied so that there is agreement on the scope of large partnership audit efforts and results can be assessed. IRS also does not distinguish between field audits and campus audits in counting the number of large partnership audits. When calculating its audit rate for all partnerships, IRS accounts for both field audits and campus audits, which misrepresents the number of audits that actually verify information reported on tax returns. Unless IRS separately accounts for field and campus audits, it cannot accurately measure audit results. IRS officials said that they do not have sufficient data on the results from field audits of large partnerships to know what is driving the high no change rate and minimal tax changes. Our focus groups with IRS field auditors and interviews with IRS officials, however, provided insights on the challenges to finding noncompliance in field audits. The complexities arising from large partnership structures challenge IRS's ability to identify tax noncompliance. For example, IRS officials reported having difficulty identifying the business purpose for the large partnerships or knowing which entity in a tiered structure is generating the income or losses. In these cases, IRS auditors said they do not know with which partner or tier of the partnership structure to start the audit. Without finding the source of income and losses, it is difficult for IRS to determine whether a tax shelter exists, an abusive tax transaction is being used, or if income and losses are being properly characterized. Focus group participants said that complex structures could mask tax noncompliance.
For example, one participant said: "I think noncompliance of large partnerships is high because a lot of what we have seen in terms of complexity and tiers of partnership structures… I don't see what the driver is to create large partnership structures other than for tax purposes to make it difficult to identify income sources and tax shelters." IRS officials stated that determining compliance is especially challenging when auditing the returns of hedge funds that often have an interest in many other partnerships and in investments—such as financial derivatives—that are complex and constantly changing and can involve noncompliance, as we have previously found. Focus group participants noted that the complexity requires them to invest extensive time to research and understand the structure of large partnerships and technical tax issues. For example, one focus group participant noted the difficulty of auditing the returns of large, complex partnerships: "… income is pushed down so many tiers, you are never able to find out where the real problems or duplication of deductions exist. The reporting of income, expenses could be duplicated but there is no way to figure it out unless you drill down and audit all tiers, all tax returns." As the number and complexity of large partnerships have increased, aspects of the TEFRA audit procedures have become an impediment, according to focus groups and interviews with IRS officials.

TEFRA Reduces Time IRS Can Actually Spend on Audits

IRS focus group participants stated that the interaction of TEFRA procedures with increasingly complex partnership structures has reduced the amount of time available to effectively audit a return within the statute of limitations. A 3-year statute of limitations governs the time in which IRS must complete its audits of partnerships, which begins on the due date of the return or the date of return filing, whichever is later.
IRS on average takes about 18 months after a large partnership return is received to start the audit, leaving about 18 months to actually conduct it, as illustrated in figure 8. After the field audit, TEFRA generally requires that any audit adjustments be passed through to partners within a 1-year assessment period. Focus group participants said that they sometimes run out of time on the statute of limitations and have to close the audit as no change. IRS has no data to support this claim, as it does not break out audit results for large partnerships, as discussed earlier.

Identifying the Tax Matters Partner (TMP) Can Take Months

TEFRA defines the responsibilities of the TMP, such as providing information to IRS and communicating with partners, which generally help to facilitate the audit. Without a TMP, IRS is not able to conduct the audit. Further, if the audit proceeds without a qualified TMP, the partners may challenge any settlement agreed to between IRS and the unqualified TMP at the conclusion of the audit, so IRS takes steps to ensure that any TMP is a qualified TMP. IRS focus group participants said that identifying a qualified TMP is a primary challenge in large partnership audits. The burden of doing so falls largely on IRS, taking time and effort away from doing the actual audit work. For example, TEFRA does not require partnerships to designate a TMP on their returns. In addition, TEFRA allows the TMP to be an entity, not a person. In either case, IRS auditors spend time requesting that the partnership designate a TMP or tracking down an actual person to act as a representative for the TMP—unless the partnership chose to list that person on the Form 1065. IRS focus group participants cited various reasons for not being able to immediately identify the TMP. Some said that large partnerships are purposely unclear about the TMP as an audit-delay strategy.
As one participant said: "Entities will often be elusive about designating the Tax Matters Partner. The entities will use this tactic as a first line of defense against an audit." If a large partnership does not designate a TMP on the partnership return, IRS will provide the partnership the opportunity to do so. If the partnership does not, TEFRA requires that the partner with the largest profit interest automatically becomes the TMP. However, if IRS determines that it is impracticable to apply this rule, then IRS may designate the TMP. IRS officials said that they are hesitant to do so before giving the partnership an opportunity to designate a TMP because the IRS designation may be opposed by the partners and IRS needs to collect information to find a partner that meets the criteria. As a consequence, exercising this authority can still mean that the start of the audit is delayed. However, if IRS were to move directly to the largest profit interest rule or chose to designate the TMP using its existing authority instead of reaching out to the partnership, IRS could save valuable time during the average 18-month window it has to complete the audit. IRS does not track data on the time spent identifying the TMP. Focus group participants said that identifying and qualifying the designated TMP could take weeks or months. Losing a few months from the 18 months to audit a large partnership could be significant to IRS field auditors. IRS officials said IRS has issued new job aids and training on identifying the TMP, clarified TMP requirements, and added a section about TEFRA in the December 2013 revision of IRS Publication 541, Partnerships. However, these steps do not solve the fundamental problem of limited audit time being lost while identifying a TMP. A legislative change to TEFRA requiring all large partnerships to designate a TMP on their tax returns and to provide updated TMP information to IRS once an audit starts would solve the problem.
Without such a change, IRS's field audits of large partnerships are inefficient, which hinders its ability to fulfill its mission of ensuring tax law compliance. The costs of such a legislative change should be low. There would be no increased costs to IRS—the change would reduce IRS's costs. Partnerships already have partners responsible for filing tax returns, so designating a TMP should not be onerous.

IRS Has Difficulty Utilizing the 45-Day Rule Due to Complexity of Large Partnerships

Another TEFRA audit procedure that is meant to benefit IRS but is difficult to use is the TEFRA 45-day rule. IRS's TEFRA regulations allow IRS to withdraw its notification to the TMP about the start of the audit within 45 days of notifying the TMP. If IRS does so within 45 days, IRS can close the audit as no change without having to notify the partners. IRS focus group participants said that they often do not have sufficient information to determine whether to close an audit within 45 days. In addition, time spent identifying the TMP reduces the time available to make this determination. As a result, of the 61 field audits of large partnerships closed in fiscal year 2013 as no change, none were closed within the 45-day period. IRS has the authority under TEFRA to amend its regulation to lengthen this period for withdrawing an audit notice beyond 45 days without having to notify all the partners of the withdrawal. Extending the notice withdrawal period would save IRS audit resources and allow the resources to be more effectively used in ensuring tax law compliance.

Passing Through Audit Adjustments to Numerous Partners May Not Be Worth the Effort

TEFRA generally requires that audit adjustments be passed through to the taxable partners. Although IRS does not track the costs for the campus to pass field audit adjustments through to the partners, campus officials said the costs are high for a number of reasons.
The process of linking partnership returns (Forms 1065), Schedule K-1s, and partners' returns (Forms 1040) is largely manual and paper driven. According to IRS officials, the campus information systems do not have the capability to automate the process. Paper copies of all these returns must be retrieved and linked in a very labor-intensive process. The portion of the partnership audit adjustment that gets passed through to each partner must be manually determined by using the ownership share reported on the relevant Schedule K-1. (Regarding the 45-day rule discussed above: the regulation specifies that the 45-day period starts when IRS notifies the TMP of the partnership audit, which likely occurs after the audit start date. However, IRS does not track this notification date. Thus, we counted how many audits closed within 45 days of the audit start date as a proxy.) The Schedule K-1 information may not always be accurate, as we have previously found, requiring IRS to contact the partnership and review the partnership agreement to clarify ownership shares among the partners. A copy of the partnership agreement must usually be requested from the partnership being audited and potentially from each partnership within the linked partnership structure. Partnership agreements may include special allocations for some income items that supersede the ownership interest reported on the Schedule K-1. Finding special allocations requires detailed reviews of the partnership agreements of the partnerships within the partnership structure. According to IRS officials, this step cannot be automated. IRS officials also said that partnerships could provide special allocation schedules to IRS, which would eliminate the need to review the entire agreement. As a consequence, the process for passing audit adjustments through to partners is costly and very time consuming. This limits the number of large partnerships that IRS can audit.
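The allocation step described above can be sketched as follows: a field-audit adjustment is divided among partners by Schedule K-1 ownership share, and a special allocation from the partnership agreement supersedes the default share for one partner. All names, shares, and amounts are invented, and renormalizing the shares after the override is a simplifying assumption rather than actual tax procedure:

```python
# Hypothetical pass-through of an audit adjustment to partners.
adjustment = 100_000.0  # audit adjustment to partnership income (invented)

# Default ownership shares as reported on each partner's Schedule K-1.
k1_shares = {"PartnerA": 0.50, "PartnerB": 0.30, "PartnerC": 0.20}

# A special allocation in the partnership agreement supersedes the K-1
# share for a particular item (hypothetical value).
special_allocations = {"PartnerC": 0.40}

def allocate(adjustment, shares, special):
    """Apply special allocations over K-1 shares, then split the adjustment.

    Renormalizing so the shares sum to 1 is a simplifying assumption.
    """
    alloc = dict(shares)
    alloc.update(special)  # special allocations win over default shares
    total = sum(alloc.values())
    return {p: adjustment * s / total for p, s in alloc.items()}

for partner, amount in allocate(adjustment, k1_shares, special_allocations).items():
    print(partner, round(amount, 2))
```

Even in this three-partner toy case the special allocation changes every partner's amount; with hundreds or thousands of partners, and agreements to be read at every tier, the manual version of this step becomes costly quickly.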
Furthermore, since IRS generally has one year after the 3-year statute of limitations ends to pass through adjustments, the campus has to start linking returns before it knows whether there will be an audit adjustment or whether an adjustment will be large enough to merit passing through. In a large partnership, dividing the adjustment among hundreds or thousands of partners may result in amounts that are so small that IRS deems them not worth the cost to pass through. IRS officials said that they do not track how often audit adjustments are not passed through and how much unpaid tax is not collected as a result. IRS campus officials estimated that they close 50,000 to 60,000 returns for all partnerships each year and further estimated that 65 to 70 percent of these are closed without passing through any adjustment. (IRC § 704(b) provides partnerships the option to use special allocations. They are generally listed in the partnership agreement, which specifies such things as the partnership's name and purpose, partner contributions, and management responsibilities. The agreement is to specify ratios for passing through partnership income, losses, deductions, and credits to the partners.) IRS has requested funding in its fiscal year 2014 and 2015 budget proposals for an updated information system that would allow it to automate the linking process and collect more robust data, but funding has not been approved.

Large Partnerships May Pay Tax on Audit Adjustments Rather Than Pass Them Through to the Partners but Such Payments Are Not Widely Used

Current law allows large partnerships to pay a tax owed as determined by audit adjustments at the entity level rather than passing the adjustments through to partners, which would avoid all the costs of campus audits. This is allowed under both the Electing Large Partnership (ELP) audit procedures and under IRS procedures for closing audits with what is called a closing agreement.
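The tradeoff behind entity-level payment can be illustrated with toy numbers: a single rate applied to the net adjustment at the partnership level generally yields a different amount than passing the adjustment through to partners taxed at different rates (for example, a tax-exempt partner). All rates and shares below are invented assumptions, not figures from this report:

```python
# Hypothetical comparison of entity-level tax versus pass-through tax
# on the same net audit adjustment.
adjustment = 1_000_000.0   # net audit adjustment (invented)
entity_rate = 0.396        # assumed single entity-level rate

partners = {               # (share of adjustment, assumed marginal rate)
    "Individual":    (0.50, 0.396),
    "Pension fund":  (0.30, 0.0),   # tax-exempt partner pays nothing
    "C corporation": (0.20, 0.35),
}

entity_tax = adjustment * entity_rate
pass_through_tax = sum(adjustment * share * rate
                       for share, rate in partners.values())

print(round(entity_tax))        # 396000
print(round(pass_through_tax))  # 268000
```

Here the entity-level approach collects more tax than the pass-through approach would, purely because the partners' rates differ; with a different partner mix the comparison could reverse. This is the kind of treatment decision the proposals discussed below would have to resolve.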
However, few large partnerships elect to become an ELP. Further, partnerships must voluntarily agree to use a closing agreement to pay a tax at the entity level, and few do so. The Chairman of the House of Representatives Committee on Ways and Means and the Administration have also put forth proposals to address some of the challenges of auditing large partnerships. While the proposals differ somewhat and apply to partnerships with different numbers of partners, both would allow IRS to collect tax at the partnership level instead of having to pass it through to the taxable partners. For example, the Administration developed a legislative proposal to make the ELP audit procedures mandatory for partnerships with 1,000 or more direct and indirect partners, known as the Required Large Partnership proposal. IRS officials said that these legislative proposals, if passed, would significantly help address challenges involved with passing through audit adjustments. These proposals would involve tradeoffs and decisions about how to treat partners. For example, because the partners may be taxed at different tax rates, the single tax rate applied to the net audit adjustment at the partnership level may be different than the rate partners would pay if the adjustment were passed through.

IRS Field Auditors Face Barriers Accessing Timely Support and Training to Address Complexities

IRS focus group participants stated that they do not have the needed level of timely support from IRS counsel, TEFRA coordinators, and specialists. Focus group participants said that the support is critical because they had limited knowledge of the technical tax issues for partnerships and they may only work on a partnership audit once every few years. Further, focus group participants stated that it can take weeks or months to get needed input and that planning audits is difficult because they do not know how long it will take to get the needed stakeholder input.
Unexpected delays reduce the average 18-month window of time for audit work. Focus group participants said that some IRS locations have only one TEFRA coordinator to answer questions about partnership audits. For example, IRS has one TEFRA coordinator to support all audits in New York State. IRS officials said that the number of TEFRA coordinators declined from 28 in fiscal year 2006 to 20 in May 2014 while workload increased. One IRS official said that TEFRA coordinators have to respond to requests for assistance and review and process TEFRA audits for partnerships of all sizes, not only large partnerships. IRS officials said that they plan to hire two coordinators in fiscal year 2014 and five in fiscal year 2015. IRS counsel officials told us that they believe that the number of delayed responses is fairly low, in contrast to what focus group participants said, and that IRS counsel strives to process all requests for legal advice within 45 days of receipt. However, IRS counsel officials said that they do not track the number of requests related to large partnership audits or the time taken to respond; if known, response times would help IRS auditors plan their audit work. Such tracking would be in line with IRS's strategic plan, which has a strategic objective on utilizing data to make timely, informed decisions. In addition to help from stakeholders, IRS focus group participants told us that they have trouble accessing refresher courses on TEFRA and partnership tax issues. IRS officials told us that IRS field auditors may not understand that such training is available and can be taken with the permission of their supervisors. They also said that such training is not usually mandatory and some auditors may choose not to take it. Ensuring IRS auditors have access to training would be in line with IRS's current strategic plan, which highlights the importance of building a talented workforce.
IRS officials stated that they are developing and have implemented new tools and training to assist auditors with TEFRA rules and procedures. However, IRS's Large Business and International division (the division responsible for auditing large partnership returns) experienced a 92 percent reduction in available training funds from 2009 to 2013. As a result, IRS is cutting back on face-to-face training and relying on new virtual training courses on TEFRA and partnership audits in general. IRS has limited ability to directly address some of the challenges it faces auditing large partnerships. For example, IRS cannot make tiered partnerships less complex nor change the TEFRA audit procedures that are set in statute. Nevertheless, IRS has initiated three projects to improve its large partnership audit procedures. IRS has also begun an effort to better manage enterprise risk. Table 8 describes the three audit procedure projects and their potential for addressing certain challenges, based on our interviews with IRS officials and reviews of the limited documentation available. The two field audit-related projects (the Large Partnership CMT and Large Partnership Procedures) started in 2013. IRS officials said that it could be a few years before enough audits are completed to know whether the two efforts worked as intended. The project to improve the campus linkage process (Just-In-Time Linkage pilot) started in August 2014, but it is still under development and IRS provided limited documentation detailing the development of this effort. IRS has not developed the two field audit-related projects consistent with project planning principles, as shown in table 9. These two projects did not meet three of the five principles. For the Large Partnership CMT, while IRS officials said that they plan to collect data on five metrics, it is unclear how the metrics will be used to monitor progress because they are not specific to the CMT.
Without following the principles in table 9, IRS may not be able to assess whether the projects succeeded in making large partnership audits more efficient and effective. IRS began an agencywide enterprise risk management (ERM) program in 2014. According to IRS documentation, IRS needed to evaluate how risks are identified, prioritized, evaluated, and mitigated across the agency. ERM objectives include deploying resources effectively, identifying and managing cross-enterprise risks, identifying the level of risk IRS is willing to absorb, aligning that risk level with strategies, and reducing operational surprises, among others. Large partnership audits and the challenges associated with them raise a number of the issues listed as ERM objectives. Because of the lack of information IRS currently collects and tracks about large partnership audits, it does not have a very clear picture of how effectively its audit resources are being utilized. Nevertheless, as large partnerships grow in number, IRS will have to make decisions about whether to reallocate audit resources away from other compliance work to conduct more large partnership audits. IRS has not yet determined how large partnerships will be incorporated into its ERM effort. Without such a determination and without documentation of the risks considered, IRS managers and external stakeholders, including Congress, may lack a record of how compliance risks associated with large partnerships were identified and prioritized. We have previously reported on the benefits of risk management and identified elements of a risk-management framework. Risk management is a strategy for helping program managers and stakeholders make decisions about assessing risk, allocating resources, and taking actions under conditions of uncertainty. We recognize that IRS's information systems currently provide little data on large partnerships, as noted in other sections of this report.
Even though IRS is not explicitly planning for how many large partnerships to audit each year, it is devoting resources to such audits. IRS is implicitly making decisions about how to allocate audit resources between large partnerships and other types of taxpayers. A determination about how large partnership compliance risks are to be identified and weighed against the compliance risks of other types of taxpayers would better inform those decisions. Large partnerships are a significant part of the economy and are increasing in number, size, and complexity. However, the relatively low rate at which IRS audits large partnerships and the minimal results achieved raise concerns about IRS's ability to ensure the tax compliance of large partnerships. IRS has little data available to know why its audits are finding so little tax noncompliance. That is because IRS has not consistently defined "large partnership" (accounting for both the number of partners and the amount of assets) and devised a related coding system to track audit results. Without such data, IRS cannot conduct analysis to identify ways to better plan and use IRS resources in auditing large partnerships or analyze whether large partnerships present a high noncompliance risk. Even in the absence of such data, testimonial evidence from IRS auditors indicates that they feel challenged to audit complicated partnerships for compliance without sufficient time and support. Existing audit procedures set in law and in IRS regulations add to the time pressures and constrain IRS auditors. Because so little is currently known about large partnership noncompliance, it would be premature to try to design overall, long-term solutions. An incremental approach could be based on what is currently known, including legislative changes that Congress should consider as well as actions that IRS should take.
Legislative changes could help IRS auditors deal with the time constraints and reduce the resource demands of large partnership audits. Requiring large partnerships to designate a qualified TMP that the field auditors can contact is a relatively simple step that could reduce audit delays. Requiring large partnerships to pay any tax due at the entity level would also save resources but would not be a simple change. Such a change would have differing impacts on partners who may be in different tax brackets. The change would save the resources that are now devoted to the paper-driven, labor-intensive process of passing adjustments through to large numbers of partners. Paying taxes due on audit adjustments at the entity level has advantages over other options for audit efficiency gains, such as automating the current paper-driven process. Even if IRS were given resources to modernize its campus information systems, parts of the adjustment pass-through process would continue to require labor-intensive reviews of partnership agreements. IRS can take several actions that would begin to provide better information about large partnership compliance and audit results, or that could lower audit costs. Actions that would be a first step toward better information for analyzing large partnership compliance include developing a consistent definition of a large partnership along with activity codes that could be used to track audit results. To lower audit costs and avoid wasting audit staff time, IRS could change its practice governing when to use its authority to select a TMP, and the regulation which established the number of days an audit can be open without triggering more costly closing procedures. IRS could also better ensure that limited calendar time is not wasted in an audit.
It could do so by tracking delays in providing expert support, clarifying when auditors can expect such support, and then using this information about the support that can realistically be provided to better plan the number and scope of new audits, so that the time allowed under the statute of limitations is more effectively used. While IRS has initiated three projects to improve its audit procedures, it has not followed project planning principles, including taking the steps needed to effectively track the results for the two projects that have been implemented. Without doing so, IRS will not know whether the projects succeeded in improving audit procedures. As large partnerships continue to grow in number and complexity, IRS will have to make strategic decisions about whether to reallocate scarce audit resources from other categories of taxpayers (perhaps from C corporations, which are declining in number) to conduct more large partnership audits. IRS's new Enterprise Risk Management program provides a venue for weighing the compliance risks associated with large partnerships against those of other types of taxpayers.

Congress should consider altering the TEFRA audit procedures to:

- Require partnerships that have more than a certain number of direct and indirect partners to pay any tax owed as determined by audit adjustments at the partnership level.
- Require partnerships to designate a qualified TMP and, if that TMP is an entity, to also identify a representative who is an individual, and for partnerships to keep the designation up to date.

We recommend that the Commissioner of Internal Revenue take the following eight actions:

- Track the results of large partnership audits: (a) define a large partnership based on asset size and number of partners; (b) revise the activity codes to align with the large partnership definition; and (c) separately account for field audits and campus audits.
- Analyze the audit results by these activity codes and types of audits to identify opportunities to better plan and use IRS resources in auditing large partnerships.
- Use existing authority to promptly designate the TMP under the largest profits interest rule or some other criterion.
- Extend the 45-day rule to give field audit teams more flexibility on when to withdraw an audit notice.
- Help field auditors for large partnership audits receive the support they request from counsel staff, TEFRA coordinators, and IRS specialists: (a) track the number of requests and time taken to respond; (b) clarify when responses to their requests should be expected; and (c) use the tracked and clarified information when planning the number and scope of large partnership audits.
- Clarify how and when field auditors can access refresher training on TEFRA audit procedures and partnership tax law.
- Develop and implement large partnership efforts in line with the five leading principles for project planning and track the results to identify whether the efforts worked as intended.
- Make and document a determination about how large partnerships are to be incorporated into the Enterprise Risk Management process.

We provided a draft of this report for review and comment to the Commissioner of Internal Revenue. We received written comments dated September 8, 2014, from IRS's Deputy Commissioner for Services and Enforcement (for the full text of the comments, see appendix IV). We also received technical comments from IRS, which we incorporated into the final report where appropriate. In its written comments, IRS agreed with our recommendations but said two of our recommendations, related to revising IRS's activity codes to enable tracking large partnership audits and then analyzing audit results, are dependent upon future funding. We acknowledge in the report the resource constraints IRS currently faces.
However, continuing to audit large partnerships with limited ability to track and analyze audit results will not help IRS make sound resource allocation decisions or improve audit effectiveness. We are sending copies of this report to the Secretary of the Treasury, the Commissioner of IRS, and interested congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this testimony, please contact me at (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. The names of GAO staff who made key contributions to this report are listed in appendix V. The objectives of this report are to (1) determine what IRS knows about the number and characteristics of large partnerships; (2) determine what IRS knows about the costs and results of audits of large partnership returns and assess IRS’s ability to effectively conduct such audits; and (3) identify and assess IRS’s efforts to address the challenges of auditing large partnership returns. To determine the number and characteristics of large partnerships, we obtained data on tax returns filed by large partnerships from the Enhanced Large Partnership Indicator (ELPI) file on partnerships for tax years 2002 to 2011, and on the number of partnerships with 100 or more direct and indirect partners and $100 million or more in assets. We merged these data with data obtained from the Business Returns Transaction File (BRTF). We analyzed and reported ELPI and BRTF data by the total number of partnerships by asset size, direct partner size, indirect partner size, industry group, and tiering depth. The ELPI data file captures information about the ownership structure of large partnerships. The ELPI file starts with a partnership and traces Schedule K-1 allocations through to the ultimate taxpayer. 
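The K-1 tracing that the ELPI file performs can be pictured as a recursive walk over ownership data. The sketch below is an illustrative reconstruction, not ELPI itself; in particular, the stopping rule (cut a branch at 11 tiers of depth or at a cumulative share of 0.00001 percent or less) is one reading of the cutoffs described in the methodology, and all names and data are invented.

```python
def trace_partners(entity, ownership, share=1.0, depth=1):
    """Yield (ultimate_taxpayer, indirect_share) pairs for a partnership.

    ownership: dict mapping a partnership to a list of (owner, share)
    pairs from its Schedule K-1s; owners absent from the dict are treated
    as ultimate taxpayers. Branches are cut at 11 tiers of depth or at a
    cumulative share of 0.00001 percent (1e-7 as a fraction) or less.
    """
    if entity not in ownership:
        yield entity, share
        return
    for owner, frac in ownership[entity]:
        s = share * frac
        if depth >= 11 or s <= 1e-7:
            continue  # stop tracing this branch
        yield from trace_partners(owner, ownership, s, depth + 1)

# A two-tier structure: P1 is owned by partnership P2 (40%) and taxpayer
# T1 (60%); P2 is owned by taxpayers T2 and T3 (50% each).
ownership = {"P1": [("P2", 0.4), ("T1", 0.6)],
             "P2": [("T2", 0.5), ("T3", 0.5)]}
partners = dict(trace_partners("P1", ownership))
```

The example yields T1, T2, and T3 as the direct and indirect partners of P1, with T2 and T3 each holding an indirect 20 percent share through P2.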
ELPI traces the ownership structure of a partnership as long as either the depth is less than 11 tiers or the ownership percentage is greater than 0.00001 percent, and therefore can present a picture of the approximate number of direct and indirect partners in a partnership structure. The ELPI currently only has data on partnerships that file a Form 1065 and not those that file a Form 1065-B. For similar analysis on partnerships that file a Form 1065-B, see our prior work on large partnerships. To determine the number of IRS audits of large partnership returns and the characteristics of those audits, we obtained data from the Audit Information Management System (AIMS) and reported those partnership returns subject to IRS audit that were closed during fiscal years 2007 to 2013. Once we identified the audited population of large partnership returns, we only reported those audits that were traditional IRS field audits (in which IRS audited the books and records of a large partnership return), and not campus audits (in which IRS usually passed audit adjustments through to the related partners' returns), as campus audits are mainly an administrative function and do not include an examination of the books and records of the taxpayer return in question. We analyzed the results from these data consistent with how IRS measures audit results, such as the audit coverage rate (partnership returns subject to audit as a percentage of the total partnership return population) and the no change rate (those audits that resulted in no change to the tax return from the audit), and, where possible without suppressing data due to disclosure requirements, by asset size. We also analyzed the hours and days spent on large partnership audits to assess the costs of these audits. Where data were available, we compared these measures for large partnership return audits to those for corporate return audits of the same asset size.

To assess the effectiveness of IRS audits of large partnership returns, we interviewed officials in the Office of Chief Counsel, Large Business and International division, Small Business and Self-Employed division, and Research, Analysis, and Statistics division. We also interviewed a number of external private sector lawyers who are knowledgeable about partnership tax law; reviewed academic research and literature; reviewed IRS documentation, such as IRS policies and procedures on partnership audits; and reviewed our recent reports on partnerships. We also completed six focus groups with 30 IRS team coordinators and managers on the challenges associated with completing audits of large partnership returns. These team coordinators and managers were selected for our focus groups because they supervised or worked on a large partnership return audit, based on our definition of large partnerships above, that was closed in calendar year 2013. Where available, we supplemented the challenges identified in these focus groups with supporting data and documentation to provide context or support for the challenges identified. We performed a content analysis on the six focus groups, using NVivo software, to analyze and categorize the themes of the focus groups. The results of the focus group data are not generalizable to all IRS audits and do not necessarily represent the official viewpoint of IRS. Instead, the results are used to identify themes in conjunction with the other data we collected. We compared information available to the intent of the partnership audit procedures outlined in statute. In addition, we compared information available on audits of large partnership returns to Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999).
To identify and assess IRS efforts to address the challenges of audits of large partnership returns, we reviewed IRS documentation and interviewed IRS officials to identify any ongoing efforts and initiatives related to large partnerships. We assessed IRS's efforts and initiatives related to large partnerships using project planning criteria that our prior work identified as leading practices. We identified these criteria by conducting a literature review of a number of guides on project management and business process reengineering. We determined that the data used in our analysis were sufficiently reliable for the purposes of this review, and all dollar values have been adjusted for inflation to tax year or fiscal year 2014. Our data reliability assessment included reviewing relevant documentation, conducting interviews with knowledgeable IRS officials, and conducting electronic testing of the data to identify obvious errors or outliers. All Statistics of Income estimates in this report have 95 percent confidence intervals that are within +/- 10 percent of the point estimate, unless otherwise specified. Based on IRS documents and interviews with IRS officials, data in the ELPI file may be incomplete since this file is based on Schedule K-1 data. For example, some Schedule K-1s may be missing from the database because partnerships did not file them, because of IRS errors, or because of timing problems. In general, the depth of tiering in ELPI for a partnership structure represents a minimum, and entity counts are approximate, because the missing data would add more entities that qualify under our definition of large partnerships. We conducted this performance audit from October 2013 to September 2014 in accordance with generally accepted government auditing standards.
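The reliability statement above (95 percent confidence intervals within +/- 10 percent of the point estimate) amounts to a simple check on each estimate's relative margin of error. A minimal sketch, with hypothetical numbers rather than the report's actual data:

```python
def relative_margin(estimate, standard_error, z=1.96):
    """Half-width of the 95 percent confidence interval, expressed as a
    fraction of the point estimate (0.08 means within +/- 8 percent)."""
    return z * standard_error / abs(estimate)

# Hypothetical: an estimate of 10,000 returns with a standard error of 400.
rel = relative_margin(10_000, 400)
meets_threshold = rel <= 0.10  # within the +/- 10 percent criterion
```

Here the interval half-width is about 7.8 percent of the estimate, so the hypothetical estimate would meet the criterion.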
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Legend: d = Value not shown to avoid disclosure of information about specific taxpayers. Calendar year 2012 partnership filings were not available at the time we completed our analysis to determine the audit coverage rate for fiscal year 2013. For the audit coverage rate calculation for fiscal year 2012, we combine correspondence audits (audits completed by mail) and field audits in the calculations for the highest two asset brackets as the IRS Data Book does not report the number of field audits separately to avoid disclosure of information about specific taxpayers.

In addition to the contact listed above, Tom Short, Assistant Director; Vida Awumey; Sara Daleski; Deirdre Duffy; Robert Robinson; Cynthia Saunders; Erik Shive; Albert Sim; A.J. Stephens; and Jason Vassilicos made key contributions to this report.
More businesses are organizing as partnerships while fewer are C corporations. Unlike C corporations, partnerships do not pay income taxes but pass on income and losses to their partners. Large partnerships (those GAO defined as having $100 million or more in assets and 100 or more direct and indirect partners) are growing in number and have complex structures. Some partnerships create tiers of partnerships with hundreds of thousands of partners. Tiered large partnerships are challenging for IRS to audit because tracing income through the tiers to the ultimate partners is complex. GAO was asked to assess IRS's ability to audit large partnerships. GAO's objectives include: (1) determine what IRS knows about the number and characteristics of large partnerships, (2) assess IRS's ability to audit them, and (3) assess IRS's efforts to address the audit challenges. GAO analyzed IRS data from 2002 to 2011 and IRS audit documentation, interviewed IRS officials, met with IRS auditors in six focus groups, and interviewed private sector tax lawyers knowledgeable about partnerships. The number of large partnerships more than tripled to 10,099 from tax year 2002 to 2011. Almost two-thirds of large partnerships had more than 1,000 direct and indirect partners, had six or more tiers, and/or self-reported being in the finance and insurance sector, with many being investment funds. The Internal Revenue Service (IRS) audits few large partnerships. Most audits resulted in no change to the partnership's return and the aggregate change was small. Although internal control standards call for information about effective resource use, IRS has not defined what constitutes a large partnership and does not have codes to track these audits.
According to IRS auditors, the audit results may be due to challenges such as finding the sources of income within multiple tiers while meeting the administrative tasks required by the Tax Equity and Fiscal Responsibility Act of 1982 (TEFRA) within specified time frames. For example, IRS auditors said that it can sometimes take months to identify the partner that represents the partnership in the audit, reducing the time available to conduct the audit. TEFRA does not require large partnerships to identify this partner on tax returns. Also under TEFRA, unless the partnership elects to be taxed at the entity level (which few do), IRS must pass audit adjustments through to the ultimate partners. IRS officials stated that the process of determining each partner's share of the adjustment is paper- and labor-intensive. When hundreds of partners' returns have to be adjusted, the costs involved limit the number of audits IRS can conduct. Adjusting the partnership return instead of the partners' returns would reduce these costs but, without legislative action, IRS's ability to do so is limited.

Note: A 3-year statute of limitations governs the time IRS has to conduct partnership audits, which is about equally split between the time from when a return is received until the audit begins and the time to do the audit. IRS then has a year to assess the partners their portion of the audit adjustment.

IRS has initiated three projects—one of which is under development—to make large partnership audit procedures more efficient, such as identifying higher risk returns to audit. However, the two projects implemented were not developed in line with project planning principles. For example, they do not have clear and measurable goals or a method for determining results. As a consequence, IRS may not be able to tell whether the projects succeed in increasing audit efficiency.
Congress should consider requiring large partnerships to identify a partner to represent them during audits and to pay taxes on audit adjustments at the partnership level. IRS should take multiple actions, including: define large partnerships, track audit results using revised audit codes, and implement project planning principles for the audit procedure projects. IRS agreed with all the recommendations, but noted that revision of the audit codes is dependent upon future funding.
Dramatic increases in computer interconnectivity, especially in the use of the Internet, continue to revolutionize the way our government, our nation, and much of the world communicate and conduct business. However, this widespread interconnectivity also poses significant risks to our computer systems and, more important, to the critical operations and infrastructures they support, such as telecommunications, power distribution, public health, national defense (including the military’s warfighting capability), law enforcement, government, and emergency services. Likewise, the speed and accessibility that create the enormous benefits of the computer age, if not properly controlled, allow individuals and organizations to inexpensively eavesdrop on or interfere with these operations from remote locations for mischievous or malicious purposes, including fraud or sabotage. As greater amounts of money are transferred through computer systems, as more sensitive economic and commercial information is exchanged electronically, and as the nation’s defense and intelligence communities increasingly rely on commercially available information technology, the likelihood increases that information attacks will threaten vital national interests. Further, the events of September 11, 2001, underscored the need to protect America’s cyberspace against potentially disastrous cyber attacks—attacks that could also be coordinated to coincide with physical terrorist attacks to maximize the impact of both. Since September 1996, we have reported that poor information security is a widespread federal problem with potentially devastating consequences. 
Although agencies have taken steps to redesign and strengthen their information system security programs, our analyses of information security at major federal agencies have shown that federal systems were not being adequately protected from computer-based threats, even though these systems process, store, and transmit enormous amounts of sensitive data and are indispensable to many federal agency operations. In addition, in both 1998 and 2000, we analyzed audit results for 24 of the largest federal agencies and found that all 24 had significant information security weaknesses. As a result of these analyses, we have identified information security as a governmentwide high-risk issue in reports to the Congress since 1997—most recently in January 2001. These weaknesses continue as indicated by our most recent analyses for these 24 large federal agencies that considered the results of inspector general (IG) and GAO audit reports published from July 2000 through September 2001, including the results of the IGs’ independent evaluations of these agencies’ information security programs performed as required by GISRA. These analyses showed significant information security weaknesses in all major areas of the agencies’ general controls, that is, the policies, procedures, and technical controls that apply to all or a large segment of an entity’s information systems and help ensure their proper operation. 
Figure 1 illustrates the distribution of weaknesses across the 24 agencies for the following six general control areas: (1) security program management, which provides the framework for ensuring that risks are understood and that effective controls are selected and properly implemented; (2) access controls, which ensure that only authorized individuals can read, alter, or delete data; (3) software development and change controls, which ensure that only authorized software programs are implemented; (4) segregation of duties, which reduces the risk that one individual can independently perform inappropriate actions without detection; (5) operating systems controls, which protect sensitive programs that support multiple applications from tampering and misuse; and (6) service continuity, which ensures that computer-dependent operations experience no significant disruptions. Our analyses showed that weaknesses were most often identified for security program management and access controls. For security program management, we found weaknesses for all 24 agencies in 2001 as compared to 21 agencies (88 percent) in a similar analysis in 2000. For access controls, we also found weaknesses for all 24 agencies in 2001—the same condition we found in 2000. Concerned with accounts of attacks on commercial systems via the Internet and reports of significant weaknesses in federal computer systems that make them vulnerable to attack, on October 30, 2000, the Congress enacted GISRA, which became effective November 29, 2000, and is in effect for 2 years after this date. GISRA supplements information security requirements established in the Computer Security Act of 1987, the Paperwork Reduction Act of 1995, and the Clinger-Cohen Act of 1996 and is consistent with existing information security guidance issued by OMB and NIST, as well as audit and best practice guidance issued by GAO. 
Most importantly, however, GISRA consolidates these separate requirements and guidance into an overall framework for managing information security and establishes new annual review, independent evaluation, and reporting requirements to help ensure agency implementation and both OMB and congressional oversight. The law assigned specific responsibilities to OMB, agency heads and chief information officers (CIOs), and the IGs. OMB is responsible for establishing and overseeing policies, standards, and guidelines for information security. This includes the authority to approve agency information security programs; however, the law delegates OMB’s responsibilities regarding national security systems to national security agencies. OMB is also required to submit an annual report to the Congress summarizing results of agencies’ evaluations of their information security programs. GISRA does not specify a date for this report. Each agency, including national security agencies, is to establish an agencywide risk-based information security program, overseen by the agency CIO, that ensures information security is practiced throughout the life cycle of each agency system. 
Specifically, this program is to include

- periodic risk assessments that consider internal and external threats to the integrity, confidentiality, and availability of systems, and to data supporting critical operations and assets;
- the development and implementation of risk-based, cost-effective policies and procedures to provide security protections for information collected or maintained by or for the agency;
- training on security responsibilities for information security personnel and on security awareness for agency personnel;
- periodic management testing and evaluation of the effectiveness of policies, procedures, controls, and techniques;
- a process for identifying and remediating any significant deficiencies;
- procedures for detecting, reporting, and responding to security incidents; and
- an annual program review by agency program officials.

In addition to the responsibilities listed above, GISRA requires each agency to have an annual independent evaluation of its information security program and practices, including control testing and compliance assessment. The evaluations of non–national-security systems are to be performed by the agency IG or an independent evaluator, and the results of these evaluations are to be reported to OMB. For the evaluation of national security systems, special provisions include designation of evaluators by national security agencies, restricted reporting of evaluation results, and an audit of the independent evaluation performed by the IG or an independent evaluator. For national security systems, only the results of each audit of an evaluation are to be reported to OMB. Finally, GISRA also assigns additional responsibilities for information security policies, standards, guidance, training, and other functions to other agencies. These agencies are NIST, the Department of Defense, the intelligence community, the Attorney General (Department of Justice), the General Services Administration, and the Office of Personnel Management. 
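The required program elements listed above lend themselves to a simple gap check. The sketch below is purely illustrative, assuming shorthand element names of my own; it is not statutory language or an official checklist.

```python
# Illustrative sketch: GISRA's required program elements as a gap checklist.
# The element names are shorthand paraphrases of the statute, not official terms.

REQUIRED_ELEMENTS = {
    "risk_assessments",         # periodic, covering internal and external threats
    "policies_and_procedures",  # risk-based, cost-effective protections
    "security_training",        # role-based training plus general awareness
    "testing_and_evaluation",   # periodic management testing of controls
    "remediation_process",      # identifying and fixing significant deficiencies
    "incident_procedures",      # detecting, reporting, and responding to incidents
    "annual_program_review",    # review by agency program officials
}

def missing_elements(program):
    """Return the required elements an agency program does not yet cover."""
    return REQUIRED_ELEMENTS - set(program)

# Hypothetical agency program covering only three of the seven elements.
agency_program = {"risk_assessments", "incident_procedures", "security_training"}
gaps = missing_elements(agency_program)
```

A real program review would of course assess the quality, not the mere presence, of each element; the point is only that the statute defines a fixed, auditable set of requirements.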
With GISRA expiring on November 29, 2002, H.R. 3844 proposes to permanently authorize information security legislation that essentially retains the same purposes as GISRA, as well as many of GISRA’s information security program, evaluation, and reporting requirements. It would also authorize funding to carry out its provisions for 5 years, thereby providing for periodic congressional oversight of the implementation and effectiveness of these requirements. We believe that continued authorization of information security legislation is essential to improving federal information security. As emphasized in our March 2002 testimony, the initial implementation of GISRA was a significant step for agencies, the administration, and the Congress in addressing the serious, pervasive weaknesses in the federal government’s information security. GISRA consolidated security requirements that existed in law and policy before GISRA and put into law the following important additional requirements, which are continued in H.R. 3844. First, GISRA requires agency program managers and CIOs to implement a risk-based security management program covering all operations and assets of the agency, including those provided or managed for the agency by others. Instituting such an approach is important since many agencies had not effectively evaluated their information security risks and implemented appropriate controls. Our studies of public and private best practices have shown that effective security program management requires implementing a process that provides for a cycle of risk management activities as now included in GISRA. Moreover, other efforts to improve agency information security will not be fully effective and lasting unless they are supported by a strong agencywide security management program. Second, GISRA requires an annual independent evaluation of each agency’s information security program. 
Individually, as well as collectively, these evaluations can provide much needed information for improved oversight by OMB and the Congress. Our years of auditing agency security programs have shown that independent tests and evaluations are essential to verifying the effectiveness of computer-based controls. Audits can also evaluate an agency’s implementation of management initiatives, thus promoting management accountability. Annual independent evaluations of agency information security programs will help drive reform because they will spotlight both the obstacles and progress toward improving information security and provide a means of measuring progress, much like the financial statement audits required by the Government Management Reform Act of 1994. Further, independent reviews proved to be an important mechanism for monitoring progress and uncovering problems that needed attention in the federal government’s efforts to meet the Year 2000 computing challenge. Third, GISRA takes a governmentwide approach to information security by accommodating a wide range of information security needs and applying requirements to all agencies, including those engaged in national security. This is important because the information security needs of civilian agency operations and those of national security operations have converged in recent years. In the past, when sensitive information was more likely to be maintained on paper or in stand-alone computers, the main concern was data confidentiality, especially as it pertained to classified national security data. Now, virtually all agencies rely on interconnected computers to maintain information and carry out operations that are essential to their missions. While the confidentiality needs of these data vary, all agencies must be concerned about the integrity and the availability of their systems and data. It is important for all agencies to understand these various types of risks and take appropriate steps to manage them. 
Fourth, the annual reporting requirements provide a means for both OMB and the Congress to oversee the effectiveness of agency and governmentwide information security, measure progress in improving information security, and consider information security in budget deliberations. In addition to management reviews, annual IG reporting of the independent evaluation results to OMB and OMB’s reporting of these results to the Congress provide an assessment of agencies’ information security programs on which to base oversight and budgeting activities. Such oversight is essential for holding agencies accountable for their performance, as was demonstrated by the OMB and congressional efforts to oversee the Year 2000 computer challenge. This reporting also facilitates a process to help ensure consistent identification of information security weaknesses by both the IG and agency management. The first-year implementation of GISRA also yielded significant benefits in terms of agency focus on information security. A number of agencies stated that as a result of implementing GISRA, they are taking significant steps to improve their information security programs. For example, one agency stated that the law provided it with the opportunity to identify some systemic program-level weaknesses for which it plans to undertake separate initiatives targeted specifically to improve the weaknesses. Other benefits agencies observed included (1) higher visibility of information security within the agencies, (2) increased awareness of information security requirements among department personnel, (3) recognition that program managers are to be held accountable for the information security of their operations, (4) greater agency consideration of security throughout the system life cycle, and (5) justification for additional resources and funding needed to improve security. 
Agency IGs also viewed GISRA as a positive step toward improving information security, particularly by increasing agency management’s focus on this issue. Implementation of GISRA has also resulted in important actions by the administration, which, if properly carried out, should continue to improve information security in the federal government. For example, OMB has issued guidance that information technology investments will not be funded unless security is incorporated into and funded as part of each investment, and NIST has established a Computer Security Expert Assist Team to review agencies’ computer security management. The administration also has plans to

- direct large agencies to undertake a review to identify and prioritize critical assets within the agencies and to identify their interrelationships with other agencies and the private sector;
- conduct a cross-government review to ensure that all critical government processes and assets have been identified;
- integrate security into the President’s Management Agenda Scorecard;
- develop workable measures of performance;
- develop electronic training on mandatory topics, including security; and
- explore methods to disseminate vulnerability patches to agencies more effectively.

Such benefits and planned actions demonstrate the importance of GISRA’s requirements and the significant impact they have had on information security in the federal government. H.R. 3844 proposes a number of changes and clarifications that we believe could strengthen information security requirements, some of which address issues noted in the first-year implementation of GISRA. Currently, agencies have wide discretion in deciding what computer security controls to implement and the level of rigor with which to enforce these controls. In theory, some discretion is appropriate since, as OMB and NIST guidance state, the level of protection that agencies provide should be commensurate with the risk to agency operations and assets. 
In essence, one set of specific controls will not be appropriate for all types of systems and data. Nevertheless, our studies of best practices at leading organizations have shown that more specific guidance is important. In particular, specific mandatory standards for specified risk levels can clarify expectations for information protection, including audit criteria; provide a standard framework for assessing information security risk; help ensure that shared data are appropriately and consistently protected; and reduce demands on already limited agency information security resources to independently develop security controls. In response to this need, H.R. 3844 includes a number of provisions that would require the development of, promulgation of, and compliance with minimum mandatory management controls for securing information and information systems to manage risks as determined by agencies. Specifically, NIST, in coordination with OMB, would be required to develop (1) standards and guidelines for categorizing the criticality and sensitivity of agency information according to the control objectives of information integrity, confidentiality, and availability, and a range of risk levels, and (2) minimum information security requirements for each information category. OMB would issue standards and guidelines based on the NIST-developed information and would require agencies to comply with them. This increases OMB’s information security authority, given that the Secretary of Commerce is currently required by the Computer Security Act to issue such standards. These standards would include (1) minimum mandatory requirements and (2) standards otherwise considered necessary for information security. Agencies may adopt standards more stringent than those NIST provides, and H.R. 3844 would require protections beyond the minimum requirements where the nature of the information security risks warrants them. 
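The categorization scheme described above (rating information against the integrity, confidentiality, and availability objectives across a range of risk levels, then attaching minimum controls to each category) can be sketched as follows. This is a hedged illustration only: the level names, the "high water mark" rule, and the control names are assumptions of mine, not requirements drawn from the bill or from NIST guidance.

```python
# Illustrative sketch of risk-level categorization with minimum controls.
# Level names, the "high water mark" rule, and control names are hypothetical.

LEVELS = {"low": 1, "moderate": 2, "high": 3}

def categorize(confidentiality, integrity, availability):
    """Overall category is the highest impact rating among the three objectives."""
    ratings = (confidentiality, integrity, availability)
    return max(ratings, key=lambda r: LEVELS[r])

# Minimum mandatory controls keyed by overall category (cumulative, hypothetical).
MINIMUM_CONTROLS = {
    "low": ["access control policy", "security awareness training"],
    "moderate": ["access control policy", "security awareness training",
                 "audit logging", "contingency plan"],
    "high": ["access control policy", "security awareness training",
             "audit logging", "contingency plan", "incident response testing"],
}

# A system with high integrity needs is categorized "high" overall,
# so it must meet the full set of minimum controls for that level.
category = categorize("moderate", "high", "low")
required = MINIMUM_CONTROLS[category]
```

Under the bill, agencies could layer more stringent protections on top of whichever minimum set applies, but could not waive it.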
Waiver of the standards is not permitted; they are intended to provide a consistent information security approach across all agencies, while meeting the mission-specific needs of each agency. Thus, agencies would be required to categorize their information and information systems according to control objectives and risk levels and to meet the minimum information security requirements. H.R. 3844 seeks to improve accountability and congressional oversight by clarifying agency reporting requirements and ensuring that the Congress and GAO have access to information security evaluation results. In particular, it requires agencies to submit an annual report to both OMB and the Comptroller General. This reporting requirement is in addition to the requirement in both GISRA and H.R. 3844 that IGs report the results of independent evaluations to OMB, and it would help ensure that the Congress receives the information it needs for oversight of federal information security and related budget deliberations. However, to ensure that agencies provide consistent and meaningful information in their reports, any such reporting requirement should specify what these reports are to address. As reported in our March 2002 testimony, during first-year implementation of GISRA, OMB informed the agencies that it considered GISRA material the CIOs prepared for OMB to be predecisional and not releasable to the public, the Congress, or GAO. OMB also considered agencies’ corrective action plans to contain predecisional budget information and would not authorize agencies to release them to us. Later, OMB did authorize the agencies to provide copies of their executive summaries, and through continued negotiations with OMB since our March testimony, many agencies are now providing us with the more detailed information that they submitted to OMB. 
We are continuing to work with OMB to obtain appropriate information from agencies’ first-year GISRA corrective action plans and to develop a process whereby this information can be routinely provided to the Congress in the future. The Congress should have consistent and timely information for overseeing agencies’ efforts to implement information security requirements and take corrective actions, as well as for budget deliberations. In our report being released today, we recommend that OMB authorize the heads of federal departments and agencies to release information from their corrective action plans to the Congress and GAO that would (1) identify specific weaknesses to be addressed, their relative priority, the actions to be taken, and the time frames for completing these actions and (2) provide their quarterly updates on the status of completing these actions. In commenting on our recommendation, OMB stated that it recognizes Congress’s oversight role regarding agencies’ actions to correct information security weaknesses and is continuing to develop a solution for next year’s reporting to provide to the Congress information on agencies’ corrective actions. However, OMB believed that removing predecisional information from current-year plans would be difficult and is not having the agencies prepare information on their current plans that would be releasable to the Congress. One way to help ensure that the Congress receives such information would be to specifically require that agencies report it to the Congress and GAO. In our March 2002 testimony, we reported that we were unable to obtain complete information on GISRA implementation for national security systems. Specifically, OMB did not summarize the overall results of the audits of the evaluations for national security systems in its report to the Congress, and the Director of Central Intelligence declined to provide information for our review. 
In this regard, our report being released today includes a recommendation that OMB provide the Congress with appropriate summary information on the results of the audits of the evaluations for information security programs for national security systems. While we were unable to evaluate this aspect of GISRA implementation, H.R. 3844 proposes to modify GISRA in a number of ways to clarify the treatment of national security systems and to simplify statutory requirements while maintaining protection for the unique requirements of such systems within the risk management approach of the law. First, the bill replaces GISRA’s use of the term “mission critical system.” Instead, H.R. 3844 uses the traditional term “national security system,” maintaining the longstanding statutory treatment of military and intelligence mission-related systems and classified systems. It would also eliminate a separate category of systems included in GISRA’s definition of mission critical system—debilitating impact systems—that broadened the exemption from GISRA for these systems. Second, consistent with the traditional definitions of national security systems, H.R. 3844 provides more straightforward distinctions between national security and non–national-security systems. This simplifies the law and could simplify compliance for agencies operating national security systems. The bill, for example, replaces GISRA’s delegation of policy and oversight responsibilities for national security systems from OMB to national security agencies by simply continuing longstanding limitations on OMB and NIST authority over national security systems. Third, H.R. 3844 makes a number of changes to GISRA to streamline agency evaluation requirements that affect national security systems: The bill clarifies procedures for evaluating national security systems within the context of agencywide evaluations. 
The results of the evaluations of national security systems, not the evaluations themselves, are to be submitted to OMB, which will then prepare a summary report for the Congress. As in GISRA, the actual evaluations and any descriptions of intelligence-related national security systems are to be made available to the Congress only through the intelligence committees. The requirement for an audit of the evaluation of national security systems is eliminated. Instead, agencies are required to provide appropriate protections for national security information and, as discussed above, submit only the results of the evaluations to OMB. We agree that these changes provide a more traditional definition of national security systems, and that such systems should be appropriately considered within the context of a comprehensive evaluation of agency information security. We also believe that requirements for reporting evaluation results to OMB and for OMB to prepare a summary report for the Congress would provide information needed for congressional oversight. This reporting requirement is consistent with our recommendation contained in the report that we are issuing today: that OMB provide the Congress with appropriate summary information on evaluation results for national security systems. A number of provisions in the proposed legislation establish additional requirements for federal agencies that we believe would strengthen implementation and management of their information security programs. Some of the more significant requirements are as follows: Agencies would be required to comply with all standards applicable to their systems, including the proposed mandatory minimum control requirements and those for national security systems. 
Thus, in implementing an agencywide risk-management approach to information security, agencies with both national security and non–national-security systems would need to have an agencywide information security program that can address the security needs and standards for both kinds of systems. Under the bill, the requirement for designating a senior agency information security officer is more detailed than that under GISRA. This official is to (1) carry out the CIO’s responsibilities under the act; (2) possess appropriate professional qualifications; (3) have information security as his or her primary duty; and (4) head an information security office with the mission and resources needed to help ensure agency compliance with the act. H.R. 3844 also requires each agency to document its agencywide security program and prepare subordinate plans as needed for networks, facilities, and systems. GISRA uses both the terms “security program” and “security plan” and does not specifically require that the program be documented. Our guidance for auditing information system controls states that entities should have a written plan that clearly describes the entity’s security program and policies and procedures that support it. H.R. 3844 stresses the importance of agencies having plans and procedures to ensure the continuity of operations for information systems that support the operations and assets of the agency. Such plans, procedures, and other service continuity controls are important because they help ensure that when unexpected events occur, critical operations will continue without undue interruption and that crucial, sensitive data are protected. Losing the capability to process, retrieve, and protect electronically maintained information can significantly affect an agency’s ability to accomplish its mission. 
If service continuity controls are inadequate, even relatively minor interruptions can result in lost or incorrectly processed data, which can cause financial losses, expensive recovery efforts, and inaccurate or incomplete information. For some operations, such as those involving health care or safety, system interruptions could even result in injuries or loss of life. GAO and IG audit work indicates that most of the 24 large agencies we reviewed had weaknesses in service continuity controls, such as plans that were incomplete or not fully tested. H.R. 3844 maintains NIST’s standards development mission for information systems, federal information systems, and federal information security (except for national security and classified systems), but updates the mission of NIST. Some of H.R. 3844’s more significant changes to NIST’s role and responsibilities would require NIST to

- develop mandatory minimum information security requirements and guidance for detecting and handling information security incidents and for identifying an information system as a national security system;
- establish a NIST Office for Information Security Programs, to be headed by a senior executive level director; and
- report annually to OMB, creating a more active role for NIST in governmentwide information security oversight and helping to ensure that OMB receives regular updates on the state of federal information security.

In addition, H.R. 3844 would revise the National Institute of Standards and Technology Act to rename NIST’s Computer System Security and Privacy Advisory Board as the Information Security Advisory Board and to ensure that this board has sufficient independence and resources to consider information security issues and provide useful advice to NIST. 
The bill would strengthen the role of the board by (1) mandating that it provide advice not only to NIST in developing standards, but also to OMB, which promulgates such standards; (2) requiring that it prepare an annual report; and (3) authorizing it to hold its meetings where and when it chooses. Our analysis of H.R. 3844 identified other proposed changes and requirements that could enhance federal information security, as well as help improve compliance by clarifying inconsistent and unclear terms and provisions, streamlining a number of GISRA requirements, and repealing duplicative provisions in the Computer Security Act and the Paperwork Reduction Act. These changes include the following: Information security: H.R. 3844 would create a definition for the term “information security” to address three widely accepted objectives: integrity, confidentiality, and availability. Including these objectives in statute highlights that information security involves not only protecting information from disclosure (confidentiality), but also protecting the ability to use and rely on information (availability and integrity). Information technology: H.R. 3844 would retain GISRA’s use of the Clinger-Cohen Act definition of “information technology.” However, H.R. 3844 clarifies the scope of this term by using consistent references to “information systems used or operated by any agency or by a contractor of an agency or other organization on behalf of an agency.” This emphasizes that H.R. 3844 is intended to cover all systems used by or on behalf of agencies, not just those operated by agency personnel. As discussed previously, both OMB’s and GAO’s analyses of agencies’ first-year GISRA reporting showed significant weaknesses in information security management of contractor-provided or -operated systems. Independent evaluations: The legislation would continue the GISRA requirement for an annual independent evaluation of each agency’s information security program and practices. 
However, several language changes are proposed to clarify this requirement. For example, the word “representative” would be substituted for “appropriate” in the requirement that the evaluation involve the examination of a sample of systems or procedures. In addition, the bill would also require that the evaluations be performed in accordance with generally accepted government auditing standards, and that GAO periodically evaluate agency information security policies and practices. We agree with these proposed changes to independent evaluations, but as noted in our March 2002 testimony, these evaluations and the expanded coverage of all agency systems under GISRA and H.R. 3844 place a significant burden on existing audit capabilities and require ensuring that agency IGs have the resources necessary to either perform or contract for the needed work. Federal information security incident center: The bill would direct OMB to oversee the establishment of a central federal information security incident center and expands GISRA references to this function. While not specifying which federal agency should operate this center, H.R. 3844 specifies that the center would provide timely technical assistance to agencies and other operators of federal information systems; compile and analyze information security incident information; inform agencies about information security threats and vulnerabilities; and consult with national security agencies and other appropriate agencies, such as an infrastructure protection office. H.R. 3844 would also require that agencies with national security systems share information security information with the center to the extent consistent with standards and guidelines for national security systems. This provision should encourage interagency communication and consultation, while preserving the discretion of national security agencies to determine appropriate information sharing. Technical and conforming amendments: In addition to its substantive provisions, H.R. 
3844 would make a number of minor changes to GISRA and other statutes to ensure consistency within and across these laws. These changes include the elimination of certain provisions in the Paperwork Reduction Act and the Computer Security Act that are replaced by the requirements of GISRA and H.R. 3844. As discussed previously, GISRA established important program, evaluation, and reporting requirements for information security; and the first-year implementation of GISRA has resulted in a number of important administration actions and significant agency benefits. In addition, H.R. 3844 would continue and strengthen these requirements to further improve federal information security. However, even with these and other information security-related improvement efforts undertaken in the past few years—such as the President’s creation of the Office of Homeland Security and the President’s Critical Infrastructure Protection Board— challenges remain. Given the events of September 11, and reports that critical operations and assets continue to be highly vulnerable to computer-based attacks, the government still faces a challenge in ensuring that risks from cyber threats are appropriately addressed in the context of the broader array of risks to the nation’s welfare. Accordingly, it is important that federal information security efforts be guided by a comprehensive strategy for improvement. In 1998, shortly after the initial issuance of Presidential Decision Directive (PDD) 63 on protecting the nation’s critical infrastructure, we recommended that OMB, which, by law, is responsible for overseeing federal information security, and the assistant to the president for national security affairs work together to ensure that the roles of new and existing federal efforts were coordinated under a comprehensive strategy. 
Our later reviews of the National Infrastructure Protection Center and of broader federal efforts to counter computer-based attacks showed that there was a continuing need to clarify responsibilities and critical infrastructure protection objectives. As I emphasized in my March 2002 testimony, as the administration refines the strategy that it has begun to lay out in recent months, it is imperative that it take steps to ensure that information security receives appropriate attention and resources and that known deficiencies are addressed. These steps would include the following: It is important that the federal strategy delineate the roles and responsibilities of the numerous entities involved in federal information security and related aspects of critical infrastructure protection. Under current law, OMB is responsible for overseeing and coordinating federal agency security, and NIST, with assistance from the National Security Agency, is responsible for establishing related standards. In addition, interagency bodies—such as the CIO Council and the entities created under PDD 63 on critical infrastructure protection—are attempting to coordinate agency initiatives. Although these organizations have developed fundamentally sound policies and guidance and have undertaken potentially useful initiatives, effective improvements are not yet taking place. Further, it is unclear how the activities of these many organizations interrelate, who should be held accountable for their success or failure, and whether they will effectively and efficiently support national goals. Ensuring effective implementation of agency information security and critical infrastructure protection plans will require active monitoring by the agencies to determine if milestones are being met and testing to determine if policies and controls are operating as intended. Routine periodic audits, such as those required by GISRA and H.R. 3844, could allow for more meaningful performance measurement. 
In addition, the annual evaluation, reporting, and monitoring process established through these provisions is an important mechanism, previously missing, to hold agencies accountable for implementing effective security and to manage the problem from a governmentwide perspective. Agencies must have the technical expertise they need to select, implement, and maintain controls that protect their information systems. Similarly, the federal government must maximize the value of its technical staff by sharing expertise and information. As highlighted during the Year 2000 challenge, the availability of adequate technical and audit expertise is a continuing concern for agencies. Agencies must also be able to allocate resources sufficient to support their information security and infrastructure protection activities. Funding for security is already embedded to some extent in agency budgets for computer system development efforts and routine network and system management and maintenance. However, some additional amounts are likely to be needed to address specific weaknesses and new tasks. OMB and congressional oversight of future spending on information security will be important to ensuring that agencies are not using the funds they receive to continue ad hoc, piecemeal security fixes that are not supported by a strong agency risk management process. Expanded research is needed in the area of information systems protection. While a number of research efforts are underway, experts have noted that more is needed to achieve significant advances. 
As the director of the CERT® Coordination Center testified before this subcommittee last September, “It is essential to seek fundamental technological solutions and to seek proactive, preventive approaches, not just reactive, curative approaches.” In addition, in its December 2001 third annual report, the Advisory Panel to Assess Domestic Response Capabilities for Terrorism Involving Weapons of Mass Destruction (also known as the Gilmore Commission) recommended that the Office of Homeland Security develop and implement a comprehensive plan for research, development, test, and evaluation to enhance cyber security.
The Federal Information Security Management Act of 2002 reauthorizes and expands the information security, evaluation, and reporting requirements enacted in the National Defense Authorization Act for Fiscal Year 2001. Concerned that pervasive information security weaknesses placed federal operations at significant risk of disruption, tampering, fraud, and inappropriate disclosure of sensitive information, Congress enacted the Government Information Security Reform Act (GISRA) to provide for more effective oversight. The Federal Information Security Management Act also changes and clarifies information security issues noted in the first year of GISRA implementation. In particular, the bill requires the development of, promulgation of, and compliance with minimum mandatory management controls for securing information and information systems; requires annual agency reporting to both the Office of Management and Budget and the Comptroller General; and defines evaluation responsibilities for national security systems. To ensure that information security receives appropriate attention and resources and that known deficiencies are addressed, it will be necessary to delineate the roles and responsibilities of the numerous entities involved; obtain adequate technical expertise to select, implement, and maintain controls; and allocate sufficient agency resources to information security.
The United States shares nearly 4,000 miles of border with Canada stretching from the Pacific to the Atlantic coasts, and the U.S.-Canadian border is considered the world’s longest open border between two nations. There is a great deal of trade and travel across this border, and approximately 90 percent of Canada’s population lives within 100 miles of the U.S. border. While legal trade predominates, DHS reports networks of criminal activity and smuggling of drugs, currency, people, and weapons between the two countries. Annually, CBP reports making approximately 4,000 arrests and interdicting approximately 40,000 pounds of illegal drugs at and between the northern border ports of entry. Historically, these numbers have been significantly lower than those for the southwest border; however, DHS reports that the terrorist threat on the northern border is higher, given the large expanse of area with limited law enforcement coverage. DHS agencies are charged with protecting the nation and its citizens from threats of terrorism, as shown in table 1. CBP is the lead federal agency in charge of securing our nation’s borders and has three components with a mission to interdict illegal contraband and persons seeking to enter illegally at and between the land ports of entry. Two other DHS agencies, ICE and USCG, also have key roles. The ICE mission includes investigating and dismantling criminal organizations that transport persons and goods across the border illegally, while USCG executes its maritime security mission by providing patrol presence and operational response for all navigable waterways on the northern border, including the Great Lakes. DHS agencies leverage their border security efforts through partnerships with state, local, tribal, and Canadian law enforcement agencies to share intelligence and information and to conduct joint operations for interdiction and investigation of cross-border crime. 
DHS considers these collaborative efforts particularly important for the northern border in remote, sparsely populated areas. There has been growing concern within Congress over the number of personnel assigned to the northern border, the increasing amount of illegal activity, and the potential for terrorists to gain unlawful entry into the United States. There has also been concern with respect to the adequacy of facilities and physical infrastructure to accommodate the increasing volume of traffic. Congress has shown increasing interest in northern border security, first authorizing, and later directing, resource allocations to the northern border for personnel and improved technology. Congress has also established various statutory reporting requirements that are to provide updates on the status of northern border security. In addition to the 9/11 Act, for example, the Consolidated Appropriations Act, 2008, directs DHS to prepare and submit a biennial National Land Border Security Plan. This plan is to include a vulnerability, risk, and threat assessment of each port of entry located on the northern or southern border, beginning in January 2009. The DHS report, issued to Congress on February 29, 2008, was overseen and facilitated by CBP’s Office of the Executive Secretariat (OES). OES was formed in August 2007 to assign responsibilities for and coordinate the development of all CBP congressional reports, correspondence, and external requests for information. OES tasked the CBP Office for Secure Border Initiative with taking the lead in coordinating information gathering from the relevant CBP components. OES also received input from ICE and USCG in formulating the report. The information from these sources was compiled and reviewed within CBP, DHS, and the Office of Management and Budget (OMB) before submission to Congress. 
While the DHS report to Congress discusses northern border vulnerabilities and the initiatives in place as required by law, it does not identify the extent to which various vulnerabilities remain unaddressed or the recommendations and resources needed to close these security gaps. Without this information, it is difficult for Congress to consider future actions and resources needed on the northern border in the broader context of national security. The DHS report to Congress discusses northern border vulnerabilities and ongoing initiatives to improve northern border security consistent with the content of its planning, performance, and budget documents, but DHS does not link this information to show the extent to which security gaps remain on the northern border. The DHS report states that the northern border is vulnerable to the primary threats of terrorism, drug trafficking, and illegal immigration. This assessment was consistently supported by threat information obtained from Canadian officials and from DHS and CBP officials. According to these sources, northern border vulnerabilities are most actively exploited to smuggle illegal drugs and contraband; illegal immigration is a lesser problem. While DHS reports significant concern that terrorists can enter the United States undetected at or between the northern ports of entry, U.S. and Canadian officials agree that there is currently no credible intelligence or evidence indicating that there are terrorists in Canada planning an attack on U.S. soil. The DHS report lists initiatives its component agencies have underway to address vulnerabilities and achieve operational control of the border but does not mention progress made in this regard or how many border miles are under operational control. 
CBP reports on two such indicators, border miles under effective (or operational) control and border miles with increased situational awareness, as key performance measures and reports that it plans to increase and achieve control of the northern border by deploying a proper mix of personnel, technology, facilities, and partnerships at and between the ports of entry. While the DHS report lists initiatives in each of these areas, they are not linked to the reported vulnerabilities, and the report does not address the extent to which these initiatives mitigate or eliminate vulnerabilities at and between the ports of entry. The report also does not mention the timeline DHS is using to request and deploy the resources necessary to increase its level of control of the northern border. The absence of such information makes it difficult for Congress to consider future action and resources needed on the northern border in the context of other areas of national security. In terms of personnel, DHS lists ongoing initiatives for adequately staffing the northern ports of entry and hiring initiatives to increase staffing between the ports of entry by 2010. For the ports of entry, DHS describes its implementation of a workload staffing model that considers workload and processing times to help identify the number of personnel that should be deployed at each location, an effort that has resulted in the deployment of 190 CBP officers. Between the ports of entry, DHS does not provide its methodology for identifying adequate staffing, but it does describe initiatives to more than double the number of Border Patrol agents from fiscal years 2007 to 2010 in response to direction from Congress. It is unclear, however, to what extent these staffing initiatives will result in obtaining effective control of the border. 
For example, the report states that 190 CBP officers have been deployed to ports of entry, as indicated in part by the workload staffing model; however, DHS reports in its strategic plan for fiscal years 2008-2013 that additional CBP officers are needed at many ports of entry. Similarly, while DHS reports a commitment to meet statutory staffing goals between the ports of entry by the year 2010, OBP officials indicated that a greater number of agents would be needed to gain operational control of the northern border. In discussing technology initiatives, CBP reports that technology has been deployed at the northern ports of entry to address a number of vulnerabilities, but between the ports of entry, its discussion focuses on pilot projects intended to test capabilities for potential use on the northern border. At the ports of entry, CBP reports that much technology is in place to address vulnerabilities related to the transport of illegal radiological and nuclear materials, illegal contraband, and misrepresentation of identity through the use of fraudulent documents. Between the ports of entry, pilot projects address vulnerabilities related to the inability to detect low-flying aircraft, the inability to detect unauthorized border crossings in areas without law enforcement patrol, and the inability to share communications. The report does not discuss when the results of these projects will be available or the extent to which DHS would use these technologies, if successful, to address existing vulnerabilities. DHS also does not discuss initiatives to address the vulnerabilities cited in the report related to maritime security, such as the lack of video capabilities in marinas, the unregulated access that small private vessels have on the Great Lakes and other border waterways, and insufficient resources to access boats on the open water. 
Subsequent to the report issued to Congress, DHS provided a thorough discussion of the vulnerabilities and challenges in addressing these aspects of maritime security in the DHS Small Vessel Security Strategy, issued in April 2008. USCG stated that an implementation plan to guide agency actions in implementing the strategy would be finalized at the end of December 2008, but that this plan would not be released to the public due to its security classification. In discussing facilities, the DHS report describes ongoing initiatives to systematically review port of entry inspection facilities to identify the need for upgrade or replacement, and to develop a new standard station concept to accommodate the growth in the number of border agents between the ports of entry. The report describes the age and condition of some facilities and the volume of traffic and use. However, while the DHS strategic plan states that the department’s secure border program depends significantly on modernizing the ports of entry, there is no discussion in the report to Congress of the status of these efforts, when they will be completed, and how they currently affect northern border security. The DHS report lists various initiatives underway that establish binational partnerships or partnerships among U.S. federal, state, and local agencies to share information and improve communication and cooperation among agencies working along the border. Five binational partnerships were discussed: four with a broad focus on cross-border law enforcement efforts and one with a specific focus on preventing illegal air incursions. Six U.S. partnerships were also mentioned: one specific to smuggling on Indian reservations, two related to drug trafficking, two related to intelligence gathering, and one related to augmenting enforcement capacity by cross-designating federal authority to other agencies. 
DHS and CBP management documents support the report’s discussion of these partnerships as a key strategy for northern border security; however, there is no discussion of the extent to which these partnerships were responsible for increasing the level of control across the border or how they will do so in the future. The DHS report contains a section for recommendations to address northern border vulnerabilities, but the information provided is a restatement of initiatives already in place, without the recommendations for further action or additional resources required by law. Officials from DHS component agencies provided several reasons why this information was missing from the report. One reason was that the Secure Border Initiative (SBI) office, which was tasked with coordinating component agency contributions to the report, directed them to discuss their resource needs in terms of the existing budget; therefore, they did not discuss actions or resource requirements for future years. A second reason was that some components were satisfied with their current budget allocation. CBP officials stated that they supported the President’s budget and had nothing further to recommend or request in the report to Congress. A third reason was that some components did not have the information necessary to identify recommendations or additional resources. USCG officials indicated that the lack of departmentwide strategic direction for the northern border has made it difficult to identify specific resource needs. Similarly, ICE officials said that information was lacking to compare and assess overall resources devoted across various northern border agencies, initiatives, and border locations. DHS is developing strategic plans, a risk-management process, and new initiatives that could change the level and mix of resources needed to protect the northern border; however, most of these efforts were incomplete and unavailable for our review. 
Over the years, we have conducted evaluations of various border security activities, and our reports have included a number of recommendations for improvement. DHS action to fully implement these recommendations would help address northern border vulnerabilities. DHS and CBP have reported the need to provide a coherent framework to coordinate federal, state, local, and tribal northern border security efforts, and they are developing northern border strategic plans, as well as a risk-management process, to further these goals. Completion of these efforts should provide DHS with useful information in developing future reports to Congress on northern border security. DHS has completed, or begun efforts to develop, three strategic plans that will help address vulnerabilities on the northern border. Strategic plans help ensure that missions requiring a multiagency response are firmly aligned with articulated goals and objectives, and they help keep agencies focused on the desired “end state.” While DHS has developed a broad strategic plan to outline the department’s overall mission and objectives, it has begun to focus on the need to develop coordinated and unified strategies to address more specific concerns, such as northern border security. A key effort under development is an overall northern border strategic plan that will, for the first time, take all DHS component agencies into account in addressing the vulnerabilities that must be resolved to gain control of the northern border. CBP did not indicate when this plan may be completed. A second strategic plan under development will address security vulnerabilities in the air environment, such as the inability to detect low-flying aircraft. CBP officials stated that they were working on performance measures for this plan and estimate that it will be released in April 2009. 
DHS issued its third strategic plan, the Small Vessel Security Strategy, in April 2008 to help close existing maritime security gaps related to the small vessel environment on waterways such as the Great Lakes. To help component agencies achieve the major goals outlined in each of these strategic plans, DHS plans to develop implementation plans that are to describe specific actions component agencies will take in support of each objective, identify lead component agencies for these actions, and provide target completion dates. USCG has stated that the implementation plan for the small vessel strategic plan is scheduled to be issued for use by component agencies at the end of December 2009, but it will be considered security sensitive. Dates are not yet available for implementation plans to follow the remaining two strategic plans. Some DHS component agencies have acted to incorporate risk-management principles that provide information to prioritize and allocate resources for their individual programs and activities, as required by law and presidential directive, but DHS has not yet completed efforts to implement this approach departmentwide. Risk management is important for strengthening homeland security resource allocations, as the nation cannot afford to fully protect against every type of threat. Therefore, an approach is needed that considers how best to allocate resources based on factors such as probability and adverse consequence. The DHS goal is to develop a risk-management process that will assess risk and inform strategic planning, programming, budgeting, and execution processes across all of its component agencies and that will evaluate the risk-reduction effects among relevant DHS programs. However, achieving this goal has been difficult. While risk management has been used in the private and public sectors for decades, its application to homeland security and combating terrorism is relatively new and without a precedent framework. 
As such, the effort to assess risk across DHS component agencies and programs is still in its very early stages of development. DHS component agencies have identified resources to increase and achieve northern border security, but the need for these resources constantly evolves in response to various factors. For example, the DHS report described many pilot projects for new technology. OBP officials report that these projects, if successful, could reduce security vulnerabilities and current needs for other resources, such as existing technology, personnel, or infrastructure. However, these officials also indicate that new technology must be fully tested for operational effectiveness, and delays coupled with uncertainties of success have made it difficult to balance future resource investments in new technology with current investments in existing technology. Such balance is necessary to ensure security as well as effective stewardship of taxpayer dollars. Similarly, DHS officials discussed partnerships among federal, state, and local agencies to coordinate information and operations, either newly created or still in development, that could result in greater efficiencies in border security. However, time will tell whether these partnerships are sustainable and warrant a decrease or change in current estimated needs for personnel. External factors, such as the interplay among private parties, governments, and agencies, also influence actions in addressing security vulnerabilities. In Detroit, for example, CBP officials said that action to improve facilities at northern ports of entry was stymied by private ownership of property and landlocked facilities. In addition, the Small Vessel Security Strategy indicates that efforts to address maritime vulnerabilities were challenged by different practices or views among federal, state, and Canadian governments in balancing security needs with the freedom of the waterways expected by the small-vessel community. 
Further, ICE officials said that the scope of their authority in pursuing narcotics investigations influenced their actions in addressing some cross-border crimes. DHS has an opportunity to address some northern border vulnerabilities by fully implementing recommendations made in past evaluations of its security efforts. Over the past few years, we have conducted evaluations and issued a number of reports related to the security of the U.S. border both at and between ports of entry (see the Related GAO Products section at the end of this report). In some instances, our reports included recommendations addressing vulnerabilities in border security, including the northern border; in other cases, our reports and recommendations were more general but, when implemented, would benefit the northern border. We reviewed recommendations resulting from GAO evaluations conducted from fiscal years 2005 through 2008 and identified 11 reports containing 50 recommendations that had the potential to address vulnerabilities in border security or to address weaknesses in key initiatives. At the time of our review, DHS had implemented 11 of these 50 recommendations. For example, DHS implemented a recommendation to formalize a performance measure for the traveler inspection program that would help agency management and Congress monitor effectiveness in apprehending inadmissible aliens and other violators. However, 39 recommendations from the 11 GAO reports remain open. In some cases, recommendations were open because DHS and other federal agencies had not yet had time to implement them. For example, 18 of the 39 open recommendations were from GAO reports issued within the last fiscal year. In regard to the remaining 21 recommendations, DHS and other agencies agreed to take action, but at least 1 and, in some cases, over 3 years have passed without full implementation. 
Standards for Internal Control in the Federal Government state that agencies are to ensure that findings of audits and other reviews are promptly resolved. The time necessary to resolve recommendations varies depending on the type of action required. However, DHS does not have a transparent process to show how long it will take to implement each recommendation considering the resources, risk level, and complexity of effort required. Timely implementation of recommendations would help address vulnerabilities related to a variety of border security initiatives. Some of the older recommendations that have not been fully implemented include those to improve screening of travelers at ports of entry to ensure legal entry, preclude cross-border transport of illicit nuclear materials, reduce risks in delivering key technology for border surveillance and for information sharing, and increase information sharing and coordination among federal, state, and local law enforcement agencies, as shown in table 2 and appendix I. We believe that these outstanding recommendations continue to have merit and should be implemented. Federal agency reporting requirements, such as those contained in the 9/11 Act, can provide Congress with important information for debating policy and allocating scarce resources, and the level of agency responsiveness can either support or hinder these efforts. While the DHS report to Congress provided information on the status of its efforts, there is little sense of the relative effect these efforts have had in protecting the northern border, and what additional action or resources may be needed in the future. Requirements in law to periodically assess the status of northern border security provide DHS with additional opportunity to highlight information that can best meet congressional needs. 
Completion of DHS efforts to develop a northern border strategic plan and risk-management process to prioritize action and funding could lead to better understanding among DHS, its component agencies, and Congress in determining whether resources are most effectively allocated across initiatives, border locations, and responsible agencies. However, balancing current and future funding for border security will remain challenging, as the resource needs of the northern border will continue to evolve in response to the relative success of new initiatives. In the meantime, implementing recommendations for improving border security in a more timely fashion would help reduce the nation’s risk due to unaddressed vulnerabilities. To provide Congress with information that will facilitate policy discussions and resource decisions for northern border security, we recommend that, for future reporting requirements, the Secretary of Homeland Security include more specific information on the actions, resources, and time frame needed to improve security of the northern border, along with any attendant uncertainties, and the basis used to prioritize action and resources for northern border security relative to other areas of national security. We requested comments on a draft of this report from the Secretary of Homeland Security and the Attorney General. In their response, DHS and CBP agreed with our recommendation and stated that CBP will work with the department to implement it through the approved budget process. DOJ did not provide formal comments. In its comments, DHS stated that our report said DHS and other agencies should proceed to adopt and address all of the recommendations from previous reports without any assessment of priority based on risk. 
Our intent in discussing these recommendations was to point out potential security vulnerabilities that exist, not to imply that all of these recommendations were of equal importance or that risk-based prioritization should not be applied when addressing them. GAO has advocated the use of risk management principles, and using them to sequence actions on open recommendations would seem to be appropriate. We have added language to clarify that while the definition of timely implementation will vary across recommendations, DHS lacks a transparent process to show how long it will take to implement each recommendation considering the resources, risk level, and complexity of effort required. DHS’s comments are reprinted in appendix II. DHS and DOJ also offered technical comments, which we considered and incorporated where appropriate. We are providing copies of this report to the Senate and House committees that have authorization and oversight responsibilities for homeland security. We are also sending copies to the Secretary of Homeland Security, the Attorney General, and other interested parties. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. In the past, GAO has offered numerous recommendations to the Department of Homeland Security (DHS) related to border security. Although previous recommendations are not specific to the northern border, many touch on aspects that affect various elements of northern border security. 
Many recommendations made by GAO concerning general border security, nuclear security, technology, and interagency cooperation and information sharing have yet to be implemented by DHS. Fully implementing these recommendations could provide great benefits to DHS and the nation in terms of strengthening general border security and, by extension, security of the northern border. Tables 3 through 6 detail 39 selected open recommendations related to border security vulnerabilities. In addition to the contact named above, Cindy Ayers, Assistant Director, and Adam Couvillion, Analyst-in-Charge, managed this assignment. David Holt made significant contributions to the work. Amanda Miller and Michele Fejfar assisted with design, methodology, and data analysis. Linda Miller provided assistance in report preparation, and Frances Cook provided legal support. Secure Border Initiative: DHS Needs to Address Significant Risks in Delivering Key Technology Investment. GAO-08-1086. Washington, D.C.: September 22, 2008. Secure Border Initiative: Observations on Deployment Challenges. GAO-08-1141T. Washington, D.C.: September 10, 2008. Risk Management: Strengthening the Use of Risk Management Principles in Homeland Security. GAO-08-904T. Washington, D.C.: June 25, 2008. Nuclear Security: NRC and DHS Need to Take Additional Steps to Better Track and Detect Radioactive Materials. GAO-08-598. Washington, D.C.: June 19, 2008. Border Security: Summary of Covert Tests and Security Assessments for the Senate Committee on Finance, 2003-2007. GAO-08-757. Washington, D.C.: May 16, 2008. Homeland Security: DHS Has Taken Actions to Strengthen Border Security Program and Operations, but Challenges Remain. GAO-08-542T. Washington, D.C.: March 6, 2008. Border Security: Despite Progress, Weaknesses in Traveler Inspections Exist at Our Nation’s Ports of Entry. GAO-08-329T. Washington, D.C.: January 3, 2008. 
Border Security: Despite Progress, Weaknesses in Traveler Inspections Exist at Our Nation’s Ports of Entry. GAO-08-219. Washington, D.C.: November 5, 2007. Homeland Security: Federal Efforts Are Helping to Alleviate Some Challenges Encountered by State and Local Information Fusion Centers. GAO-08-35. Washington, D.C.: October 30, 2007. Secure Border Initiative: Observations on Selected Aspects of SBInet Program Implementation. GAO-08-131T. Washington, D.C.: October 24, 2007. Terrorist Watch List Screening: Opportunities Exist to Enhance Management Oversight, Reduce Vulnerabilities in Agency Screening Processes, and Expand Use of the List. GAO-08-110. Washington, D.C.: October 11, 2007. Border Security: Despite Progress, Weaknesses in Traveler Inspections Exist at Our Nation’s Ports of Entry. GAO-08-123SU. Washington, D.C.: October 5, 2007 (SBU). Border Security: Security Vulnerabilities at Unmanned and Unmonitored U.S. Border Locations. GAO-07-884T. Washington, D.C.: September 27, 2007. Combating Nuclear Smuggling: Additional Actions Needed to Ensure Adequate Testing of Next Generation Radiation Detection Equipment. GAO-07-1247T. Washington, D.C.: September 18, 2007. Border Security: Security of New Passports and Visas Enhanced, but More Needs to Be Done to Prevent Their Fraudulent Use. GAO-07-1006. Washington, D.C.: July 31, 2007. Border Security: Long-term Strategy Needed to Keep Pace with Increasing Demand for Visas. GAO-07-847. Washington, D.C.: July 13, 2007. Border Security: US-VISIT Program Faces Strategic, Operational, and Technological Challenges at Land Ports of Entry. GAO-07-378T. Washington, D.C.: January 31, 2007. Border Security: US-VISIT Program Faces Strategic, Operational, and Technological Challenges at Land Ports of Entry. GAO-07-248. Washington, D.C.: December 6, 2006. Border Security: US-VISIT Program Faces Strategic, Operational, and Technological Challenges at Land Ports of Entry. GAO-07-56SU. Washington, D.C.: November 13, 2006 (SBU). 
Homeland Security: Opportunities Exist to Enhance Collaboration at 24/7 Operations Centers Staffed by Multiple DHS Agencies. GAO-07-89. Washington, D.C.: October 20, 2006.

Border Security: Stronger Actions Needed to Assess and Mitigate Risks of the Visa Waiver Program. GAO-06-1090T. Washington, D.C.: September 7, 2006.

Border Security: Continued Weaknesses in Screening Entrants into the United States. GAO-06-976T. Washington, D.C.: August 2, 2006.

Combating Nuclear Smuggling: DHS Has Made Progress Deploying Radiation Detection Equipment at U.S. Ports-of-Entry, but Concerns Remain. GAO-06-389. Washington, D.C.: March 22, 2006.

Risk Management: Further Refinements Needed to Assess Risks and Prioritize Protective Measures at Ports and Other Critical Infrastructure. GAO-06-91. Washington, D.C.: December 15, 2005.

Border Security: Opportunities to Increase Coordination of Air and Marine Assets. GAO-05-543. Washington, D.C.: August 12, 2005.
Covering nearly 4,000 miles of land and water from Washington to Maine, the U.S.-Canadian border is the longest undefended border in the world. Various Department of Homeland Security (DHS) component agencies share responsibility for northern border security, primarily U.S. Customs and Border Protection (CBP), in collaboration with other federal, state, local, tribal, and Canadian entities.

The Implementing Recommendations of the 9/11 Commission Act of 2007 required the Secretary of Homeland Security to submit a report to Congress that addresses the vulnerabilities along the northern border and provides recommendations and required resources to address them. The act also required GAO to review and comment on this report. In response to this mandate, GAO examined (1) the extent to which the DHS report to Congress is responsive to the legislative requirements and (2) actions that may be necessary to address northern border vulnerabilities in addition to the actions addressed in the report. To conduct this work, GAO reviewed DHS plans, reports, and other documents, and interviewed DHS officials.

The DHS February 2008 report to Congress is not fully responsive to legislative requirements in providing information for improving northern border security. In particular, DHS provided a listing of northern border vulnerabilities and initiatives to address them, but did not include recommendations and additional resources that are needed to protect the northern border. DHS officials provided several reasons for the lack of specificity and gaps in reported information, including the fact that the component agencies' priorities for action and resources are reflected in the existing budget process, and that they had nothing further to recommend or request through this report. However, budget documents do not reflect the resources needed over time to achieve control of the northern border.
The lack of this information makes it difficult for Congress to consider future actions and resources needed.

DHS is developing northern border strategic plans and a risk-management process to help guide and prioritize action and resources, and fully implementing recommendations from past GAO evaluations would also provide benefit in addressing northern border security vulnerabilities. DHS is currently developing strategic plans that are intended to provide overall direction in addressing vulnerabilities in northern border security. DHS is also developing a risk-management process to assist in prioritizing efforts and resources that will provide the greatest benefit to national security. DHS officials have said that the success of various pilot projects, such as DHS's testing of new technology, will likely change the level and mix of resources needed to protect the northern border.

In the meantime, DHS could take action to reduce vulnerabilities by implementing recommendations made in past evaluations. DHS has implemented 11 GAO recommendations designed to improve border security, but 39 recommendations have yet to be fully addressed. Eighteen of these open recommendations were made within the last year. However, 21 recommendations for improving use of air and marine assets, improving screening processes at the ports of entry, and deploying nuclear detection equipment--which DHS and other agencies generally agreed to take action to implement--have remained open for at least 1 year and, in some cases, over 3 years. GAO believes these outstanding recommendations continue to have merit and should be implemented.